Posts tagged neurons

Complex brain function depends on flexibility
Over the past few decades, neuroscientists have made much progress in mapping the brain by deciphering the functions of individual neurons that perform very specific tasks, such as recognizing the location or color of an object.
However, there are many neurons, especially in brain regions that perform sophisticated functions such as thinking and planning, that don’t fit into this pattern. Instead of responding exclusively to one stimulus or task, these neurons react in different ways to a wide variety of things. MIT neuroscientist Earl Miller first noticed these unusual activity patterns about 20 years ago, while recording the electrical activity of neurons in animals that were trained to perform complex tasks.
“We started noticing early on that there are a whole bunch of neurons in the prefrontal cortex that can’t be classified in the traditional way of one message per neuron,” recalls Miller, the Picower Professor of Neuroscience at MIT and a member of MIT’s Picower Institute for Learning and Memory.
In a paper appearing in Nature on May 19, Miller and colleagues at Columbia University report that these neurons are essential for complex cognitive tasks, such as learning new behavior. The Columbia team, led by the study’s senior author, Stefano Fusi, developed a computer model showing that without these neurons, the brain can learn only a handful of behavioral tasks.
“You need a significant proportion of these neurons,” says Fusi, an associate professor of neuroscience at Columbia. “That gives the brain a huge computational advantage.”
Lead author of the paper is Mattia Rigotti, a former grad student in Fusi’s lab.
Multitasking neurons
Miller and other neuroscientists who first identified this neuronal activity observed that while the patterns were difficult to predict, they were not random. “In the same context, the neurons always behave the same way. It’s just that they may convey one message in one task, and a totally different message in another task,” Miller says.
For example, a neuron might distinguish between colors during one task, but issue a motor command under different conditions.
Miller and colleagues proposed that this type of neuronal flexibility is key to cognitive flexibility, including the brain’s ability to learn so many new things on the fly. “You have a bunch of neurons that can be recruited for a whole bunch of different things, and what they do just changes depending on the task demands,” he says.
At first, that theory encountered resistance “because it runs against the traditional idea that you can figure out the clockwork of the brain by figuring out the one thing each neuron does,” Miller says.
For the new Nature study, Fusi and colleagues at Columbia created a computer model to determine more precisely what role these flexible neurons play in cognition, using experimental data gathered by Miller and his former grad student, Melissa Warden. That data came from one of the most complex tasks that Miller has ever trained a monkey to perform: The animals looked at a sequence of two pictures and had to remember the pictures and the order in which they appeared.
During this task, the flexible neurons, known as “mixed selectivity neurons,” exhibited a great deal of nonlinear activity — meaning that their responses to a combination of factors cannot be predicted based on their response to each individual factor (such as one image).
Expanding capacity
Fusi’s computer model revealed that these mixed selectivity neurons are critical to building a brain that can perform many complex tasks. When the computer model includes only neurons that perform one function, the brain can only learn very simple tasks. However, when the flexible neurons are added to the model, “everything becomes so much easier and you can create a neural system that can perform very complex tasks,” Fusi says.
The flexible neurons also greatly expand the brain’s capacity to perform tasks. In the computer model, neural networks without mixed selectivity neurons could learn about 100 tasks before running out of capacity; as mixed selectivity neurons were added, that capacity grew to tens of millions of tasks. When mixed selectivity neurons reached about 30 percent of the total, the network’s capacity became “virtually unlimited,” Miller says — just like a human brain.
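The computational advantage of mixed selectivity can be illustrated with a toy sketch (illustrative only, not the authors’ actual model): a task that requires combining two stimulus factors, like their XOR, cannot be learned by a linear readout of a “pure” population in which each neuron encodes one factor, but becomes learnable once a single nonlinearly mixed unit is added.

```python
# Toy sketch (not the Rigotti et al. model): a linear readout trained by the
# perceptron rule learns an XOR-like task only when the population includes
# a nonlinear mixed-selectivity unit.

def perceptron_fits(X, y, epochs=100):
    """Return True if the perceptron rule finds a linear readout that
    classifies every pattern correctly (i.e. the task is separable)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        errors = 0
        for x, t in zip(X, y):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if pred != t:
                errors += 1
                delta = t - pred  # +1 or -1
                w = [wi + delta * xi for wi, xi in zip(w, x)]
                b += delta
        if errors == 0:
            return True  # converged: a linear readout exists
    return False

# Two binary task factors (e.g. image identity and task rule);
# the required response is their XOR.
factors = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [a ^ b for a, b in factors]

# "Pure" population: one neuron per factor.
pure = factors
# Add one mixed-selectivity neuron whose response to the combination
# (a AND b) is not predictable from its responses to a or b alone.
mixed = [(a, b, a * b) for a, b in factors]

print("pure population solves XOR: ", perceptron_fits(pure, labels))
print("with mixed selectivity unit:", perceptron_fits(mixed, labels))
```

In the full model, many randomly mixed nonlinear units raise the dimensionality of the population response, making far more such task dichotomies linearly separable for downstream readouts.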
Mixed selectivity neurons are especially dominant in the prefrontal cortex, where most thought, learning and planning take place. This study demonstrates how these mixed selectivity neurons greatly increase the number of tasks that this kind of neural network can perform, says John Duncan, a professor of neuroscience at Cambridge University.
“Especially for higher-order regions, the data that have often been taken as a complicating nuisance may be critical in allowing the system actually to work,” says Duncan, who was not part of the research team.
Miller is now trying to figure out how the brain sorts through all of this activity to create coherent messages. There is some evidence suggesting that these neurons communicate with the correct targets by synchronizing their activity with oscillations of a particular brainwave frequency.
“The idea is that neurons can send different messages to different targets by virtue of which other neurons they are synchronized with,” Miller says. “It provides a way of essentially opening up these special channels of communications so the preferred message gets to the preferred neurons and doesn’t go to neurons that don’t need to hear it.”
Scientists identify molecular trigger for Alzheimer’s disease
Researchers have pinpointed a catalytic trigger for the onset of Alzheimer’s disease – when the fundamental structure of a protein molecule changes to cause a chain reaction that leads to the death of neurons in the brain.
For the first time, scientists at Cambridge’s Department of Chemistry, led by Dr Tuomas Knowles, Professor Michele Vendruscolo and Professor Chris Dobson, working with Professor Sara Linse and colleagues at Lund University in Sweden, have been able to map in detail the pathway that generates the “aberrant” forms of proteins at the root of neurodegenerative conditions such as Alzheimer’s.
They believe the breakthrough is a vital step toward earlier diagnosis of neurological disorders such as Alzheimer’s and Parkinson’s, and that it opens up possibilities for a new generation of targeted drugs, as the scientists say they have uncovered the earliest stages of the development of Alzheimer’s that drugs could possibly target.
The study, published today in the Proceedings of the National Academy of Sciences (PNAS), is a milestone in the long-term research established in Cambridge by Professor Christopher Dobson and his colleagues, following Dobson’s realisation, over 15 years ago, of the underlying nature of protein ‘misfolding’ and its connection with disease.
The research is likely to have a central role to play in diagnostic and drug development for dementia-related diseases, which are increasingly prevalent and damaging as populations live longer.
In 2010, Alzheimer’s Research UK showed that dementia costs the UK economy over £23 billion, more than cancer and heart disease combined. Just last week, Prime Minister David Cameron urged scientists and clinicians to work together to “improve treatments and find scientific breakthroughs” to address “one of the biggest social and healthcare challenges we face.”
The neurodegenerative process giving rise to diseases such as Alzheimer’s is triggered when the normal structures of protein molecules within cells become corrupted.
Protein molecules are made in cellular ‘assembly lines’ that join together chemical building blocks called amino acids in an order encoded in our DNA. New proteins emerge as long, thin chains that normally need to be folded into compact and intricate structures to carry out their biological function.
Under some conditions, however, proteins can ‘misfold’ and snag surrounding normal proteins, which then tangle and stick together in clumps. These clumps build into masses of malfunctioning molecules, frequently millions strong, that shape themselves into unwieldy protein tendrils.
The abnormal tendril structures, called ‘amyloid fibrils’, grow outwards from the focal point where the ‘nucleation’ of these abnormal species occurs.
Amyloid fibrils can form the foundations of huge protein deposits, or plaques, long seen in the brains of Alzheimer’s sufferers and once believed to be the cause of the disease, before the discovery of ‘toxic oligomers’ by Dobson and others a decade or so ago.
A plaque’s size and density render it insoluble and consequently unable to move. The oligomers that give rise to Alzheimer’s disease, by contrast, are small enough to spread easily around the brain, killing neurons and interacting harmfully with other molecules; how they were formed was, until now, a mystery.
The new work, in large part carried out by researcher Samuel Cohen, shows that once a small but critical level of malfunctioning protein ‘clumps’ has formed, a runaway chain reaction is triggered that exponentially multiplies the number of these protein composites, activating new focal points through ‘nucleation’.
It is this secondary nucleation process that forges juvenile tendrils, initially consisting of clusters that contain just a few protein molecules. Small and highly diffusible, these are the ‘toxic oligomers’ that careen dangerously around the brain cells, killing neurons and ultimately causing loss of memory and other symptoms of dementia.
“There are no disease modifying therapies for Alzheimer’s and dementia at the moment, only limited treatment for symptoms. We have to solve what happens at the molecular level before we can progress and have real impact,” said Dr Tuomas Knowles from Cambridge’s Department of Chemistry, lead author of the study and long-time collaborator of Professor Dobson and Professor Michele Vendruscolo.
“We’ve now established the pathway that shows how the toxic species that cause cell death, the oligomers, are formed. This is the key pathway to detect, target and intervene – the molecular catalyst that underlies the pathology.”
The researchers combined kinetic experiments with a theoretical framework based on master equations, tools commonly used in other areas of chemistry and physics but which had not previously been exploited to their full potential in the study of protein malfunction.
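The flavor of such an analysis can be conveyed with a minimal two-moment sketch (the rate constants and reaction orders below are illustrative placeholders, not the paper’s fitted values): primary nucleation and monomer-dependent secondary nucleation create new fibril ends, and elongation converts free monomer into fibril mass, producing a lag phase followed by runaway growth.

```python
# Minimal sketch of moment equations for aggregation with secondary
# nucleation (illustrative constants, arbitrary units):
#   dP/dt = kn*m^nc + k2*m^n2*M   (primary + monomer-dependent secondary nucleation)
#   dM/dt = 2*kplus*m*P           (elongation at both fibril ends)
# P = fibril number concentration, M = fibril mass, m = free monomer.

def simulate(m0=1.0, kn=1e-4, k2=1e-2, kplus=10.0, nc=2, n2=2,
             dt=0.01, t_end=50.0):
    m, P, M = m0, 0.0, 0.0
    trajectory = []
    for _ in range(int(t_end / dt)):          # forward-Euler integration
        dP = kn * m**nc + k2 * m**n2 * M
        dM = 2.0 * kplus * m * P
        P += dt * dP
        M += dt * dM
        m -= dt * dM                          # elongation consumes monomer
        trajectory.append((m, M))
    return trajectory

traj = simulate()
for i in range(0, len(traj), 1000):
    m, M = traj[i]
    print(f"t={i * 0.01:5.1f}  monomer={m:.3f}  fibril mass={M:.3f}")
```

The secondary-nucleation term k2*m^n2*M is the feedback loop the article describes: existing fibril mass catalyzes the creation of new growing species, which is what turns a slow start into an exponential chain reaction.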
The latest research follows hard on the heels of another groundbreaking study, published in April of this year, again in PNAS, in which the Cambridge group, in collaboration with colleagues in London and at MIT, worked out the first atomic structure of one of the damaging amyloid fibril protein tendrils. They say the years spent developing research techniques are now paying off, and that they are starting to solve “some of the key mysteries” of these neurodegenerative diseases.
“We are essentially using physical and chemical methods to address a biomolecular problem, mapping out the networks of processes and dominant mechanisms to ‘recreate the crime scene’ at the molecular root of Alzheimer’s disease,” explained Knowles.
“Increasingly, the use of quantitative experimental tools and rigorous theoretical analysis to understand complex biological processes is leading to exciting and game-changing results. With a disease like Alzheimer’s, you have to intervene in a highly specific manner to prevent the formation of the toxic agents. Now we’ve found how the oligomers are created, we know what process we need to turn off.”
The brain has traditionally been viewed as a deterministic machine in which certain inputs give rise to certain outputs. However, a growing body of work suggests this is not the case. The brain’s strong sensitivity to initial conditions suggests it may operate in the realm of chaos, with small changes in initial inputs giving rise to strange attractors. This may also be reflected in the physical structure of the brain, which may itself be fractal. EEG data is a good place to look for the underlying patterns of chaos in the brain, since it samples many millions of neurons simultaneously. Several studies have arrived at a fractal dimension of between 5 and 8 for human EEG data. This suggests that the brain operates in a higher dimension than the 4 of traditional space-time. These extra dimensions suggest that quantum gravity may play a role in generating consciousness.
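One commonly used, easily computed index of EEG complexity is the fractal dimension of the waveform itself; Higuchi’s method is a standard estimator, sketched below. Note that this waveform measure lies between 1 and 2 by construction; dimension estimates in the 5–8 range come instead from attractor-reconstruction measures such as the correlation dimension. The signals below are synthetic stand-ins, not real EEG.

```python
import math
import random

def higuchi_fd(x, kmax=8):
    """Estimate the fractal dimension of a sampled waveform by
    Higuchi's method: the mean curve length L(k) at scale k obeys
    L(k) ~ k^(-D), so D is the slope of log L(k) vs. log(1/k)."""
    N = len(x)
    log_invk, logL = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                       # k interleaved sub-series
            n = (N - m - 1) // k
            if n < 1:
                continue
            L = sum(abs(x[m + i * k] - x[m + (i - 1) * k])
                    for i in range(1, n + 1))
            lengths.append(L * (N - 1) / (n * k * k))  # Higuchi normalization
        log_invk.append(math.log(1.0 / k))
        logL.append(math.log(sum(lengths) / len(lengths)))
    # least-squares slope of log L against log(1/k)
    p = len(log_invk)
    mx, my = sum(log_invk) / p, sum(logL) / p
    return (sum((a - mx) * (b - my) for a, b in zip(log_invk, logL))
            / sum((a - mx) ** 2 for a in log_invk))

random.seed(0)
line = [0.01 * i for i in range(2000)]                 # smooth signal: D ~ 1
noise = [random.gauss(0.0, 1.0) for _ in range(2000)]  # white noise:  D ~ 2
print("line :", round(higuchi_fd(line), 3))
print("noise:", round(higuchi_fd(noise), 3))
```

Real EEG typically falls between these two extremes, and tracking where it falls over time is one way such studies quantify changes in brain dynamics.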
Temporal Processing in the Olfactory System: Can We See a Smell?
Sensory processing circuits in the visual and olfactory systems receive input from complex, rapidly changing environments. Although patterns of light and plumes of odor create different distributions of activity in the retina and olfactory bulb, both structures use what appear, on the surface, to be similar temporal coding strategies to convey information to higher areas of the brain. We compare temporal coding in the early stages of the olfactory and visual systems, highlighting recent progress in understanding the role of time in olfactory coding during active sensing by behaving animals. We also examine studies that address the divergent circuit mechanisms that generate temporal codes in the two systems, and find that they provide physiological information directly related to functional questions raised by the neuroanatomical studies of Ramón y Cajal over a century ago. Consideration of the differences in neural activity across sensory systems contributes to generating new approaches to understanding signal processing.
The breakthrough technique that allowed scientists to obtain one-of-a-kind, colorful images of the myriad connections in the brain and nervous system is about to get a significant upgrade.
A group of Harvard researchers, led by Joshua Sanes, the Jeff C. Tarr Professor of Molecular and Cellular Biology and Paul J. Finnegan Family Director, Center for Brain Science, and Jeff Lichtman, the Jeremy R. Knowles Professor of Molecular and Cellular Biology and Santiago Ramón y Cajal Professor of Arts and Sciences, has made a host of technical improvements in the “Brainbow” imaging technique. Their work is described in a May 5 paper in Nature Methods.
First described in 2007, the system combines three fluorescent proteins — one red, one blue, and one green — to label different cells with as many as 90 colors. By studying the resulting images, researchers were able to begin to understand how the millions of neurons in the brain are connected.
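The size of the palette follows from simple combinatorics. As an illustrative sketch (the copy numbers here are hypothetical; actual transgene copy numbers vary between animals): if each of n tandem copies of the construct independently and permanently recombines to express one of the three fluorophores, a cell’s hue is set by its red:green:blue ratio, and the number of distinguishable ratios is the number of multisets of size n drawn from three colors.

```python
# Illustrative combinatorics (copy numbers are hypothetical): a cell carrying
# `copies` independent Brainbow cassettes, each locked onto one of
# `fluorophores` colors, is colored by its ratio of fluorophore counts.
# The number of possible ratios is the multiset ("stars and bars") count
# C(copies + fluorophores - 1, fluorophores - 1).
from math import comb

def distinct_hues(copies, fluorophores=3):
    return comb(copies + fluorophores - 1, fluorophores - 1)

for n in (1, 4, 8, 12):
    print(f"{n:2d} copies -> {distinct_hues(n)} possible hues")
```

With a dozen or so copies this lands near the roughly 90 colors the article cites, though in practice not every ratio is spectrally distinguishable under the microscope.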
“‘Brainbow’ generated beautiful images of a kind we had never been able to obtain before, but it was difficult in some ways,” said Sanes, who also serves as director of the Center for Brain Science.
“These modifications aim to overcome some of the more problematic features of the original genetic constructs,” Lichtman said. “Lead author Dawen Cai, a research associate in our labs, worked hard and creatively to find ways to make the ‘Brainbow’ colors brighter, more variable, and useable in situations where the original gene constructs were hard to implement. Our first look at these animals suggests that these improvements are fantastic.”
Among the challenges faced by researchers using the original method, Sanes said, was the chance that certain colored proteins would bleach out faster than others.
“If one color bleaches faster than the others, you start with a ‘Brainbow,’ but by the time you’re done imaging, you might just have a ‘blue-bow,’ because the red and yellow bleach too fast,” he said.
Sanes said that some colors also were too dim, causing problems in the imaging process, while in other cases the protein didn’t fill the whole neuron evenly enough, or there was an overabundance of a certain color in an image.
“What we decided to do was to make the next generation of ‘Brainbow,’” Sanes said. “We systematically set out to look at these problems. We looked at a whole range of fluorescent proteins to find the ones that were brightest and wouldn’t bleach as much, and we developed new transgenic methods to avoid the predominance of a particular color.”
The researchers also explored new ways to create “Brainbow” images, including using viruses to introduce fluorescent proteins into cells.
The advantage of the new technique, Sanes said, is it offers researchers the chance to target certain parts of the brain and better understand how neurons radiate out to connect with other brain regions. Ultimately, he said, he hopes that other researchers are able to apply the techniques outlined in the paper in the same way that they expanded on the first “Brainbow” method.
“People adapted the method to study a number of interesting questions in other tissues to examine cellular relationships and cell lineages in kidney and skin cells,” he said. “It was also used to examine the nervous system in animals like zebrafish and C. elegans. With these new tools, I think we’ve taken the next step.”
Physicist’s tool has potential for brain mapping
A new tool being developed by an assistant professor of physics at UT Arlington could help scientists map and track the interactions between neurons in different areas of the brain.
The journal Optics Letters recently published a paper by Samarendra Mohanty on the development of a fiber-optic, two-photon, optogenetic stimulator and its use on human cells in a laboratory. The tiny tool builds on Mohanty’s previous discovery that near-infrared light can be used to stimulate a light-sensitive protein introduced into living cells and neurons in the brain. This new method could show how different parts of the brain react when a linked area is stimulated.
The technology would be useful in the BRAIN mapping initiative recently championed by President Barack Obama, Mohanty said. BRAIN stands for Brain Research Through Advancing Innovative Neurotechnologies and will include $100 million in government investments in research.
“Scientists have spent a lot of time looking at the physical connections between different regions of the brain. But that information is not sufficient unless we examine how those connections function,” Mohanty said. “That’s where two-photon optogenetics comes into play. This is a tool not only to control the neuronal activity but to understand how the brain works.”
The two-photon optogenetic stimulation described in the Optics Letters paper involves introducing the gene for ChR2, a protein that responds to light, into a sample of excitable cells. A fiber-optic infrared beam can then be used to precisely excite the neurons in a tissue circuit.
In the brain, researchers would then observe responses in the excited area as well as other parts of the neural circuit. In living subjects, scientists would also observe the behavioral outcome, Mohanty said.
Optogenetic stimulation avoids damage to living tissue by using light to stimulate neurons instead of electric pulses used in past research. Mohanty’s method of using low-energy near-infrared light also enables more precision and a deeper focus than the blue or green light beams often used in optogenetic stimulation, the paper said.
Using fiber optics to deliver the two-photon optogenetic beam is another advance. Previous methods required bulky microscopes or complex scanning beams. Mohanty’s group is collaborating with UT Arlington Department of Psychology assistant professor Linda Perrotti to apply this technology in living animals.
“Dr. Mohanty’s innovations continue to be recognized because of the great potential they hold,” said Pamela Jansma, dean of the UT Arlington College of Science. “Hopefully, his work will one day provide researchers in other fields the tools they need to examine how the human body works and why normal processes sometimes fail.”
Serotonin Mediates Exercise-Induced Generation of New Neurons
Mice that exercise in running wheels exhibit increased neurogenesis in the brain. Crucial to this process is serotonin signaling. These are the findings of a study by Dr. Friederike Klempin, Daniel Beis and Dr. Natalia Alenina from the research group led by Professor Michael Bader at the Max Delbrück Center (MDC) Berlin-Buch. Surprisingly, mice lacking brain serotonin due to a genetic mutation exhibited normal baseline neurogenesis. However, in these serotonin-deficient mice, activity-induced proliferation was impaired, and wheel running did not induce increased generation of new neurons. (Journal of Neuroscience)
Scientists have known for some time that exercise induces neurogenesis in a specific brain region, the hippocampus. However, until this study, the underlying mechanism was not fully understood. The hippocampus plays an important role in learning and in memory and is one of the brain regions where new neurons are generated throughout life.
Serotonin facilitates precursor cell maturation
The researchers demonstrated that mice with the ability to produce serotonin are likely to release more of this hormone during exercise, which in turn increases the proliferation of precursor cells in the hippocampus. Furthermore, serotonin seems to facilitate the transition of stem cells to progenitor cells that become neurons in the adult mouse brain.
For Dr. Klempin and Dr. Alenina it was surprising that normal baseline neurogenesis occurs in mice that, due to a genetic mutation, cannot produce serotonin in the brain. However, they noted that some of the stem cells in serotonin-deficient mice either die or fail to become neurons.
Yet, these animals seem to have a mechanism that allows compensation for the deficit, in that progenitor cells, an intermediate stage in the development from a stem cell to a neuron, divide more frequently. According to the researchers, this is to maintain the pool of these cells.
However, the group of wheel-running mice that do not produce serotonin did not exhibit an exercise-induced increase in neurogenesis. The compensatory mechanism failed following running. The researchers concluded: “Serotonin is not necessarily required for baseline generation of new neurons in the adult brain, but is essential for exercise-induced hippocampal neurogenesis.”
Hope for new approaches to treat depression and memory loss in the elderly
Deficiency in serotonin, popularly known as the “molecule of happiness”, has been considered in the context of theories linking major depression to declining neurogenesis in the adult brain. “Our findings could potentially help to develop new approaches to prevent and treat depression as well as age-related decline in learning and memory,” said Dr. Klempin and Dr. Alenina.

If you can’t beat them, join them: Grandmother cells revisited
In the absence of any real progress in defining neuronal codes for the brain, the simple idea of the grandmother cell continues to percolate through the scientific and popular literature. Many researchers have reported marked increases in the firing rate of otherwise quiet or idling neurons in response to very specific stimuli, such as a picture of grandma. If these experiments are taken at face value, we must accept that grandmother cells, at least in some form, exist. Last December, Asim Roy from Arizona State revived discussion of this topic with a paper in Frontiers in Cognitive Science. He has just released a follow-up paper in the same journal in which he seeks to further extend the idea of the grandmother cell into a more general concept cell principle. A further implication of his paper is that such localist neurons should not be rare in the brain, but rather a commonly found feature.
The concept cell derives from an expanding body of research showing that some neurons respond not just to a constellation of stimulus features within a given sensory modality, but also to invariant ideas. For example, researchers have previously reported finding an “Oprah Winfrey” concept cell that could be excited not just by visual percepts of Oprah, but also by her written name, and even by the sound of her name. Roy’s new paper suggests that concept cells would have meaning by themselves, in contrast to neurons in a distributed model, which represent ideas only as a pattern of activity across a network.
The concept cell theory has been dismissed by many researchers, but it represents a valid extremum on the continuum of ways neuronal networks can be structured. As such, a theory like this needs to be disproven rather than ignored. Better than being disproven, a more detailed theory would be welcome. One possible interpretation that reconciles concept cells with distributed network models is to simply have distributed networks of concept cells. When fishing down through the cortex along any given electrode penetration path, it is quite possible that many quiescent concept cells lie all around that, for whatever reason, are not activated at that moment or are otherwise hidden from the experimenter. Interpreting cells participating in a distributed network as concept cells might just reflect a lack of sufficient sampling of the relevant network. In that case, the larger reality would be that both viewpoints are simply two different interpretations of the same underlying phenomenon.
To get around objections that the idea space is practically infinite while the number of cells that might represent it is finite, Roy notes that concept cells need not be limited to a single concept. At this point, it might be productive to proceed by imagining how concept cells might emerge in a network. For example, would a baby already have grandmother cells? Most would probably argue they don’t. A newborn has never seen its grandmother, and although he or she may have some built-in structural hierarchy, that hierarchy has yet to be flashed with very many unique or salient icons. It therefore might be reasonable to assume neurons start out in some kind of distributed mode, but represent little other than perhaps what they experienced in the womb.
When young kids first take up little league baseball or soccer, they generally attempt (at least in the beginning) to maximize their fun such that everyone in the field goes after every ball no matter where it is hit or kicked. Similarly in the newly hatched brain, neurons may quickly learn that spiking at every perturbation that comes their way becomes exhausting. Furthermore, it seems that making synaptic partners indiscriminately must in some way be disadvantageous to the neuron. Competitive mechanisms appear to be in place that link neuron activity and growth to rewards at the molecular level that are as yet not fully defined. Such neural Darwinism might simply be the struggle for access to nutrients from the vasculature, like glucose and oxygen, and for disposal of metabolites, like transmitter byproducts. These processes might be enhanced by making the right synaptic partners residing on coveted real estate, and by spiking most often at the right time and to greatest effect.
As the young athletes learn to adopt more predictive strategies of play, their movements are directed to where the ball is going to be rather than where it is at any given moment. In the extreme, this imperative crystallizes the field into variously named positions with uniquely defined roles and skill sets. Similarly in the brain, the emergence of concept cells could develop over time as a fundamental byproduct of the need to adopt the most energy efficient representations of sensory inputs that map to motor outputs. Included in these sensorimotor hand-offs would be inputs from the body itself, and other expressive or physiologic outputs constrained by the structure of the organism. There are no immediate indications that these transitional representatives in the brain need correspond to real concepts built upon possible activities that can occur in the environment, but there is also no reason why that cannot be the case.
Within the human medial temporal lobe (MTL), up to 40% of the neurons found in some studies have been classified as concept cells. The classification criteria and activity patterns recorded there warrant closer inspection before sweeping conclusions are drawn, but some immediate observations can be made. For example, the maximum activation reported was a 300-fold increase in spike rate. The background spike rate of a cortical neuron tends to be low, perhaps approaching zero in many cases, so a better indicator might be an absolute maximum spike rate. We might simply assume a spontaneous background rate of 1 Hz for such a cell, and 300 Hz for its instantaneous response to an optimal stimulus. We can also ask the following theoretical question: under what conditions does it make sense, from an energetic perspective, for cells within a given network to respond at these relatively fantastic rates to certain rare concepts, while for most others not at all?
Part of the answer may depend on how hard it is for cells to fire at incrementally faster rates, and also on how numerous and far away their targets are. Another important consideration is whether cells can afford to fire at elevated rates on a continued basis without incurring significant damage to themselves. One can even speculate whether there might exist optimal frequencies at which resonant flow of ions, or overlap of electrical and pressure pulse waves, affords more efficient spiking when high spike rates are called for. In contrast to the cortex, the retinal ganglion cells which comprise the optic nerve tend to fire continuously at relatively high spontaneous rates. Excitatory inputs to retinal ganglion cells increase their firing rate, while inhibitory inputs depress it.
Having a high spontaneous rate gives maximal flexibility and sensitivity for the retina, which is one place where energy expenditure is probably not the major decision point. Another way to look at these cells is that since they cannot fire negative spikes, they can effectively double their bandwidth by adopting an elevated spontaneous rate in the absence of a stimulus. It is a strategy similar to one often used in electronics for analog-to-digital signal conversion, where bipolar signal sources may not be readily available, and for small-signal amplification in situations where rail-to-rail power supplies may otherwise be inconvenient.
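The offset strategy can be made concrete with a small sketch (all the numbers are illustrative): a non-negative “rate channel” carries a bipolar signal by sitting at a spontaneous baseline, so inhibition is reported as a dip below baseline rather than being lost at a floor of zero.

```python
# Illustrative offset encoding (numbers hypothetical): a spike rate cannot
# go negative, so a bipolar signal in [-1, 1] rides on a spontaneous
# baseline, exactly as a unipolar ADC digitizes a bipolar voltage riding
# on a DC offset.

def encode(signal, baseline=50.0, gain=50.0, max_rate=100.0):
    """Map a signal in [-1, 1] to a firing rate in [0, max_rate] Hz."""
    rate = baseline + gain * signal
    return min(max(rate, 0.0), max_rate)   # rates clip at 0 and at saturation

def decode(rate, baseline=50.0, gain=50.0):
    """Recover the signal from an unclipped rate."""
    return (rate - baseline) / gain

print(encode(0.0))    # rest: the spontaneous baseline
print(encode(0.5))    # excitation: above baseline
print(encode(-0.5))   # inhibition: below baseline, but still a positive rate
```

A zero-baseline channel would clip every inhibitory value to 0 Hz; the baseline trades some dynamic range for the ability to signal in both directions.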
In reality, the spontaneous rate of retinal ganglion cells is probably not fully one-half of their maximal rate, but considerably less. A key feature of an adaptive system like this is the built-in ability to adjust the spontaneous rate across the network according to attention, arousal, and stimulus conditions. This optimizes sensitivity under the dual constraints of the energy available and the need to eliminate toxic byproducts of using that energy. Whether a neuron can run itself to death by exhaustion, as a racehorse occasionally might, or whether natural feedback mechanisms would generally prevent this, is unknown. At some point in going inward from the sensory level to the higher cortical areas of the brain, information flow (at least from the retina) transitions to a sparser, lower-spontaneous-rate environment. At what level, or time, concept cells might begin to appear is only beginning to be unraveled.
Much of the brain can be viewed hierarchically, but there is almost always significant feedback at, across, and among levels. In proceeding hierarchically from sensory to association areas, there seems to be significant convergence from temporal lobe association areas to the hippocampus. The output of the hippocampus then converges, along with other significant pathways from the brain and brainstem, on to particular regions of the interconnected hypothalamus. Ultimately this convergence culminates at specific cells in certain nuclei that convert the electrical currency of the brain into dollops of potent chemical secretions which are active at nanomolar concentrations in the blood.
In the extreme, we could imagine the ultimate concept cells as those few kingpins in certain hypothalamic nuclei controlling things like growth hormone or sex steroid release. These electoral cells spritz appropriately, according both to their many far-flung advisors and to local consensus, to control the time and magnitude of each release. Similarly, in the deep layers of the motor cortex, the large Betz cells appear to make disproportionately large contributions to motor commands sent to the spinal cord.
Finding these variously incarnated kingpin cells is a major goal in building successful brain-computer interfaces (BCIs), particularly when the number of electrodes is limited. Generally, one does not want to risk stimulating these cells to death, or approaching them too closely when trying to hear what they might say. Increasingly, in human experiments, the methods section of the eventual published paper includes statements like, “the subject was then told to focus their thoughts on the target (particular movement).” While that is no doubt a very powerful experimental technique, at this point in time it is also quite vague. Fleshing out exactly what happens when we “focus our thoughts” is perhaps one of the most important research questions of our day.
Research Reveals Possible Reason for Cholesterol-Drug Side Effects
The U.S. Food and Drug Administration and physicians continue to document that some patients experience fuzzy thinking and memory loss while taking statins, a class of cholesterol-lowering drugs that are among the world’s top sellers.
A University of Arizona research team has made a novel discovery in brain cells being treated with statin drugs: unusual swellings within neurons, which the team has termed the “beads-on-a-string” effect.
The team is not entirely sure why the beads form, said UA neuroscientist Linda L. Restifo, who leads the investigation. However, the team believes that further investigation of the beads will help inform why some people experience cognitive declines while taking statins.
"What we think we’ve found is a laboratory demonstration of a problem in the neuron that is a more severe version for what is happening in some peoples’ brains when they take statins," said Restifo, a UA professor of neuroscience, neurology and cellular and molecular medicine, and principal investigator on the project.
The team’s study and findings recently were published in Disease Models & Mechanisms, a peer-reviewed journal. Robert Kraft, a former research associate in the department of neuroscience, is lead author on the article.
Restifo and Kraft cite clinical reports noting that statin users often are told by physicians that cognitive disturbances experienced while taking statins are likely due to aging or other effects. However, the UA team’s research offers additional evidence that such cognitive declines are likely an adverse response to statins.
The team also has found that removing statins results in a disappearance of the beads-on-a-string, and also a restoration of normal growth.
With research continuing, the UA team intends to investigate how genetics may be involved in the bead formation and, thus, could cause hypersensitivity to the drugs in people. Team members believe that genetic differences could involve neurons directly, or the statin interaction with the blood-brain barrier.
"This is a great first step on the road toward more personalized medication and therapy," said David M. Labiner, who heads the UA department of neurology. "If we can figure out a way to identify patients who will have certain side effects, we can improve therapeutic outcomes."
For now, the UA team has multiple external grants pending, and researchers carry the hope that future research will greatly inform the medical community and patients.
"If we are able to do genetic studies, the goal will be to come up with a predictive test so that a patient with high cholesterol could be tested first to determine whether they have a sensitivity to statins," Restifo said.
Detecting and Understanding a Drug’s Side Effects
Restifo used the analogy of traffic to explain what she and her colleagues theorize.
The beads indicate a sort of traffic jam, she described. In the presence of statins, neurons undergo a “dramatic change in their morphology,” said Restifo, also a BIO5 Institute member.
"Those very, very dramatic and obvious swellings are inside the neurons and act like a traffic pileup that is so bad that it disrupts the function of the neurons," she said.
It was Kraft’s observations that led to the team’s novel discovery.
Restifo, Kraft and their colleagues had long been investigating mutations in genes, largely for the benefit of advancing discoveries toward the improved treatment of autism and other cognitive disorders.
At the time, using a blind-screened library of 1,040 drug compounds, the team ran tests on fruit fly neurons, investigating the reduction of defects caused by a mutation when neurons were exposed to different drugs.
The team had shown that one mutation caused the neuron branches to be curly instead of straight, but certain drugs corrected this. The research findings were published in 2006 in the Journal of Neuroscience.
Then, something serendipitous occurred: Kraft observed that one compound, then another and then two more all created the same reaction – “these bulges, which we called ‘beads-on-a-string,’” Kraft said. “And they were the only drugs causing this effect.”
At the end of the earlier investigation, the team decoded the library and found that the four compounds that resulted in the beads-on-a-string were, in fact, statins.
"The ‘beads’ effect of the statins was like a bonus prize from the earlier experiment," Restifo said. "It was so striking, we couldn’t ignore it."
In addition to detecting the beads effect, the team came upon yet another major finding: when statins are removed, the beads-on-a-string effect disappears, offering great promise to those being treated with the drugs.
"For some patients, just as much as statins work to save their lives, they can cause impairments," said Monica Chaung, who has been part of the team and is a UA undergraduate researcher studying molecular and cellular biology and physiology.
"It’s not a one drug fits all," said Chaung, a UA junior who is also in the Honors College. "We suspect different gene mutations alter how people respond to statins."
Having been trained by Kraft in techniques to investigate cultured neurons, Chaung was testing gene mutations and found variation in sensitivity to statins. It was through the work of Chaung and Kraft that the team would later determine that, after the statins were removed, the cells were able to repair themselves; the neurotoxicity was not permanent, Restifo said.
"In the clinical literature, you can read reports on fuzzy thinking, which stops when a patient stops taking statins. So, that was a very important demonstration of a parallel between the clinical reports and the laboratory phenomena," Restifo said.
The finding led the team to further investigate the neurotoxicity of statins.
"There is no question that these are very important and very useful drugs," Restifo said. Statins have been shown to lower cholesterol and prevent heart attacks and strokes.
But too much remains unknown about how the drugs’ effects may contribute to muscular, cognitive and behavioral changes.
"We don’t know the implications of the beads, but we have a number of hypotheses to test," Restifo said, adding that further studies should reveal exactly what happens when the transportation system within neurons is disrupted.
Also, given the move toward prescribing statins to children, an expanded understanding of the effects of statins on cognitive development is critical, Kraft said.
"If statins have an effect on how the nervous system matures, that could be devastating," Kraft said. "Memory loss or any sort of disruption of your memory and cognition can have quite severe effects and negative consequences."
Restifo and her colleagues have multiple grants pending that would enable the team to continue investigating several facets related to the neurotoxicity of statins. Among the major questions is, to what extent does genetics contribute to a person’s sensitivity to statins?
"We have no idea who is at risk. That makes us think that we can use this genetic laboratory assay to infer which of the genes make people susceptible," Restifo said.
"This dramatic change in the morphology of the neurons is something we can now use to ask questions and experiment in the laboratory," she said. "Our contribution is to find a way to ask about genetics and what the genetic vulnerability factors are."
Possibilities for Future Research, and Advice
The team’s findings and future research could have important implications for the medical field and for patients with regard to treatment, communication and improved personalized medicine.
"It’s important to look into this to see if people may have some sort of predisposition to the beads effect, and that’s where we want to go with this research," Kraft said. "There must be more research into what effects these drugs have other than just controlling a person’s elevated cholesterol levels."
And even as additional research is ongoing, suggestions already exist for physicians, patients and families.
"Most physicians assume that if a patient doesn’t report side effects, there are no side effects," Labiner said.
"The paternalistic days of medication are hopefully behind us. They should be," Labiner said.
"We can treat lots of things, but the problem is if there are side effects that worsen the treatment, the patient is more likely to shy away from the medication. That’s a bad outcome," he said. "There’s got to be a give and take between the patient and physician."
Patients should feel empowered to ask questions, and deeper questions, about their health and treatment, and physicians should be very attentive to any reports of cognitive decline from patients on statins, she said.
For some, symptoms appear soon after starting statins; for others, they take time to develop. And the signs vary: people may begin losing track of dates, the time or their keys.
"These are not trivial things. This could have a significant impact on your daily life, your interpersonal relationships, your ability to hold a job," Restifo said.
"This is the part of the brain that allows us to think clearly, to plan, to hold onto memories," she said. "If people are concerned that they are having this problem, patients should ask their physicians."
Restifo said open and direct patient-physician communication is even more important for those on statins who have a family history of side effects from statins.
Also, physicians could work more closely with patients to investigate family history and determine a better dosage plan. Even placing additional questions on the family history questionnaire could be useful, she said.
"There is good clinical data that every-other-day dosing give you most of the benefits, and maybe even prevents some of the accumulation of things that result in side effects," Restifo said, suggesting that physicians should try and get a better longitudinal picture on how people react while on statins.
"Statins have been around now for long enough and are widely prescribed to so many people," she said. "But increased awareness could be very helpful."
When brain cells are overwhelmed by an influx of too many calcium ions, they shut down the channels through which these ions enter the cells. Until now, the “stop” signal mechanism that cells use to control this molecular traffic was unknown.
In the new issue of the journal Neuron, UC Davis Health System scientists report that they have identified the mechanism. Their findings are relevant to understanding the molecular causes of the disruption of brain functioning that occurs in stroke and other neurological disorders.
"Too much calcium influx clearly is part of the neuronal dysfunction in Alzheimer’s disease and causes the neuronal damage during and after a stroke. It also contributes to chronic pain," said Johannes W. Hell, professor of pharmacology at UC Davis. Hell headed the research team that identified the mechanism that stops the flow of calcium molecules, which are also called ions, into the specialized brain cells known as neurons.
Hell explained that each day millions of calcium ions enter and exit each of the 100 billion neurons of the human brain. These calcium ions move in and out of neurons through pore-like structures, known as channels, that are located in the outer surface, or “skin,” of each cell.
The flow of calcium ions into brain cells generates the electrical impulses needed to stimulate such actions as the movement of muscles in our legs and the creation of new memories in the brain. The movement of calcium ions also plays a role in gene expression and affects the flexibility of the structures, called synapses, that are located between neurons and transmit electrical or chemical signals of various strengths from one cell to a second cell.
Neurons employ an unexpected and highly complex mechanism to downregulate, or reduce, the activity of channels that are permitting too many calcium ions to enter, Hell and his colleagues discovered. The mechanism, which leads to the elimination of the overly permissive ion channel, employs two proteins: α-actinin and the calcium-binding messenger protein calmodulin.
Located on the neuron’s outer surface, referred to as the plasma membrane, α-actinin stabilizes the type of ion channels that constitute a major source of calcium ion influx into brain cells, Hell explained. This protein is a component of the cytoskeleton, the scaffolding of cells. The ion channels that are a major source of calcium ions are referred to as Cav1.2 (L-type voltage-dependent calcium channels).
The researchers also found that the calcium-binding messenger protein calmodulin, which is the cell’s main sensor for calcium ions, induces internalization, or endocytosis, of Cav1.2 to remove this channel from the cell surface, thus providing an important negative feedback mechanism for excessive calcium ion influx into a neuron, Hell explained.
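The feedback logic described above can be sketched as a toy simulation (our illustration, not the researchers’ model; every rate, threshold, and constant here is invented): when calcium stays above a set point, a fraction of surface channels is removed, which in turn reduces further influx until calcium settles back below the set point.

```python
# Toy negative-feedback loop, loosely analogous to calmodulin-driven
# endocytosis of Cav1.2: when intracellular calcium exceeds a set
# point, a fraction of surface channels is internalized, reducing
# further influx. All parameter values are invented for illustration.

def simulate(steps=200, channels=100.0, ca=0.0,
             influx_per_channel=0.02, clearance=0.5,
             threshold=1.0, endocytosis_fraction=0.1):
    for _ in range(steps):
        ca += channels * influx_per_channel       # influx through surface channels
        ca *= (1.0 - clearance)                   # pumps and buffers clear calcium
        if ca > threshold:                        # overload sensed (calmodulin's role)
            channels *= (1.0 - endocytosis_fraction)  # channels internalized
    return channels, ca

channels, ca = simulate()
# The surface-channel count shrinks only while calcium exceeds the
# set point, then stabilizes once steady-state calcium falls below it.
```

The design point the sketch illustrates is that the cell does not need to gate individual channels precisely; removing channels from the surface is a slower but robust way to cap total influx.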
The discovery that α-actinin and calmodulin play a role in controlling calcium ion influx expands upon Hell’s previous research on the molecular mechanisms that regulate the activity of various ion channels at the synapse.
One previous study proved relevant to understanding the biological mechanisms that underlie the body’s fight-or-flight response during stress.
In work published in the journal Science in 2001, Hell and colleagues reported that the regulation of Cav1.2 by adrenergic signaling during stress is performed by one of the adrenergic receptors (beta 2 adrenergic receptor) directly linked to Cav1.2.
"This protein-protein interaction ensures that the adrenergic regulation is fast, efficient and precisely targets this channel," Hell said.
"We showed that Cav1.2 is regulated by adrenergic signaling on a time scale of a few seconds, and this is mainly increasing its activity when needed, for example during danger, to make our brain work faster and better. The same channel is in the heart, where adrenergic stimulation increases channel/Ca influx activity, increasing the pacing and strength of our heart beat to meet the increased physical demands during danger."
(Source: universityofcalifornia.edu)