Posts tagged neurons

Creating induced pluripotent stem (iPS) cells allows researchers to establish “disease in a dish” models of conditions ranging from Alzheimer’s disease to diabetes. Scientists at Yerkes National Primate Research Center, Emory University have now applied the technology to a model of Huntington’s disease (HD) in transgenic nonhuman primates, allowing them to conveniently assess the efficacy of potential therapies on neuronal cells in the laboratory.

(Image caption: Neural progenitor cells derived from transgenic rhesus macaque iPS cells show features of Huntington’s disease pathology, making them a useful tool for therapeutic discovery.)
The results were published this week in Stem Cell Reports.
"A highlight of our model is that our progenitor cells and neurons developed cellular features of HD such as intranuclear inclusions of mutant Huntingtin protein, which most of the currently available cell models do not present," says senior author Anthony Chan, PhD, DVM, associate professor of human genetics at Emory University School of Medicine and Yerkes National Primate Research Center. "We could use these features as a readout for therapy using drugs or a genetic manipulation."
Chan and his colleagues were the first in the world to establish a transgenic nonhuman primate model of HD. HD is an inherited neurodegenerative disorder that leads to the appearance of uncontrolled movements and cognitive impairments, usually in adulthood. It is caused by a mutation that introduces an expanded region where one amino acid (glutamine) is repeated dozens of times in the huntingtin protein.
The nonhuman primate model carries extra copies of the huntingtin gene that contain the expanded glutamine repeats. In this model, motor and cognitive deficits appear more quickly than in most human cases of Huntington’s disease, becoming noticeable within the first two years of the monkeys’ development.
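At the DNA level, the glutamine expansion corresponds to a run of CAG codons; in the protein, it appears as an uninterrupted stretch of glutamine (Q) residues. As a rough illustration of how such a tract can be measured, the sketch below counts the longest glutamine run in a sequence; the fragments and cutoffs are simplified placeholders, not the actual transgene:

```python
def longest_glutamine_run(protein: str) -> int:
    """Length of the longest uninterrupted run of glutamine (Q) residues."""
    best = run = 0
    for residue in protein:
        run = run + 1 if residue == "Q" else 0
        best = max(best, run)
    return best

# Hypothetical fragments standing in for the huntingtin polyglutamine tract;
# tracts of roughly 6-35 glutamines are typical, while expansions beyond
# about 40 are associated with Huntington's disease.
normal_fragment = "MATLEK" + "Q" * 23 + "PPPP"
expanded_fragment = "MATLEK" + "Q" * 72 + "PPPP"

print(longest_glutamine_run(normal_fragment))    # 23
print(longest_glutamine_run(expanded_fragment))  # 72
```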
First author Richard Carter, PhD, a graduate of Emory’s Genetics and Molecular Biology doctoral program, and his colleagues created iPS cells from the transgenic monkeys by reprogramming cells derived from the skin or dental pulp. This technique uses retroviruses to introduce reprogramming factors into somatic cells and induces a fraction of them to become pluripotent stem cells. Pluripotent stem cells are able to differentiate into any type of cell in the body, under the right conditions.
Carter and colleagues induced the iPS cells to become neural progenitor cells and then differentiated them into neurons. The iPS-derived neural cells developed intracellular and intranuclear aggregates of the mutant huntingtin protein, a classic sign of Huntington’s pathology, as well as an increased sensitivity to oxidative stress.
The sensitivity to oxidative stress was a useful indicator; it could be ameliorated in cell culture, either by an RNA-based gene knockdown approach or by the drug memantine, which is currently being investigated for Huntington’s disease in a human clinical trial.
"We tested two known experimental interventions, but our findings are a proof of principle that this system could be a valuable tool for the discovery and evaluation of other therapies," Chan says.
(Source: news.emory.edu)

The Yin and Yang of Overcoming Cocaine Addiction
Yaoying Ma says that biology, by nature, has a yin and a yang—a push and a pull.
Addiction, particularly relapse, she finds, is no exception.
Ma is a research associate in the lab of Yan Dong, assistant professor of neuroscience in the University of Pittsburgh’s Kenneth P. Dietrich School of Arts and Sciences. She is the lead author of a paper published online today in the journal Neuron that posits that it may be possible to ramp up an intrinsic anti-addiction response as a means to fight cocaine relapse and keep it at bay.
This paper is the first to establish the existence of a brain circuitry that resists a relapse of cocaine use through a naturally occurring neural remodeling with “silent synapses.”
The work follows up on a recent study by Dong and his colleagues, published in Nature Neuroscience last November. The team used rat models to examine the effects of cocaine self-administration and withdrawal on nerve cells in the nucleus accumbens, a small brain region commonly associated with reward, emotion, motivation, and addiction. Specifically, they investigated the roles of synapses, the junctions through which nerve cells relay signals to one another.
The team reported in its Nature Neuroscience study that when a rat uses cocaine, some immature synapses are generated; these are called “silent synapses” because they are semifunctional and send few signals under normal physiological conditions. After the rat stops using cocaine, these silent synapses go through a maturation phase and acquire their full signaling function. Once mature, the synapses send craving signals for cocaine when the rat is exposed to cues previously associated with the drug.
The current Neuron paper shows that there’s another side of “silent synapse” remodeling. Silent synapses that are generated in a specific cortical projection to the nucleus accumbens by cocaine exposure become “unsilenced” after cocaine withdrawal, resulting in a profound remodeling of this cortical projection. Additional experiments show that silent synapse-based remodeling of this cortical projection decreases cocaine craving. Importantly, this anti-relapse circuitry remodeling is induced by cocaine exposure itself, suggesting that our body has its own way to fight addiction.
Dong, the paper’s senior author, says that the pro-relapse response is predominant after cocaine exposure. But since the anti-relapse response exists inside the brain, it could possibly be clinically tweaked to achieve therapeutic benefits.
Ma notes that this finding “may provide insight into ways to manipulate this yin-yang balance and hopefully provide new neurobiological targets for interventions designed to decrease relapse.”
“The story won’t stop here,” Ma adds. “Our ongoing study is exploring some unusual but simple ways to beef up the endogenous anti-relapse mechanism.”
(Image: PA)
Despite the barrage of visual information the brain receives, it retains a remarkable ability to focus on important and relevant items. This fall, for example, NFL quarterbacks will be rewarded handsomely for how well they can focus their attention on color and motion – being able to quickly judge the jersey colors of teammates and opponents and where they’re headed is a valuable skill. How the brain accomplishes this feat, however, has been poorly understood.

Now, University of Chicago scientists have identified a brain region that appears central to perceiving the combination of color and motion. They discovered a unique population of neurons that shift in sensitivity toward different colors and directions depending on what is being attended – the red jersey of a receiver headed toward an end zone, for example. The study, published Sept. 4 in the journal Neuron, sheds light on a fundamental neurological process that is a key step in the biology of attention.
“Most of the objects in any given visual scene are not that important, so how does the brain select or attend to important ones?” said study senior author David Freedman, PhD, associate professor of neurobiology at the University of Chicago. “We’ve zeroed in on an area of the brain that appears central to this process. It does this in a very flexible way, changing moment by moment depending on what is being looked for.”
The visual cortex of the brain possesses multiple, interconnected regions that are responsible for processing different aspects of the raw visual signal gathered by the eyes. Basic information on motion and color is known to route through two such regions, but how the brain combines these streams into something usable for decision-making or other higher-order processes remained unclear.
To investigate this process, Freedman and postdoctoral fellow Guilhem Ibos, PhD, studied the response of individual neurons during a simple task. Monkeys were shown a rapid series of visual images. An initial image showed either a group of red dots moving upwards or yellow dots moving downwards, which served as an instruction for which specific colors and directions were relevant during that trial. The subjects were rewarded when they released a lever when this image later reappeared. Subsequent images were composed of different colors of dots moving in different directions, among which was the initial image.
Dynamic neurons
Freedman and Ibos looked at neurons in the lateral intraparietal area (LIP), a region highly interconnected with brain areas involved in vision, motor control and cognitive functions. As subjects performed the task and looked for a specific combination of color and motion, LIP neurons became highly active. They did not respond, however, when the subjects passively viewed the same images without an accompanying task.
When the team further investigated the responses of LIP neurons, they discovered that the neurons possessed a unique characteristic. Individual neurons shifted their sensitivity to color and direction toward the relevant color and motion features for that trial. When the subject looked for red dots moving upwards, for example, a neuron would respond strongly to directions close to upward motion and to colors close to red. If the task was switched to another color and direction seconds later, that same neuron would be more responsive to the new combination.
“Shifts in feature tuning had been postulated a long time ago by theoretical studies,” Ibos said. “This is the first time that neurons in the brain have been shown to shift their selectivity depending on which features are relevant to solve a task.”
Freedman and Ibos developed a model for how the LIP brings together basic color and motion information. Attention likely shapes that process through signals from higher-order areas of the brain that modulate LIP neuron selectivity. The team believes that this region plays an important role in making sense of basic sensory information, and they are trying to better understand the brain-wide neuronal circuitry involved in this process.
“Our study suggests that this area of the brain brings together information from multiple areas throughout the brain,” Freedman said. “It integrates inputs – visual, motor, cognitive inputs related to memory and decision making – and represents them in a way that helps solve the task at hand.”
(Source: newswise.com)
Stanford scientists reveal complexity in the brain’s wiring diagram
When Joanna Mattis started her doctoral project she expected to map how two regions of the brain connect. Instead, she got a surprise. It turns out the wiring diagram shifts depending on how you flip the switch.
"There’s a lot of excitement about being able to make a map of the brain with the idea that if we could figure out how it is all connected we could understand how it works," Mattis said. "It turns out it’s so much more dynamic than that."
Mattis is a co-first author on a paper describing the work published August 27 in the Journal of Neuroscience. Julia Brill, then a postdoctoral scholar, was the other co-first author.
Mattis had been a graduate student in the lab of Karl Deisseroth, professor of bioengineering and of psychiatry and behavioral sciences, where she helped work on a new technique called optogenetics. That technique allows neuroscientists to selectively turn parts of the brain on and off to see what happens. She wanted to use optogenetics to understand the wiring of a part of the brain involved in spatial memory – it’s what makes a mental map of your surroundings as you explore a new city, for example.
Scientists already knew that when an animal explores habitats, two parts of the brain are involved in the initial exploring phase and then in solidifying a map of the environment – the hippocampus and the septum.
When an animal is exploring an environment, the neurons in the hippocampus fire slow signals to the septum, essentially telling the septum that it’s busy acquiring information. Once the animal is done exploring, those same cells fire off intense signals letting the septum know that it’s now locking that information into memory. The scientists call this phase consolidation. The septum uses that information to then turn around and regulate other signals going into the hippocampus.
"I wanted to study the hippocampus because on the one hand so much was already known – there was already this baseline of knowledge to work off of. But then the question of how the hippocampus and septum communicate hadn’t been accessible before optogenetics," Mattis said.
Neurons in the hippocampus were known to fire in a rhythmic pattern, which is a particular expertise of John Huguenard, a professor of neurology. Mattis obtained an interdisciplinary fellowship through Stanford Bio-X, which allowed her to combine the Deisseroth lab’s expertise in optogenetics with the rhythmic brain network expertise of Julia Brill from the Huguenard lab.
Mattis and Brill used optogenetics to prompt neurons of the hippocampus to mimic either the slow firing characteristic of information acquisition or the rapid firing characteristic of consolidation. When they mimicked the slow firing they saw a quick reaction by cells in the septum. When they mimicked the fast consolidation firing, they saw a much slower response by completely different cells in the septum.
Same set of wires – different outcome. That’s like turning on different lights depending on how hard you flip the switch. “This illustrates how complex the brain is,” Mattis said.
Most scientific papers answer a question: What does this protein do? How does this part of the brain work? By contrast, this paper raised a whole new set of questions, Mattis said. They more or less understand the faster reaction, but what is causing the slower reaction? How widespread is this phenomenon in the brain?
"The other big picture thing that we opened up but didn’t answer is: How can you then tie this back to the circuit overall and learning memory?" Mattis said. "Those would be exciting things to follow up on for future projects."

(Image caption: A thalamocortical, or TC neuron labeled with fluorescent dye, as used in Dr. Augustinaite’s study. The image shows a voltage recording device, at bottom left, entering the yellow cell body, and a stimulation device, at top, reaching the dendrites. Color in this image shows the depth in the slice.)
The brain is a complicated network of small units called neurons, all working to carry information from the outside world, create an internal model, and generate a response. Neurons sense a signal through branching dendrites, carry this signal to the cell body, and send it onwards through a long axon to signal the next neuron. However, neurons can function in many different ways, some of which researchers are still exploring. Some signals that the dendrites receive do not continue to the next neuron; instead, they seem to change the way the neuron handles subsequent signals. This could help neurons function as part of a large network, but researchers still have many questions. Dr. Sigita Augustinaite, a researcher in the Optical Neuroimaging Unit at the Okinawa Institute of Science and Technology Graduate University, has suggested one mechanism explaining how neurons help the network function. Her findings, part of a collaboration between the University of Oslo and OIST, were published August 13, 2014 as the cover article in The Journal of Neuroscience.
Dr. Augustinaite studies the visual pathway, in which signals from the retina are sent to the visual cortex, where the brain interprets them. Between the eye and the visual cortex, the signals must pass through the visual thalamus, specifically through thalamocortical, or TC, neurons. These neurons can switch between a “sleeping” state and a “waking” state depending on the input they receive from other neurons and brain areas. When an animal is awake, TC neurons transmit incoming retinal signals on to the cortex, but when the animal is asleep, the neurons block retinal signals.
The visual cortex also sends a massive input back to TC neurons to control retinal signals traveling through the thalamus. But Dr. Augustinaite says that the suggested mechanisms of this control raise more questions than answers. To understand more, she conducted experiments in acute brain slices, small pieces of brain tissue in which neurons stay alive and maintain their physiological properties. She applied glutamate to dendrites far from the cell body to emulate a feedback signal from the visual cortex, then measured the neuron’s response as a voltage difference between the inside and outside of the membrane.
Dr. Augustinaite found that stimulating the neurons in this way depolarizes their membranes, creating what are called NMDA spike/plateau potentials. If strong enough, depolarization can cause a neuron to fire an action potential, which travels through the axon to activate other neurons. Action potentials look like a sharp, one-millisecond increase in membrane voltage, and they transmit signals from retina to cortex. But if NMDA spike/plateau potentials induced action potentials, signals from the cortex and signals from the retina would be indistinguishable. With her experiments, Dr. Augustinaite showed that NMDA spike/plateau potentials in TC neurons do not trigger action potentials. Instead, they lift the membrane voltage, changing the neuron’s properties for a few hundred milliseconds and creating conditions for reliable signal transmission from retina to cortex.
“The research gives, for the first time, a clear view on what dendritic potentials are good for,” explained Prof. Bernd Kuhn, who leads the lab where Dr. Augustinaite works. “It points directly to the mechanism,” he concluded. Showing how dendritic plateaus function is just one important step toward understanding how neurons function as a network. “This mechanism could also be used in many other neuronal circuits, where one input regulates how another input moves through the network,” Dr. Augustinaite said. “This mechanism is an exciting logical element in the neuronal network, but just the start of putting the puzzle together.”
(Image caption: A consensus shape for the calcium ion channel in the worm’s pain receptor nerve that was reached by computer modeling. Credit: Damian van Rossum and Andriy Anishkin, Penn State University)
Surprising New Role for Calcium in Sensing Pain
When you accidentally touch a hot oven, you rapidly pull your hand away. Although scientists know the basic neural circuits involved in sensing and responding to such painful stimuli, they are still sorting out the molecular players.
Duke researchers have made a surprising discovery about the role of a key molecule involved in pain in worms, and have built a structural model of the molecule. These discoveries, described Sept. 2 in Nature Communications, may help direct new strategies to treat pain in people.
In humans and other mammals, a family of molecules called TRP ion channels plays a crucial role in nerve cells that directly sense painful stimuli. Researchers are now blocking these channels in clinical trials to evaluate this as a possible treatment for various types of pain.
The roundworm Caenorhabditis elegans also expresses TRP channels — one of which is called OSM-9 — in its single head pain-sensing neuron (which is similar to the pain-sensing nerve cells for the human face). OSM-9 is not only vital for detecting danger signals in the tiny worms, but is also a functional match to TRPV4, a mammalian TRP channel involved in sensing pain.
In the new study, researchers created a series of genetic mutant worms in which parts of the OSM-9 channel were disabled or replaced and then tested the engineered worms’ reactions to overly salty solution, which is normally aversive and painful.
Specifically, the mutant worms had alterations in the pore of the OSM-9 channels in their pain-sensing neuron; upon activation, the pore opens to allow calcium and sodium to flow into the neuron. That influx, in turn, was thought to switch on the neural circuit that encodes rapid withdrawal behavior, like pulling a finger from the stove.
“People strongly believed that calcium entering the cell through the TRP channel is everything in terms of cellular activation,” said lead author Wolfgang Liedtke, M.D., Ph.D., an associate professor of neurology, anesthesiology and neurobiology at Duke University School of Medicine and an attending physician in the Duke Pain Clinics, where he sees patients with chronic head-neck and face-pain.
With then-graduate student Amanda Lindy, “we wanted to systematically mutagenize the OSM-9 pore and see what we could find in the live animal, in its pain behavior,” Liedtke said.
To the group’s surprise, changing various bits of OSM-9’s pore did not change most of the mutant worms’ reactions to the salty solution. However, these mutations did affect the flow of calcium into the cell. The disconnect they saw suggested the calcium was not playing a direct role in the worms’ avoidance of danger signals.
Calcium has been thought to be indispensable for pain behavior — not only in worms’ channels but in pain-related TRP channels in mammals. So results from the engineered OSM-9 mutant worms will change a central concept for the understanding of pain, Liedtke said.
To see whether calcium might instead play a role in the worms’ ability to adapt to repeated painful stimuli, the group then repeatedly exposed pore-mutant worms to the aversive and pain stimuli.
After the tenth trial, a normal worm becomes less sensitive to high salt. But one mutant worm with a minimal change to one specific part of its OSM-9 pore — altered so that calcium no longer entered but sodium did — was just as sensitive on the tenth trial as on the first.
The results confirmed that calcium flow through the channel makes the worms more adaptable to painful stimuli; it helps them cope with the onslaught by desensitizing them. This could well represent a survival advantage, Liedtke said.
To put the findings into a structural context, Liedtke collaborated with computational protein scientists Damian van Rossum and Andriy Anishkin from Penn State University, who built a structural model of OSM-9 that was based on established structures of several of the channel’s relatives, including the recently resolved structure of TRPV1, the molecule that senses pain caused by heat and hot chili peppers.
The team was then able to visualize the key parts of the OSM-9 pore in the context of the entire channel. They understood better how the pore holds its shape and allows sodium and calcium to pass.
Liedtke said that understanding this structure could be a great help in designing compounds that will not completely block the channel but will just prevent calcium from entering the cell. Although calcium helps desensitize worms to painful stimuli in the near term, it might set up chronic, pathological pain circuits in the long term, Liedtke said.
So, as a next step, the group plans to assess the longer-term effects calcium flow has in pain neurons. For example, calcium could change the expression of particular genes in the sensory neuron. And such gene expression changes could underlie chronic, pathologic pain.
“We assume, and so far the evidence is quite good, that chronic, pathological pain has to do with people’s genetic switches in their sensory system set in the wrong way, long term. That’s something our new worm model will now allow us to approach rationally by experimentation,” Liedtke said.
Neurons in human skin perform advanced calculations
A fundamental characteristic of the neurons that extend into the skin and record touch, so-called first-order neurons of the tactile system, is that they branch in the skin, so that each neuron reports touch from many highly sensitive zones on the skin.
According to researchers at the Department of Integrative Medical Biology, IMB, Umeå University, this branching allows first-order tactile neurons not only to send signals to the brain that something has touched the skin, but also process geometric data about the object touching the skin.
“Our work has shown that two types of first-order tactile neurons that supply the sensitive skin at our fingertips not only signal information about when and how intensely an object is touched, but also information about the touched object’s shape,” says Andrew Pruszynski, one of the researchers behind the study.
The study also shows that the sensitivity of individual neurons to the shape of an object depends on the layout of the neuron’s highly sensitive zones in the skin.
“Perhaps the most surprising result of our study is that these peripheral neurons, which are engaged when a fingertip examines an object, perform the same type of calculations done by neurons in the cerebral cortex. Somewhat simplified, it means that our touch experiences are already processed by neurons in the skin before they reach the brain for further processing,” says Andrew Pruszynski.

When we learn, we associate a sensory experience either with other stimuli or with a certain type of behaviour. The neurons in the cerebral cortex that transmit the information modify the synaptic connections that they have with the other neurons. According to a generally accepted model of synaptic plasticity, a neuron that communicates with its neighbours emits an electrical impulse while transiently activating its synapses. This electrical pulse, combined with the signals received from other neurons, acts to strengthen the synapses. But how do some neurons get caught up in this communication interplay when they are barely connected? This is the crucial chicken-or-egg puzzle of synaptic plasticity that a team led by Anthony Holtmaat, professor in the Department of Basic Neurosciences in the Faculty of Medicine at UNIGE, is aiming to solve. The results of their research into memory in silent neurons can be found in the latest edition of Nature.
Learning and memory are governed by a mechanism of long-lasting synaptic strengthening. When we embark on a learning experience, our brain associates a sensory experience either with other stimuli or with a certain form of behaviour. The neurons in the cerebral cortex responsible for transmitting the relevant information then modify the synaptic connections that they have with other neurons. It is this arrangement that subsequently enables the brain to optimise the way information is processed when it is encountered again, as well as to predict its consequences.
Neuroscientists typically induce electrical pulses in the neurons artificially in order to perform research on synaptic mechanisms.
The neuroscientists from UNIGE, however, chose a different approach in their attempt to discover what happens naturally in the neurons when they receive sensory stimuli. They observed the cerebral cortices of mice whose whiskers were repeatedly stimulated mechanically without an artificially-induced electrical pulse. The rodents use their whiskers as a sensor for navigating and interacting; they are, therefore, a key element for perception in mice.
An extremely low signal is enough
By observing these natural stimuli, Professor Holtmaat’s team was able to demonstrate that a sensory stimulus alone can generate long-term synaptic strengthening without the neuron discharging either an induced or a natural electrical pulse. As a result, and contrary to what was previously believed, the synapses are strengthened even when the neurons involved in a stimulus remain silent. In addition, if the sensory stimulation lasts over time, the synapses become so strong that the neuron in turn is activated and becomes fully engaged in the neural network. Once activated, the neuron can further strengthen its synapses in a back-and-forth process. These findings could solve the brain’s “what came first?” mystery, as they make it possible to examine all the synaptic pathways that contribute to memory, rather than focusing on whether it is the synapse or the neuron that activates the other.
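The idea that repeated subthreshold input can strengthen a synapse until a previously silent neuron finally fires can be caricatured in a few lines of code. This is a deliberately simplified sketch with made-up numbers, not the plasticity rule measured in the study:

```python
# Toy model: a synapse potentiates with each repeated sensory stimulus,
# even though the postsynaptic neuron has not yet spiked. Once the
# synapse is strong enough, the neuron joins the network and fires.
# All values are arbitrary illustrations, not measured quantities.

THRESHOLD = 1.0    # firing threshold (arbitrary units)
LEARN_RATE = 0.05  # assumed potentiation per stimulus repetition

weight = 0.2       # initial weight: a single input is subthreshold
fired_at = None

for trial in range(1, 101):
    potential = weight          # membrane response to one input event
    if potential >= THRESHOLD:
        fired_at = trial        # the formerly silent neuron now fires
        break
    weight += LEARN_RATE        # strengthening without a postsynaptic spike

print(f"first spike on trial {fired_at}, final weight {weight:.2f}")
```

The point of the sketch is only the ordering of events: potentiation happens first, on every silent trial, and the spike is a consequence of the accumulated strengthening rather than its cause.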
The entire brain is mobilised
A second discovery lay in store for the researchers. During the same experiment, they were also able to establish that the stimuli that were most effective in strengthening the synapses came from secondary, non-cortical brain regions rather than major cortical pathways (which convey actual sensory information). Accordingly, storing information would simply require the co-activation of several synaptic pathways in the neuron, even if the latter remains silent. These findings may also have important implications both for the way we understand learning mechanisms and for therapeutic possibilities, in particular for rehabilitation following a stroke or in neurodegenerative disorders. As professor Holtmaat explains: “It is possible that sensory stimulation, when combined with another activity (motor activity, for example), works better for strengthening synaptic connections”. The professor concludes: “In the context of therapy, you could combine two different stimuli as a way of enhancing the effectiveness.”
New York University biologists have identified a mechanism that helps explain how the diversity of neurons that make up the visual system is generated.

“Our research uncovers a process that dictates both timing and cell survival in order to engender the heterogeneity of neurons used for vision,” explains NYU Biology Professor Claude Desplan, the study’s senior author.
The study’s other co-authors were: Claire Bertet, Xin Li, Ted Erclik, Matthieu Cavey, and Brent Wells—all postdoctoral fellows at NYU.
Their work, which appears in the latest issue of the journal Cell, centers on neurogenesis—the process by which neurons are created.
A central challenge in developmental neurobiology is to understand how progenitors—stem cells that differentiate to form one or more kinds of cells—produce the vast diversity of neurons, glia, and non-neuronal cells found in the adult central nervous system (CNS). Temporal patterning is one of the core mechanisms generating this diversity in both invertebrates and vertebrates. This process relies on the sequential expression of transcription factors in progenitors, each specifying the production of a distinct neural cell type.
In the Cell paper, the researchers studied the formation of the visual system of the fruit fly Drosophila. Their findings revealed that this process, which relies on temporal patterning of neural progenitors, is more complex than previously thought.
They demonstrate that in addition to specifying the production of distinct neural cell types over time, temporal factors also determine the survival or death of these cells, as well as the mode of division of the progenitors. Thus, temporal patterning of neural progenitors generates cell diversity in the adult visual system by specifying the identity, survival, and number of each unique neural cell type.
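The logic of such a temporal program can be sketched schematically: each temporal window specifies which cell type is made, whether it survives, and how many are produced. The factor and cell-type names below are generic placeholders, not the actual Drosophila genes:

```python
# Sketch of temporal patterning in a neural progenitor: a sequence of
# temporal factors specifies, for each window, the cell type produced,
# whether those cells survive, and how many are made.
# Names and numbers are illustrative placeholders only.

temporal_program = [
    {"factor": "factor_A", "cell_type": "type 1", "survives": True,  "count": 2},
    {"factor": "factor_B", "cell_type": "type 2", "survives": False, "count": 1},
    {"factor": "factor_C", "cell_type": "type 3", "survives": True,  "count": 4},
]

adult_cells = []
for window in temporal_program:
    if window["survives"]:  # the temporal factor gates survival...
        # ...and also sets how many of that cell type persist
        adult_cells += [window["cell_type"]] * window["count"]

print(adult_cells)
```

Running this, the "type 2" cells specified by the second factor never appear in the adult census, mirroring how a temporal factor can dictate death as well as identity.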
(Source: nyu.edu)
How nerve cells within the brain communicate with each other over long distances has puzzled scientists for decades. The way networks of neurons connect, and how individual cells react to incoming pulses, should in principle make communication over large distances impossible. Scientists from Germany and France now provide a possible answer for how the brain manages to function nonetheless: by exploiting the power of resonance.

(Image caption: Resonance in the activity of nerve cells (left) allows activity within the brain to travel over large distances, e.g. from the back of the head to the front during the processing of visual stimuli. Credit: Gunnar Grah/BrainLinks-BrainTools)
As Gerald Hahn, Alejandro F. Bujan and colleagues describe in the journal “PLoS Computational Biology”, the ability of networks of neurons to resonate can amplify oscillations in the activity of nerve cells, allowing signals to travel much farther than in the absence of resonance. The team from the cluster of excellence BrainLinks-BrainTools and the Bernstein Center at the University of Freiburg and the UNIC department of the French Centre national de la recherche scientifique in Gif-sur-Yvette created a computer model of networks of nerve cells and analyzed its properties for signal propagation.
Earlier proposals for how information travels through the brain had the flaw of being biologically implausible. They either postulated strong connections between distant brain areas, for which there is no evidence, or they required a global mechanism setting these distant parts of the brain into linked oscillations. However, nobody could explain how such a mechanism might actually be implemented.
The simulation study by Hahn and Bujan required neither unrealistic network properties nor the existence of a pacemaker for the brain. Instead, they found that resonance could be the key to long-distance communication in networks with relatively few and weak connections, as is the case in the brain. Not all nerve cells excite other cells; some inhibit the activity of others. As a result of this interplay between excitation and inhibition, the activity in a network can oscillate around a certain level. Such networks typically have preferred frequencies at which oscillations are particularly strong, just as a taut string on a violin has a preferred frequency. If the activity tunes into this frequency, pulses propagate much farther. As the scientists point out, the combination of oscillatory signals and resonance-induced amplification may in certain cases be the only possible form of long-distance communication. They further suggest that a network’s ability to change its preferred frequency may play a role in how information is at times processed differently in the brain.
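The violin-string analogy can be made concrete with the textbook formula for a damped, periodically driven oscillator, whose steady-state amplitude peaks near the system’s preferred frequency. The numbers below (a 40 Hz preferred frequency, an arbitrary damping constant) are illustrative placeholders, not parameters from the study:

```python
import math

# Steady-state amplitude of a damped, periodically driven oscillator:
#   |A(w)| = F / sqrt((w0^2 - w^2)^2 + (g*w)^2)
# Used here only as an analogy for a network with a preferred frequency,
# not as the network model simulated by Hahn and Bujan.

def amplitude(drive_freq_hz, natural_freq_hz=40.0, damping_hz=5.0, force=1.0):
    w = 2 * math.pi * drive_freq_hz    # drive frequency (rad/s)
    w0 = 2 * math.pi * natural_freq_hz # preferred frequency (rad/s)
    g = 2 * math.pi * damping_hz       # damping constant (rad/s)
    return force / math.sqrt((w0**2 - w**2) ** 2 + (g * w) ** 2)

on_resonance = amplitude(40.0)   # drive at the preferred frequency
off_resonance = amplitude(10.0)  # drive well below it

# Input tuned to the preferred frequency is amplified severalfold
# relative to an off-resonance input of the same strength.
print(on_resonance / off_resonance)
```

The same qualitative effect, oscillatory input at the network's preferred frequency being selectively amplified, is what lets weakly coupled networks pass signals onward in the study's account.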
(Source: pr.uni-freiburg.de)