Neuroscience

Articles and news from the latest research reports.

Posts tagged science

200 notes

Off with Your Glasses

TAU researchers discover a link between sharp vision and the brain’s processing speed


Middle-aged adults who suddenly need reading glasses, patients with traumatic brain injuries, and people with visual disorders such as “lazy eye” may have one thing in common — “visual crowding,” an inability to recognize individual items surrounded by multiple objects. Visual crowding makes it impossible to read, as single letters within words are rendered illegible. And basic cognitive functions such as facial recognition can also be significantly hampered. Scientists and clinicians currently attribute crowding to a disorder in peripheral vision.

Now Prof. Uri Polat, Maria Lev, and Dr. Oren Yehezkel of Tel Aviv University’s Goldschleger Eye Research Institute at the Sackler Faculty of Medicine have discovered new evidence linking crowding in the fovea — a small part of the retina responsible for sharp vision — to the brain’s processing speed. These findings, published in Nature’s Scientific Reports, could greatly alter earlier models of visual crowding, which emphasized peripheral impairment exclusively. And for the many adults lost without their reading glasses, this could improve vision significantly.

"Current theories strongly stress that visual crowding does not exist in the fovea, that it’s a phenomenon that exists only in peripheral visual fields," said Prof. Polat. "But our study points to another part of the eye altogether — the fovea — and contributes to a unified model for how the brain integrates visual information."

A trained eye

According to Prof. Polat, vision is dynamic and changes rapidly, but it takes time for the brain to process this visual information. Rapidly moving tickers on TV, or traffic signs seen as the driver speeds past, are difficult for anyone to read. However, given enough time, someone with excellent vision can fully recognize the words. Those with slower processing speeds — usually the result of poor perceptual development or age — may not be able to decipher the tickers or the traffic signs. In the study, Prof. Polat employed his expertise in improving vision by retraining the brain and the foveal part of the eye, using exercises in which speed is a key element.

"Training adults to reduce foveal crowding leads to improved vision. A similar training we conducted two years ago allowed adults to eliminate their use of reading glasses altogether, using a technology provided by the GlassesOff company. Other patients who had lost sharp vision for whatever reason were also able to benefit from the same training and improve their processing speed and visual capabilities," said Prof. Polat.

Maria Lev, who performed the study as a part of her doctoral thesis, said one young subject had experienced significant limitations in school for years and had been unable to obtain a driver’s license due to severe visual impairment from foveal crowding. After undergoing training that emphasized a foveal rather than a peripheral focus, he was able to overcome the handicap.

"He finally managed to learn to read properly and found his way forward," said Lev. "I’m proud to say that today he is not only eligible for a driver’s license, he’s also been able to earn his master’s degree."

Prof. Polat and his team are currently exploring how visual integration and foveal crowding develop in various clinical cases.

(Source: aftau.org)

Filed under vision visual crowding foveal crowding fovea neuroscience science

85 notes

Yeast model reveals Alzheimer’s drug candidate and its mechanism of action

Using a yeast model of Alzheimer’s disease (AD), Whitehead Institute researchers have identified a drug that reduces levels of the toxic protein fragment amyloid-β (Aβ) and prevents at least some of the cellular damage caused when Aβ accumulates in the brains of AD patients.

“We can use this yeast model to find small molecules that will address the underlying cellular pathologies of Alzheimer’s, an age-related disease whose burden will become even more significant as our population grows older,” says Kent Matlack, a former staff scientist in Whitehead Member Susan Lindquist’s lab. “We need a no-holds-barred approach to find effective compounds, and we need information about their mechanism of action quickly. Our work demonstrates that using a yeast model of Aβ toxicity is a valid way to do this.”

The U.S. National Institute on Aging estimates that 5.1 million Americans may have AD, the most common form of dementia, which progressively robs patients of their memories, thinking, and reasoning skills. Research focused on the disease has been hampered by the affected cells’ location in the brain, where they cannot be studied until after an AD patient’s death. To explore the cellular processes compromised by AD, researchers in Lindquist’s lab created a yeast model, first described in the journal Science in 2011, that mimics in vivo the accumulation of Aβ that occurs in the human disease.

In the current research, described in this week’s issue of the journal Proceedings of the National Academy of Sciences (PNAS), a team of scientists in Lindquist’s lab used the yeast model to screen approximately 140,000 compounds to identify those capable of rescuing the cells from Aβ toxicity. One of the more promising classes of compounds has previously shown efficacy in animal models of AD and is about to complete a second phase II trial for AD. The mechanism by which the best-studied member of this class, clioquinol, targets Aβ within the cell – where a large portion of it is produced in neurons – was unclear.

“Our work in the yeast model shows that clioquinol decreases the amount of Aβ in the cells by 90%,” says Daniel Tardiff, a scientist in Lindquist’s lab. “That’s a strong decrease, and it’s dose-dependent. I’ve tested a lot of compounds before, and I’ve never seen anything as dramatic.”

Clioquinol chelates copper, meaning that it selectively binds the metal. In many AD patients, Aβ aggregates have higher concentrations of copper and other metals than normal, healthy brain tissue. Biochemical experiments also show that copper makes Aβ more toxic.

With clioquinol’s chelation capabilities in mind, Tardiff and Matlack, co-authors of the PNAS paper, tested clioquinol’s effect on Aβ-expressing cells in the presence of copper. The drug dramatically increased the degradation of Aβ in a copper-dependent manner, and even restored the cellular protein-trafficking process known as endocytosis, which is disrupted in both the yeast model and in AD-affected neurons.

“The clioquinol probably has a slightly higher affinity for copper than Aβ does, but it is not a strong enough chelator to strip the cell’s normal metalloproteins of the copper they need,” says Matlack. “From what we’ve seen in the yeast model, we think the drug pulls the copper away from Aβ. That would alter Aβ’s structure and likely make it more susceptible to degradation, thus shortening its half-life in the cell.”

The results from clioquinol in yeast and the clinical potential of closely related compounds are promising. While these compounds are not yet ready to serve as AD drugs in the clinic, the identification of an AD-relevant compound and cellular pathology – along with the Lindquist lab’s previous identification of human AD risk alleles that reduce Aβ toxicity in yeast – suggests that this discovery platform will continue to yield information and lead to more compounds with equal or greater effectiveness, some of which will hopefully make a difference in human disease.

“It is important to remember that this class of compounds was shown to work in mouse models and in a limited human trial,” says Lindquist, who is also a professor of biology at MIT and an investigator of the Howard Hughes Medical Institute. “We have validated the yeast model and shown that we can find such compounds at a speed that was inconceivable before—indeed we found some compounds that look even more effective.”
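As a rough back-of-the-envelope illustration of the competition Matlack describes — a hypothetical sketch with made-up affinity numbers, not data from the paper — a ligand only slightly better at binding copper can still capture most of it at equilibrium:

```python
# Illustrative sketch of competitive copper binding between a chelator
# (e.g. clioquinol) and amyloid-beta. The affinity values below are
# invented for illustration; they are not measured constants.

def bound_fractions(ka_cq, ka_ab, cq, ab):
    """Equilibrium share of scarce copper captured by each ligand:
    each ligand's 'pull' is its association affinity times its free
    concentration, and copper partitions in proportion to the pulls."""
    pull_cq = ka_cq * cq
    pull_ab = ka_ab * ab
    total = pull_cq + pull_ab
    return pull_cq / total, pull_ab / total

# At equal concentrations, a 3x higher affinity for the chelator
# means it captures 75% of the available copper.
f_cq, f_ab = bound_fractions(ka_cq=3.0, ka_ab=1.0, cq=1.0, ab=1.0)
print(f"chelator: {f_cq:.2f}, amyloid-beta: {f_ab:.2f}")
```

The point of the toy model is only that affinity differences need not be huge: a modest edge, sustained at equilibrium, steadily shifts copper away from Aβ without stripping tightly bound metalloproteins.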

Filed under alzheimer's disease yeast model endocytosis beta amyloid Ab toxicity neuroscience science

74 notes

Experimental stroke drug also shows promise for people with Lou Gehrig’s disease

Keck School of Medicine of USC neuroscientists have unlocked a piece of the puzzle in the fight against Lou Gehrig’s disease, a debilitating neurological disorder that robs people of their motor skills. Their findings appear in the March 3, 2014, online edition of the Proceedings of the National Academy of Sciences of the United States of America, the official scientific journal of the U.S. National Academy of Sciences.

"We know that both people and transgenic rodents afflicted with this disease develop spontaneous breakdown of the blood-spinal cord barrier, but how these microscopic lesions affect the development of the disease has been unclear," said Berislav V. Zlokovic, M.D., Ph.D., the study’s principal investigator and director of the Zilkha Neurogenetic Institute at USC. "In this study, we show that early motor neuron dysfunction related to the disease in mice is proportional to the degree of damage to the blood-spinal cord barrier and that restoring the integrity of the barrier delays motor neuron degeneration. We are hopeful that we can apply these findings to the corresponding disease mechanism in people."

In this study, Zlokovic and colleagues found that an experimental drug now being studied in human stroke patients appears to protect the blood-spinal cord barrier’s integrity in mice and delay motor neuron impairment and degeneration. The drug, an activated protein C analog called 3K3A-APC, was developed by Zlokovic’s start-up biotechnology company, ZZ Biotech.

Lou Gehrig’s disease, also called amyotrophic lateral sclerosis, or ALS, attacks motor neurons, which are cells that control the muscles. The progressive degeneration of the motor neurons in ALS eventually leads to paralysis and difficulty breathing, eating and swallowing.

According to The ALS Association, approximately 15 people in the United States are diagnosed with ALS every day. It is estimated that as many as 30,000 Americans live with the disease. Most people who develop ALS are between the ages of 40 and 70, with an average age of 55 at diagnosis. Life expectancy for an ALS patient averages about two to five years from the onset of symptoms.

ALS’s causes are not completely understood, and no cure has yet been found. Only one Food and Drug Administration-approved drug, riluzole, has been shown to prolong life, by two to three months. There are, however, devices and therapies that can manage the symptoms of the disease to help people maintain as much independence as possible and prolong survival.

Filed under ALS Lou Gehrig's disease motor neurons neurodegeneration medicine science

153 notes

Research reveals first glimpse of a brain circuit that helps experience to shape perception

Odors have a way of connecting us with moments buried deep in our past. Maybe it is a whiff of your grandmother’s perfume that transports you back decades. With that single breath, you are suddenly in her living room, listening as the adults banter about politics. The experiences that we accumulate throughout life build expectations that are associated with different scents. These expectations are known to influence how the brain uses and stores sensory information. But researchers have long wondered how the process works in reverse: how do our memories shape the way sensory information is collected?

In work published today in Nature Neuroscience, scientists from Cold Spring Harbor Laboratory (CSHL) demonstrate for the first time a way to observe this process in awake animals. The team, led by Assistant Professor Stephen Shea, was able to measure the activity of a group of inhibitory neurons that links the odor-sensing area of the brain with brain areas responsible for thought and cognition. This connection provides feedback so that memories and experiences can alter the way smells are interpreted.

The inhibitory neurons that form this link are known as granule cells. They are found in the core of the olfactory bulb, the area of the mouse brain responsible for receiving odor information from the nose. Granule cells in the olfactory bulb receive inputs from areas deep within the brain involved in memory formation and cognition. Despite their importance, it has been almost impossible to collect information about how granule cells function. They are extremely small and, in the past, scientists have only been able to measure their activity in anesthetized animals. But the animal must be awake and conscious in order for experiences to alter sensory interpretation. Shea worked with the lead authors on the study, Brittany Cazakoff, a graduate student in CSHL’s Watson School of Biological Sciences, and Billy Lau, Ph.D., a postdoctoral fellow. They engineered a system to observe granule cells for the first time in awake animals.

Granule cells relay the information they receive from neurons involved in memory and cognition back to the olfactory bulb. There, the granule cells inhibit the neurons that receive sensory inputs. In this way, “the granule cells provide a way for the brain to ‘talk’ to the sensory information as it comes in,” explains Shea. “You can think of these cells as conduits which allow experiences to shape incoming data.”

Why might an animal want to inhibit or block out specific parts of a stimulus, like an odor? Every scent is made up of hundreds of different chemicals, and “granule cells might help animals to emphasize the important components of complex mixtures,” says Shea. For example, an animal might have learned through experience to associate a particular scent, such as a predator’s urine, with danger. But each encounter with the smell is likely to be different. Maybe it is mixed with the smell of pine on one occasion and seawater on another. Granule cells provide the brain with an opportunity to filter away the less important odors and to focus sensory neurons only on the salient part of the stimulus.

Now that it is possible to measure the activity of granule cells in awake animals, Shea and his team are eager to look at how sensory information changes when the expectations and memories associated with an odor change. “The interplay between a stimulus and our expectations is truly the merger of ourselves with the world. It’s exciting to see just how the brain mediates that interaction,” says Shea.
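One way to picture the filtering Shea describes — purely a toy model with invented numbers, not the circuit’s actual dynamics — is as experience-weighted inhibition applied to each component of an odor mixture: components that memory marks as unimportant receive strong granule-cell inhibition, while the learned, salient component passes through.

```python
# Toy sketch of feedback inhibition on an odor mixture. 'salience'
# stands in for top-down input from memory/cognition areas; granule-
# cell-like inhibition suppresses each component in proportion to how
# unimportant memory says it is. All values here are hypothetical.

def filter_odor(odor, salience):
    """odor: raw sensory activity per mixture component.
    salience: 0..1 weight per component from experience.
    Returns activity after subtractive inhibition, floored at zero."""
    inhibition = [(1.0 - s) * x for s, x in zip(salience, odor)]
    return [max(0.0, x - i) for x, i in zip(odor, inhibition)]

# Predator urine (component 0) mixed with pine (1) and seawater (2);
# experience says only component 0 matters, so it alone survives
# largely intact while the background smells are suppressed.
mixture = [0.8, 0.6, 0.5]
learned_salience = [1.0, 0.1, 0.1]
print(filter_odor(mixture, learned_salience))
```

The design choice worth noting is that the stimulus itself is untouched; only the gain on each incoming channel changes, which matches the idea of memory “talking to” sensory data as it arrives rather than rewriting it.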

Filed under olfactory bulb granule cells neurons memory neuroscience science

238 notes

A sparse memory is a precise memory

Particular smells can be incredibly evocative and bring back very clear, vivid memories.

Maybe you find the smell of freshly baked apple pie is forever associated with warm memories of grandma’s kitchen. Perhaps cut grass means long school holidays and endless football kickabouts. Or maybe catching the scent of certain medicines sees you revisit a bout of childhood illness.

What’s remarkable about the power of these ‘associative memories’ – connecting sensory information and past experiences – is just how precise they are. How do we and other animals attach distinct memories to the millions of possible smells we encounter?

There’s a clear advantage in doing so: accurately discriminating smells indicating dangers while making no mistakes in following those that are advantageous. But it’s a huge information processing challenge.

Researchers at Oxford University’s Centre for Neural Circuits and Behaviour have discovered that a key to forming distinct associative memories lies in how information from the senses is encoded in the brain.

Their study in fruit flies for the first time gives experimental confirmation of a theory put forward in the 1960s which suggested sensory information is encoded ‘sparsely’ in the brain.

The idea is that we have a huge population of nerve cells in many of our higher brain centres. But only a very few neurons fire in response to any particular sensation – be it smell, sound or vision. This would allow the brain to discriminate accurately between even very similar smells and sensations.

'This “sparse” coding means that neurons that respond to one odour don't overlap much with neurons that respond to other odours, which makes it easier for the brain to tell odours apart even if they are very similar,' explains Dr Andrew Lin, the lead author of the study published in Nature Neuroscience.

While previous studies have indicated that sensory information is encoded sparsely in the brain, there’s been no evidence that this arrangement is beneficial to storing distinct memories and acting on them.

'Sparse coding has been observed in the brains of other organisms, and there are compelling theoretical arguments for its importance,' says Professor Gero Miesenböck, in whose laboratory the research was performed. 'But until now it hasn’t been possible experimentally to link sparse coding with behaviour.'

In their new work, the researchers demonstrated that if they interfered with the sparse coding in fruit flies – if they ‘de-sparsened’ odour representations in the neurons that store associative memories – the flies lost the ability to form distinct memories for similar smells.

The flies are normally able to discriminate between two very similar odours, learning to avoid one and head for the other. This is controlled by the neurons that store associative memories, called Kenyon cells. There’s a separate nerve cell that acts as a control system to dampen down the activity of the Kenyon cells, preventing too many of them from firing for any particular odour.

Dr Lin and colleagues showed that if this single nerve cell is blocked, the odour coding in Kenyon cells becomes less sparse and less able to discriminate between smells. The flies end up attaching the same memory to similar, yet different, odours.

Sparse coding does turn out to be important for sensory memories and our ability to act on them. Although the research was carried out in fruit flies, the scientists say sparse coding is likely to play a similar role in human memory.

Although sparse coding in the brain would seem to require much greater numbers of nerve cells, that cost appears to be worth it in being able to form distinct associative memories and act on them – thankfully. A life of experiences and memories is so much more full as a result.
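The overlap argument can be made concrete with a quick simulation — a generic sketch of random codes, not the study’s actual recordings, with round neuron counts chosen only for illustration. If each odour activates a small random subset of a large population, two odours’ codes barely overlap; dense codes collide constantly:

```python
# Simulate the overlap between random activity patterns ("codes") in a
# population of Kenyon-cell-like neurons. For random subsets, the
# expected overlap fraction is simply n_active / n_neurons, so sparse
# codes (few active neurons) overlap far less than dense ones.
import random

def random_code(n_neurons, n_active, rng):
    """A code = the set of neurons active for one odour."""
    return set(rng.sample(range(n_neurons), n_active))

def mean_overlap(n_neurons, n_active, trials=1000, seed=0):
    """Average fraction of one code's active neurons shared with
    another independently drawn code."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        a = random_code(n_neurons, n_active, rng)
        b = random_code(n_neurons, n_active, rng)
        total += len(a & b) / n_active
    return total / trials

# 2000 neurons: sparse (5% active) vs 'de-sparsened' (50% active)
print(mean_overlap(2000, 100))   # ~0.05: codes are nearly disjoint
print(mean_overlap(2000, 1000))  # ~0.50: half of each code collides
```

A memory attached to a sparse code therefore touches almost no neurons belonging to other odours’ codes, which is exactly why de-sparsening the representation makes similar smells blur into one memory.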

A sparse memory is a precise memory

Particular smells can be incredibly evocative and bring back very clear, vivid memories.

Maybe you find the smell of freshly baked apple pie is forever associated with warm memories of grandma’s kitchen. Perhaps cut grass means long school holidays and endless football kickabouts. Or maybe catching the scent of certain medicines sees you revisit a bout of childhood illness.

What’s remarkable about the power of these ‘associative memories’ – connecting sensory information and past experiences – is just how precise they are. How do we and other animals attach distinct memories to the millions of possible smells we encounter?

There’s a clear advantage in doing so: accurately discriminating smells indicating dangers while making no mistakes in following those that are advantageous. But it’s a huge information processing challenge.

Researchers at Oxford University’s Centre for Neural Circuits and Behaviour have discovered that a key to forming distinct associative memories lies in how information from the senses is encoded in the brain.

Their study in fruit flies for the first time gives experimental confirmation of a theory put forward in the 1960s which suggested sensory information is encoded ‘sparsely’ in the brain.

The idea is that we have a huge population of nerve cells in many of our higher brain centres. But only a very few neurons fire in response to any particular sensation – be it smell, sound or vision. This would allow the brain to discriminate accurately between even very similar smells and sensations.

'This “sparse” coding means that neurons that respond to one odour don't overlap much with neurons that respond to other odours, which makes it easier for the brain to tell odours apart even if they are very similar,' explains Dr Andrew Lin, the lead author of the study published in Nature Neuroscience.

While previous studies have indicated that sensory information is encoded sparsely in the brain, there’s been no evidence that this arrangement is beneficial to storing distinct memories and acting on them.

'Sparse coding has been observed in the brains of other organisms, and there are compelling theoretical arguments for its importance,' says Professor Gero Miesenböck, in whose laboratory the research was performed. 'But until now it hasn’t been possible experimentally to link sparse coding with behaviour.'

In their new work, the researchers demonstrated that if they interfered with the sparse coding in fruit flies – if they ‘de-sparsened’ odour representations in the neurons that store associative memories – the flies lost the ability to form distinct memories for similar smells.

The flies are normally able to discriminate between two very similar odours, learning to avoid one and head for the other. This is controlled by the neurons that store associative memories, called Kenyon cells. There’s a separate nerve cell that acts as a control system to dampen down the activity the Kenyon cells, preventing too many of them from firing for any particular odour.

Dr Lin and colleagues showed that if this single nerve cell is blocked, the odour coding in Kenyon cells becomes less sparse and less able to discriminate between smells. The flies end up attaching the same memory to similar, yet different, odours.
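The intuition behind this result can be sketched in a toy simulation (a minimal sketch with illustrative assumptions — population sizes, weights and the winner-take-all rule are invented here, not the study's parameters): with a global inhibitory brake, only the most strongly driven cells fire, and the codes for two similar odours barely overlap; remove the brake and the codes become dense and hard to tell apart.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CELLS = 2000    # Kenyon-cell population size (illustrative)
N_ACTIVE = 100    # cells allowed to fire when inhibition is intact (~5%)
N_INPUTS = 50     # odour input channels (illustrative)

W = rng.normal(size=(N_CELLS, N_INPUTS))  # random odour-to-cell weights

def kenyon_response(odour, inhibition=True):
    """Toy model: each cell receives a weighted sum of the odour input.
    A single global inhibitory neuron silences all but the most strongly
    driven cells (winner-take-all thresholding)."""
    drive = W @ odour
    if inhibition:
        threshold = np.sort(drive)[-N_ACTIVE]   # keep the top N_ACTIVE cells
        return drive >= threshold               # sparse code
    return drive >= drive.mean()                # inhibition blocked: dense code

def overlap(code_a, code_b):
    """Jaccard overlap: fraction of active cells shared by two codes."""
    return (code_a & code_b).sum() / (code_a | code_b).sum()

# two similar odours: input patterns correlated at r = 0.9
odour_a = rng.normal(size=N_INPUTS)
odour_b = 0.9 * odour_a + np.sqrt(1 - 0.9**2) * rng.normal(size=N_INPUTS)

sparse = overlap(kenyon_response(odour_a), kenyon_response(odour_b))
dense = overlap(kenyon_response(odour_a, inhibition=False),
                kenyon_response(odour_b, inhibition=False))

print(f"code overlap, inhibition intact (sparse): {sparse:.2f}")
print(f"code overlap, inhibition blocked (dense): {dense:.2f}")
```

The sparse codes share far fewer active cells than the dense ones, which is why a downstream reader can attach different memories to each odour.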

Sparse coding does turn out to be important for sensory memories and our ability to act on them. Although the research was carried out in fruit flies, the scientists say sparse coding is likely to play a similar role in human memory.

Although sparse coding in the brain would seem to require a much greater number of nerve cells, that cost appears to be worth paying for the ability to form distinct associative memories and act on them — thankfully. A life of experiences and memories is so much fuller as a result.

Filed under memory sensory memories nerve cells sparse coding fruit flies neuroscience science

184 notes

Immune System Has Dramatic Impact on Children’s Brain Development

New research from the University of Virginia School of Medicine has revealed the dramatic effect the immune system has on the brain development of young children. The findings suggest new and better ways to prevent developmental impairment in children in developing countries, helping to free them from a cycle of poverty and disease, and to attain their full potential.

image

U.Va. researchers working in Bangladesh determined that the more days infants suffered fever, the worse they performed on developmental tests at 12 and 24 months. They also found that elevated levels of inflammation-causing proteins in the blood were associated with worse performance, while higher levels of inflammation-fighting proteins were associated with improved performance.
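The kind of association the researchers report can be illustrated with a correlation analysis on simulated data (hypothetical numbers only — these are NOT the study's data; the sample size, fever rates and trend are invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical illustration: simulate 200 infants in which more days of
# fever tends to depress a developmental test score, then measure the
# association with a Pearson correlation coefficient, the standard
# summary statistic for this kind of relationship.
n = 200
fever_days = rng.poisson(lam=6, size=n).astype(float)
dev_score = 100.0 - 1.5 * fever_days + rng.normal(scale=8.0, size=n)

def pearson_r(x, y):
    """Pearson correlation coefficient between two samples."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

r = pearson_r(fever_days, dev_score)
print(f"fever days vs. developmental score: r = {r:.2f}")  # negative association
```

A negative coefficient like this says only that the two quantities move in opposite directions; as in the study itself, establishing that inflammation causes the impairment takes further evidence.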

“The problem we sought to address was why millions of young children in low- and middle-income countries are not attaining their full developmental potential,” said lead author Nona Jiang, who performed the research while an undergraduate student in the laboratory of Dr. William Petri Jr. “Early childhood is an absolutely critical time of brain development, and it’s also a time when these children are suffering from recurrent infections. Therefore, we asked whether these infections are contributing to the impaired development we observe in children growing up in adversity.”

Their findings offer a potential explanation for the developmental impairment seen in children living in poverty. They also offer important direction for doctors attempting to combat the problem: By preventing inflammation, physicians may be able to enhance children’s mental ability for a lifetime.

“We are interested in examining factors that predict healthy child development around the world,” said researcher Dr. Rebecca Scharf of U.Va.’s Department of Pediatrics. “By studying which early childhood influences are associated with hindrances to growth and learning, we will know better where to target interventions for the critical period of early childhood.”

In addition, the finding illuminates the complex relationship between the immune system and cognitive development, an increasingly important area of research that U.Va. has helped pioneer.

“This is a very interesting study, showing, probably for the first time, the link between peripheral cytokine levels and improved cognitive development in humans,” said Jonathan Kipnis, a professor of neuroscience and director of U.Va.’s Center for Brain Immunology & Glia. “What is of the most interest and of a great novelty is the fact that [inflammation-fighting cytokines] have positive correlation with cognitive function. My lab published results showing that these IL-4 cytokines are required for proper brain function in mice, and this work from Dr. Petri’s lab completely independently shows similar correlation in humans.

“I hope the scientific community will appreciate how dramatic the effects of the immune system are on the central nervous system and will invest more efforts in studying and better understanding these complex and intriguing interactions between the body’s two major systems.”

(Source: news.virginia.edu)

Filed under brain development cytokines immune system nervous system neuroscience science

159 notes

Researchers reveal the dual role of brain glycogen

In 2007, in an article published in Nature Neuroscience, scientists at the Institute for Research in Biomedicine (IRB Barcelona) headed by Joan Guinovart, an authority on glycogen metabolism, suggested that in Lafora Disease (LD), a rare and fatal neurodegenerative condition that affects adolescents, neurons die as a result of the accumulation of glycogen—chains of glucose. They went on to propose that this accumulation is the root cause of the disease.

The breakthrough of this paper was twofold: first, the researchers established a possible cause of LD and were therefore able to point to a plausible therapeutic target; second, they discovered that neurons have the capacity to store glycogen—an observation that had never been made before—and that this accumulation is toxic.

Other reports defended a different theory, upholding that the glycogen deposits were not the cause of the neurodegeneration but a consequence of another, more important cell imbalance, such as a downregulation of autophagy—the cell’s recycling and cleaning programme. In several articles, Guinovart’s “Metabolic engineering and diabetes therapy” group has recently brought to light evidence of the toxicity of glycogen deposits in LD, and has now provided irrefutable data.

In an article published at the beginning of February in Human Molecular Genetics, with research associate Jordi Duran as first author, the scientists show that in LD the accumulation of glycogen directly causes neuronal death and triggers cell imbalances such as a decrease in autophagy and synaptic failure. All these alterations lead to the symptoms of LD, such as epilepsy.

Glycogen, a Trojan horse for neurons?

There was still a greater mystery to be solved. Was glycogen truly a Trojan horse for neurons, as apparently established in the Nature Neuroscience article? That is to say, was the accumulation of glycogen always fatal for cells, thus explaining why their glycogen synthesis machinery is silenced? The inevitable question was then why these cells have such machinery at all.

In another paper, published in the Journal of Cerebral Blood Flow & Metabolism, part of the Nature Publishing Group, the researchers provided the first evidence that neurons do constantly store glycogen, but in a different way: they accumulate small amounts and use it as quickly as it becomes available. To this end, the scientists set up new, more sensitive analytical techniques to confirm that the machinery responsible for glycogen synthesis and degradation exists in neurons. In summary, they showed that, in small amounts, glycogen is beneficial for neurons.

“For example, while the liver accumulates glycogen in large amounts and releases it slowly to maintain blood sugar levels, above all when we sleep, neurons synthesize and degrade small amounts of this polysaccharide continuously. They do not use it as an energy store but as a rapid and small, but constant, source of energy,” explains Guinovart, also senior professor at the University of Barcelona (UB).

To observe the action of glycogen, the scientists forced cultured mouse neurons to survive under oxygen depletion. They demonstrated that the first cells to die were those in which the capacity to synthesise glycogen had been removed. The same experiments were performed in collaboration with Marco Milán’s “Development and growth control” group in the in vivo model of the fruit fly Drosophila melanogaster. These tests led to the same conclusions.

The researchers postulated that glycogen is a lifeguard under oxygen depletion, a condition that causes the brain to shut down, that often occurs at birth and in cerebral infarctions in adults, and that can lead to severe consequences such as cerebral palsy.

“It is the first function of glycogen that we have discovered in neurons, but we still have to identify its function in normal conditions and establish how the mechanism works,” says Jordi Duran. Postdoctoral researcher Isabel Saez is the first author of the article out today, which involved the collaboration of ICREA Research Professor Marco Milán’s lab.

The beneficial and toxic roles of brain glycogen are currently the focus of major research lines in Joan Guinovart’s lab.
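The contrast Guinovart draws between the two storage strategies can be sketched with a toy turnover model (a minimal sketch — all rates and sizes are assumed, in arbitrary units, purely to illustrate slow release of a large store versus rapid cycling of a small one):

```python
import numpy as np

DT = 1.0       # time step, minutes
STEPS = 600    # simulate 10 hours

def simulate(store, synthesis, degradation):
    """Euler integration of d(store)/dt = synthesis - degradation * store."""
    trace = []
    for _ in range(STEPS):
        store += DT * (synthesis - degradation * store)
        trace.append(store)
    return np.array(trace)

# liver-like: big store, no ongoing synthesis while fasting, slow release
liver = simulate(store=100.0, synthesis=0.0, degradation=0.005)
# neuron-like: tiny store, continuous synthesis matched by fast degradation,
# settling quickly at the steady state synthesis/degradation = 1.0 unit
neuron = simulate(store=0.2, synthesis=0.2, degradation=0.2)

print(f"liver store after 10 h: {liver[-1]:.1f} (slowly drained)")
print(f"neuron store after 10 h: {neuron[-1]:.2f} (small, constant turnover)")
```

The liver trace decays over many hours, while the neuron trace holds a small constant level sustained by rapid synthesis and degradation — the "rapid and small, but constant, source of energy" described in the quote above.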

Filed under glycogen lafora disease neurons neurodegeneration autophagy epilepsy neuroscience science

82 notes

3-D imaging sheds light on Apert Syndrome development

Three-dimensional imaging of two different mouse models of Apert Syndrome shows that cranial deformation begins before birth and continues, worsening with time, according to a team of researchers who studied mice to better understand and treat the disorder in humans.

Apert Syndrome is caused by mutations in FGFR2 — fibroblast growth factor receptor 2 — a gene that normally produces a protein involved in cell division, regulation of cell growth and maturation, formation of blood vessels, wound healing, and embryonic development. Certain mutations in this gene cause the bones of the skull to fuse together early, beginning in the fetus. These mutations also cause mid-facial deformation and a variety of neural, limb and tissue malformations, and may lead to cognitive impairment.

Understanding the growth pattern of the head in an individual, the ability to anticipate where the bones will fuse and grow next, and using simulations “could contribute to improved patient-centered outcomes either through changes in surgical approach, or through more realistic modeling and expectation of surgical outcome,” the researchers said in today’s (Feb. 28) issue of BMC Developmental Biology.

Joan T. Richtsmeier, Distinguished Professor of Anthropology, Penn State, and her team looked at two sets of mice, each having a different mutation that causes Apert Syndrome in humans and causes similar cranial problems in the mice. They checked bone formation and the fusing of sutures, the soft tissue that usually exists between bones in the skull, in the mice at 17.5 days after conception and at birth — 19 to 21 days after conception.

"It would be difficult, actually impossible, to observe and score the exact processes and timing of abnormal suture closure in humans as the disease is usually diagnosed after suture closure has occurred," said Richtsmeier. "With these mice, we can do this at the anatomical level by visualizing the sutures prenatally using micro-computed tomography — 3-D X-rays — or at the mechanistic level by using immunohistochemistry, or other approaches to see what the cells are doing as the sutures close."

The researchers found that both sets of mice differed in cranial formation from their littermates that were not carrying the mutation and that they differed from each other. They also found that the changes in suture closure in the head progressed from 17.5 days to birth, so that the heads of newborn mice looked very different at birth than they did when first imaged prenatally.

Apert syndrome also causes early closure of the sutures between bones in the face. Early fusion of bones of the skull and of the face makes it impossible for the head to grow in the typical fashion. The researchers found that the changed growth pattern contributes significantly to continuing skull and facial deformation that is initiated prenatally and increases over time.

"Currently, the only option for people with Apert syndrome is rather significant reconstructive surgery, sometimes successive planned surgeries that occur throughout infancy and childhood and into adulthood," said Richtsmeier. "These surgeries are necessary to restore function to some cranial structures and to provide a more typical morphology for some of the cranial features."

Using 3-D imaging, the researchers were able to estimate how the changes in the growth patterns caused by either of the two different mutations produced the head and facial deformities.

"If what we found in mice is analogous to the processes at work in humans with Apert syndrome, then we need to decide whether or not a surgical approach that we know is necessary is also sufficient," said Richtsmeier. "If it is not in at least some cases, then we need to be working towards therapies that can replace or further improve surgical outcomes."


Filed under apert syndrome cranial deformation 3d imaging FGFR2 genetic mutation neuroscience science

254 notes

Muscle-controlling Neurons Know When They Mess Up

Whether it is playing a piano sonata or acing a tennis serve, the brain needs to orchestrate precise, coordinated control over the body’s many muscles. Moreover, there needs to be some kind of feedback from the senses should any of those movements go wrong. Neurons that coordinate those movements, known as Purkinje cells, and ones that provide feedback when there is an error or unexpected sensation, known as climbing fibers, work in close concert to fine-tune motor control.

A team of researchers from the University of Pennsylvania and Princeton University has now begun to unravel the decades-spanning paradox concerning how this feedback system works.

At the heart of this puzzle is the fact that while climbing fibers send signals to Purkinje cells when there is an error to report, they also fire spontaneously, about once a second. There did not seem to be any mechanism by which individual Purkinje cells could detect a legitimate error signal within this deafening noise of random firing.

Using a microscopy technique that allowed the researchers to directly visualize the chemical signaling occurring between the climbing fibers and Purkinje cells of live, active mice, the Penn team has for the first time shown that there is a measurable difference between “true” and “false” signals.

This knowledge will be fundamental to future studies of fine motor control, particularly with regard to how movements can be improved with practice.

The research was conducted by Javier Medina, assistant professor in the Department of Psychology in Penn’s School of Arts and Sciences, and Farzaneh Najafi, a graduate student in the Department of Biology. They collaborated with postdoctoral fellow Andrea Giovannucci and associate professor Samuel S. H. Wang of Princeton University.

It was published in the journal Cell Reports.

The cerebellum is one of the brain’s motor control centers. It contains thousands of Purkinje cells, each of which collects information from elsewhere in the brain and funnels it down to the muscle-triggering motor neurons. Each Purkinje cell receives messages from a climbing fiber, a type of neuron that extends from the brain stem and sends feedback about the associated muscles.

“Climbing fibers are not just sensory neurons, however,” Medina said. “What makes climbing fibers interesting is that they don’t just say, ‘Something touched my face’; they say, ‘Something touched my face when I wasn’t expecting it.’ This is something that our brains do all the time, which explains why you can’t tickle yourself. There’s part of your brain that’s already expecting the sensation that will come from moving your fingers. But if someone else does it, the brain can’t predict it in the same way, and it is that unexpectedness that leads to the tickling sensation.”

Not only does the climbing fiber feedback system for unexpected sensations serve as an alert to potential danger — unstable footing, an unseen predator brushing by — it helps the brain improve when an intended action doesn’t go as planned.

“The sensation of muscles that don’t move in the way the Purkinje cells direct them to also counts as unexpected, which is why some people call climbing fibers ‘error cells,’” Medina said. “When you mess up your tennis swing, they’re saying to the Purkinje cells, ‘Stop! Change! What you’re doing is not right!’ That’s where they help you learn how to correct your movements.

“When the Purkinje cells get these signals from climbing fibers, they change by adding or tweaking the strength of the connections coming in from the rest of the brain to their dendrites. And because the Purkinje cells are so closely connected to the motor neurons, the changes to those synapses are going to result in changes to the movements that Purkinje cell controls.”

This is a phenomenon known as neuroplasticity, and it is fundamental for learning new behaviors or improving on them. The formation of new neural pathways in response to error signals from the climbing fibers allows the cerebellum to send better instructions to the motor neurons the next time the same action is attempted.

The paradox that faced neuroscientists was that these climbing fibers, like many other neurons, are spontaneously activated. About once every second, they send a signal to their corresponding Purkinje cell, whether or not there were any unexpected stimuli or errors to report.

“So if you’re the Purkinje cell,” Medina said, “how are you ever going to tell the difference between signals that are spontaneous, meaning you don’t need to change anything, and ones that really need to be paid attention to?”

Medina and his colleagues devised an experiment to test whether there was a measurable difference between legitimate and spontaneous signals from the climbing fibers. In their study, the researchers had mice walk on treadmills while their heads were kept stationary. This allowed the researchers to blow random puffs of air at their faces, causing them to blink, and to use a non-invasive microscopy technique to look at how the relevant Purkinje cells respond.

The technique, two-photon microscopy, uses an infrared laser and a fluorescent dye to look deep into living tissue, providing information on both structure and chemical composition. Neural signals are transmitted within neurons by changing calcium concentrations, so the researchers used this technique to measure the amount of calcium contained within the Purkinje cells in real time.

Because the random puffs of air were unexpected stimuli for the mice, the researchers could directly compare the differences between legitimate and spontaneous signals in the eyelid-related Purkinje cells that made the mice blink.

“What we have found is that the Purkinje cell fills with more calcium when its corresponding climbing fiber sends a signal associated with that kind of sensory input, rather than a spontaneous one,” Medina said. “This was a bit of a surprise for us because climbing fibers had been thought of as ‘all or nothing’ for more than 50 years now.”

The mechanism that allows individual Purkinje cells to differentiate between the two kinds of climbing fiber signals is an open question. These signals come in bursts, so the number and spacing of the electrical impulses from climbing fiber to Purkinje cell might be significant. Medina and his colleagues also suspect that another mechanism is at play: Purkinje cells might respond differently when a signal from a climbing fiber is synchronized with signals coming from elsewhere in the brain.

Whether either or both of these explanations are confirmed, the fact that individual Purkinje cells are able to distinguish when their corresponding motor neurons encounter an error must be taken into account in future studies of fine motor control. This understanding could lead to new research into the fundamentals of neuroplasticity and learning.

“Something that would be very useful for the brain is to have information not just about whether there was an error but how big the error was — whether the Purkinje cell needs to make a minor or major adjustment,” Medina said. “That sort of information would seem to be necessary for us to get very good at any kind of activity that requires precise control. Perhaps climbing fiber signals are not as ‘all-or-nothing’ as we all thought and can provide that sort of graded information.”
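The core finding — that evoked and spontaneous events differ in calcium amplitude and are therefore statistically separable — can be sketched in a toy classification exercise (hypothetical amplitudes and event counts, invented for illustration; they are not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical event amplitudes: sensory-evoked climbing-fiber signals
# are assumed to drive a larger mean calcium response in the Purkinje
# cell than spontaneous signals, with the same trial-to-trial spread.
spontaneous = rng.normal(loc=1.0, scale=0.25, size=500)  # dF/F, arbitrary units
evoked = rng.normal(loc=1.6, scale=0.25, size=500)

# a threshold halfway between the two means separates most events
threshold = (spontaneous.mean() + evoked.mean()) / 2
accuracy = ((evoked > threshold).mean() + (spontaneous <= threshold).mean()) / 2

print(f"decision threshold: {threshold:.2f}")
print(f"fraction of events classified correctly: {accuracy:.0%}")
```

If the amplitude distributions overlapped completely — the old "all-or-nothing" picture — no threshold could beat chance; a separation like the one sketched here is what makes a graded error signal readable.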

Muscle-controlling Neurons Know When They Mess Up

Whether it is playing a piano sonata or acing a tennis serve, the brain needs to orchestrate precise, coordinated control over the body’s many muscles. Moreover, there needs to be some kind of feedback from the senses should any of those movements go wrong. Neurons that coordinate those movements, known as Purkinje cells, and ones that provide feedback when there is an error or unexpected sensation, known as climbing fibers, work in close concert to fine-tune motor control.   

A team of researchers from the University of Pennsylvania and Princeton University has now begun to unravel the decades-spanning paradox concerning how this feedback system works.

At the heart of this puzzle is the fact that while climbing fibers send signals to Purkinje cells when there is an error to report, they also fire spontaneously, about once a second. There did not seem to be any mechanism by which individual Purkinje cells could detect a legitimate error signal from within this deafening noise of random firing. 

Using a microscopy technique that allowed the researchers to directly visualize the chemical signaling occurring between the climbing fibers and Purkinje cells of live, active mice, the Penn team has for the first time shown that there is a measurable difference between “true” and “false” signals.

This knowledge will be fundamental to future studies of fine motor control, particularly with regards to how movements can be improved with practice. 

The research was conducted by Javier Medina, assistant professor in the Department of Psychology in Penn’s School of Arts and Sciences, and Farzaneh Najafi, a graduate student in the Department of Biology. They collaborated with postdoctoral fellow Andrea Giovannucci and associate professor Samuel S. H. Wang of Princeton University.

It was published in the journal Cell Reports.

The cerebellum is one of the brain’s motor control centers. It contains thousands of Purkinje cells, each of which collects information from elsewhere in the brain and funnels it down to the muscle-triggering motor neurons. Each Purkinje cell receives messages from a climbing fiber, a type of neuron that extends from the brain stem and sends feedback about the associated muscles. 

“Climbing fibers are not just sensory neurons, however,” Medina said. “What makes climbing fibers interesting is that they don’t just say, ‘Something touched my face’; They say, ‘Something touched my face when I wasn’t expecting it.’ This is something that our brains do all the time, which explains why you can’t tickle yourself. There’s part of your brain that’s already expecting the sensation that will come from moving your fingers. But if someone else does it, the brain can’t predict it in the same way and it is that unexpectedness that leads to the tickling sensation.”

Not only does the climbing fiber feedback system for unexpected sensations serve as an alert to potential danger — unstable footing, an unseen predator brushing by — it helps the brain improve when an intended action doesn’t go as planned.    

“The sensation of muscles that don’t move in the way the Purkinje cells direct them to also counts as unexpected, which is why some people call climbing fibers ‘error cells,’” Medina said. “When you mess up your tennis swing, they’re saying to the Purkinje cells, ‘Stop! Change! What you’re doing is not right!’ That’s where they help you learn how to correct your movements.

“When the Purkinje cells get these signals from climbing fibers, they change by adding or tweaking the strength of the connections coming in from the rest of the brain to their dendrites. And because the Purkinje cells are so closely connected to the motor neurons, the changes to those synapses are going to result in changes to the movements that Purkinje cell controls.”

This is a phenomenon known as neuroplasticity, and it is fundamental for learning new behaviors or improving on them. Because new neural pathways form in response to error signals from the climbing fibers, the cerebellum can send better instructions to motor neurons the next time the same action is attempted.

The paradox that faced neuroscientists was that these climbing fibers, like many other neurons, are spontaneously activated. About once every second, they send a signal to their corresponding Purkinje cell, whether or not there are any unexpected stimuli or errors to report.

“So if you’re the Purkinje cell,” Medina said, “how are you ever going to tell the difference between signals that are spontaneous, meaning you don’t need to change anything, and ones that really need to be paid attention to?”

Medina and his colleagues devised an experiment to test whether there was a measurable difference between legitimate and spontaneous signals from the climbing fibers. In their study, the researchers had mice walk on treadmills while their heads were kept stationary. This allowed the researchers to blow random puffs of air at their faces, causing them to blink, and to use a non-invasive microscopy technique to look at how the relevant Purkinje cells respond.

The technique, two-photon microscopy, uses an infrared laser and a fluorescent dye to look deep into living tissue, providing information on both structure and chemical composition. Neural signals are transmitted within neurons by changing calcium concentrations, so the researchers used this technique to measure the amount of calcium contained within the Purkinje cells in real time.

Because the random puffs of air were unexpected stimuli for the mice, the researchers could directly compare the differences between legitimate and spontaneous signals in the eyelid-related Purkinje cells that made the mice blink.

“What we have found is that the Purkinje cell fills with more calcium when its corresponding climbing fiber sends a signal associated with that kind of sensory input, rather than a spontaneous one,” Medina said. “This was a bit of a surprise for us because climbing fibers had been thought of as ‘all or nothing’ for more than 50 years now.”
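The comparison at the heart of the finding can be sketched in a few lines. This is a hypothetical analysis, not the study's actual data or pipeline: the amplitude values below are invented for illustration, standing in for calcium-transient sizes measured by two-photon imaging.

```python
from statistics import mean

# Hypothetical calcium-transient amplitudes (arbitrary units) from one
# Purkinje cell; values are illustrative, not taken from the study.
evoked = [2.1, 2.4, 1.9, 2.6, 2.3]       # climbing-fiber signals after an air puff
spontaneous = [1.2, 1.0, 1.4, 1.1, 1.3]  # climbing-fiber signals with no stimulus

# The study's key comparison: sensory-evoked climbing-fiber signals drive
# a larger calcium response than spontaneous ones in the same cell.
print(f"mean evoked:      {mean(evoked):.2f}")
print(f"mean spontaneous: {mean(spontaneous):.2f}")
print(f"difference:       {mean(evoked) - mean(spontaneous):.2f}")
```

If climbing fibers were truly "all or nothing," the two means would be indistinguishable; a consistent positive difference is what made the result a surprise.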

The mechanism that allows individual Purkinje cells to differentiate between the two kinds of climbing fiber signals is an open question. These signals come in bursts, so the number and spacing of the electrical impulses from climbing fiber to Purkinje cell might be significant. Medina and his colleagues also suspect that another mechanism is at play: Purkinje cells might respond differently when a signal from a climbing fiber is synchronized with signals coming from elsewhere in the brain.

Whether one or both of these explanations is confirmed, the fact that individual Purkinje cells can distinguish when their corresponding muscles encounter an error must be taken into account in future studies of fine motor control. This understanding could lead to new research into the fundamentals of neuroplasticity and learning.

“Something that would be very useful for the brain is to have information not just about whether there was an error but how big the error was - whether the Purkinje cell needs to make a minor or major adjustment,” Medina said. “That sort of information would seem to be necessary for us to get very good at any kind of activity that requires precise control. Perhaps climbing fiber signals are not as ‘all-or-nothing’ as we all thought and can provide that sort of graded information.”

Filed under purkinje cells motor movement neuroplasticity cerebellum motor neurons neuroscience science

268 notes

Researchers Identify Brain Differences Linked to Insomnia

Johns Hopkins researchers report that people with chronic insomnia show more plasticity and activity than good sleepers in the part of the brain that controls movement.

"Insomnia is not a nighttime disorder," says study leader Rachel E. Salas, M.D., an assistant professor of neurology at the Johns Hopkins University School of Medicine. "It’s a 24-hour brain condition, like a light switch that is always on. Our research adds information about differences in the brain associated with it."


Salas and her team, reporting in the March issue of the journal Sleep, found that the motor cortex in those with chronic insomnia was more adaptable to change - more plastic - than in a group of good sleepers. They also found more “excitability” among neurons in the same region of the brain among those with chronic insomnia, adding evidence to the notion that insomniacs are in a constant state of heightened information processing that may interfere with sleep.

Researchers say they hope their study opens the door to better diagnosis and treatment of insomnia, the most common - and often intractable - sleep disorder, which affects an estimated 15 percent of the United States population.

To conduct the study, Salas and her colleagues from the Department of Psychiatry and Behavioral Sciences and the Department of Physical Medicine and Rehabilitation used transcranial magnetic stimulation (TMS), which painlessly and noninvasively delivers electromagnetic currents to precise locations in the brain and can temporarily and safely disrupt the function of the targeted area. TMS is approved by the U.S. Food and Drug Administration to treat some patients with depression by stimulating nerve cells in the region of the brain involved in mood control.

The study included 28 adult participants - 18 who had suffered from insomnia for a year or more and 10 considered good sleepers with no reports of trouble sleeping. Each participant was outfitted with electrodes on their dominant thumb, as well as an accelerometer to measure the speed and direction of thumb movements.

The researchers then gave each subject 65 electrical pulses using TMS, stimulating areas of the motor cortex and watching for involuntary thumb movements linked to the stimulation. Subsequently, the researchers trained each participant for 30 minutes, teaching them to move their thumb in the opposite direction of the original involuntary movement. They then introduced the electrical pulses once again.

The idea was to measure the extent to which each participant’s brain could learn to produce involuntary thumb movements in the newly trained direction. The more the thumb moved in the new direction, the more plastic that participant’s motor cortex was judged to be.
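One simple way to turn that idea into a number is to score how closely each post-training involuntary movement aligns with the trained direction. This is a hypothetical sketch, not the study's actual scoring method: the angles below are invented, and the cosine-projection index is just one plausible way to summarize "movement in the trained direction."

```python
from math import cos, radians

# Hypothetical thumb-movement directions (degrees) after TMS pulses,
# measured relative to the newly trained direction (0 = fully retrained).
# Values are illustrative, not from the study.
post_training_angles = [15, 30, 5, 45, 10, 20]

# A simple plasticity index: the mean projection of each movement onto
# the trained direction. Values near 1 suggest a highly plastic motor
# cortex; values near 0 suggest the training had little effect.
projections = [cos(radians(a)) for a in post_training_angles]
plasticity_index = sum(projections) / len(projections)
print(f"plasticity index: {plasticity_index:.2f}")
```

Under this kind of measure, the insomnia group's movements would cluster closer to the trained direction than the good sleepers' - the opposite of what the researchers expected.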

Because lack of sleep at night has been linked to decreased memory and concentration during the day, Salas and her colleagues suspected that the brains of good sleepers could be more easily retrained. The results, however, were the opposite. The researchers found much more plasticity in the brains of those with chronic insomnia.

Salas says the origins of increased plasticity in insomniacs are unclear, and it is not known whether the increase is the cause of insomnia. It is also unknown whether this increased plasticity is beneficial, the source of the problem or part of a compensatory mechanism to address the consequences of sleep deprivation associated with chronic insomnia. Patients with chronic phantom pain after limb amputation and with dystonia, a neurological movement disorder in which sustained muscle contractions cause twisting and repetitive movements, also have increased brain plasticity in the motor cortex, but to detrimental effect.

Salas says it is possible that the dysregulation of arousal described in chronic insomnia - increased metabolism, increased cortisol levels, constant worrying - might be linked to increased plasticity in some way. Diagnosing insomnia is based solely on what the patient reports to the provider; there is no objective test. Neither is there a single treatment that works for all people with insomnia. Treatment can be hit-or-miss for many patients, Salas says.

She says the study shows that TMS may be able to play a role in diagnosing insomnia and - more importantly - may eventually prove to be a treatment for it, perhaps by reducing excitability.

(Source: hopkinsmedicine.org)

Filed under insomnia plasticity motor cortex sleep transcranial magnetic stimulation neuroscience science
