Neuroscience

Articles and news from the latest research reports.

74 notes


Experimental stroke drug also shows promise for people with Lou Gehrig’s disease

Keck School of Medicine of USC neuroscientists have unlocked a piece of the puzzle in the fight against Lou Gehrig’s disease, a debilitating neurological disorder that robs people of their motor skills. Their findings appear in the March 3, 2014, online edition of the Proceedings of the National Academy of Sciences of the United States of America, the official scientific journal of the U.S. National Academy of Sciences.

"We know that both people and transgenic rodents afflicted with this disease develop spontaneous breakdown of the blood-spinal cord barrier, but how these microscopic lesions affect the development of the disease has been unclear," said Berislav V. Zlokovic, M.D., Ph.D., the study’s principal investigator and director of the Zilkha Neurogenetic Institute at USC. "In this study, we show that early motor neuron dysfunction related to the disease in mice is proportional to the degree of damage to the blood-spinal cord barrier and that restoring the integrity of the barrier delays motor neuron degeneration. We are hopeful that we can apply these findings to the corresponding disease mechanism in people. "

In this study, Zlokovic and colleagues found that an experimental drug now being studied in human stroke patients appears to protect the blood-spinal cord barrier’s integrity in mice and delay motor neuron impairment and degeneration. The drug, an activated protein C analog called 3K3A-APC, was developed by Zlokovic’s start-up biotechnology company, ZZ Biotech.

Lou Gehrig’s disease, also called amyotrophic lateral sclerosis, or ALS, attacks motor neurons, which are cells that control the muscles. The progressive degeneration of the motor neurons in ALS eventually leads to paralysis and difficulty breathing, eating and swallowing.

According to The ALS Association, approximately 15 people in the United States are diagnosed with ALS every day. It is estimated that as many as 30,000 Americans live with the disease. Most people who develop ALS are between the ages of 40 and 70, with an average age of 55 upon diagnosis. Life expectancy of an ALS patient averages about two to five years from the onset of symptoms.

The causes of ALS are not completely understood, and no cure has yet been found. The only Food and Drug Administration-approved drug, riluzole, has been shown to prolong life by just two to three months. There are, however, devices and therapies that can manage the symptoms of the disease, helping people maintain as much independence as possible and prolonging survival.

Filed under ALS Lou Gehrig's disease motor neurons neurodegeneration medicine science

153 notes


Research reveals first glimpse of a brain circuit that helps experience to shape perception

Odors have a way of connecting us with moments buried deep in our past. Maybe it is a whiff of your grandmother’s perfume that transports you back decades. With that single breath, you are suddenly in her living room, listening as the adults banter about politics. The experiences that we accumulate throughout life build expectations that are associated with different scents. These expectations are known to influence how the brain uses and stores sensory information. But researchers have long wondered how the process works in reverse: how do our memories shape the way sensory information is collected?

In work published today in Nature Neuroscience, scientists from Cold Spring Harbor Laboratory (CSHL) demonstrate for the first time a way to observe this process in awake animals. The team, led by Assistant Professor Stephen Shea, was able to measure the activity of a group of inhibitory neurons that links the odor-sensing area of the brain with brain areas responsible for thought and cognition. This connection provides feedback so that memories and experiences can alter the way smells are interpreted. 

The inhibitory neurons that form the link are known as granule cells. They are found in the core of the olfactory bulb, the area of the mouse brain responsible for receiving odor information from the nose. Granule cells in the olfactory bulb receive inputs from areas deep within the brain involved in memory formation and cognition. Despite their importance, it has been almost impossible to collect information about how granule cells function. They are extremely small and, in the past, scientists have only been able to measure their activity in anesthetized animals. But the animal must be awake and conscious in order for experiences to alter sensory interpretation. Shea worked with the study’s lead authors, Brittany Cazakoff, a graduate student in CSHL’s Watson School of Biological Sciences, and Billy Lau, Ph.D., a postdoctoral fellow. They engineered a system to observe granule cells for the first time in awake animals.

Granule cells relay the information they receive from neurons involved in memory and cognition back to the olfactory bulb. There, the granule cells inhibit the neurons that receive sensory inputs. In this way, “the granule cells provide a way for the brain to ‘talk’ to the sensory information as it comes in,” explains Shea. “You can think of these cells as conduits which allow experiences to shape incoming data.”

Why might an animal want to inhibit or block out specific parts of a stimulus, like an odor? Every scent is made up of hundreds of different chemicals, and “granule cells might help animals to emphasize the important components of complex mixtures,” says Shea. For example, an animal might have learned through experience to associate a particular scent, such as a predator’s urine, with danger. But each encounter with the smell is likely to be different. Maybe it is mixed with the smell of pine on one occasion and seawater on another. Granule cells provide the brain with an opportunity to filter away the less important odors and to focus sensory neurons only on the salient part of the stimulus. 
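
The filtering idea above can be caricatured in a few lines of code (a deliberately simplified sketch, not the circuit model from the study; the channel names and intensity values are invented for illustration): represent an odour encounter as a vector of channel intensities, and let experience-driven inhibition subtract an expected background so the salient component stands out.

```python
# Toy sketch of experience-driven inhibition highlighting a salient odour.
# All channel names and intensity values here are invented for illustration.

def filter_input(mixture, learned_background):
    """Inhibitory feedback subtracts the expected background, floored at zero."""
    return {ch: max(0.0, level - learned_background.get(ch, 0.0))
            for ch, level in mixture.items()}

# An encounter: predator urine mixed with familiar pine and seawater smells.
encounter = {"predator_urine": 0.9, "pine": 0.8, "seawater": 0.1}
# What experience says to expect in this environment.
background = {"pine": 0.8, "seawater": 0.2}

salient = filter_input(encounter, background)
# The unexpected predator signal survives; the familiar background is suppressed.
assert max(salient, key=salient.get) == "predator_urine"
```

In the real circuit the "subtraction" is synaptic inhibition of the olfactory bulb’s output neurons rather than literal arithmetic, but the sketch captures why feedback shaped by memory can emphasize the salient part of a stimulus.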

Now that it is possible to measure the activity of granule cells in awake animals, Shea and his team are eager to look at how sensory information changes when the expectations and memories associated with an odor change. “The interplay between a stimulus and our expectations is truly the merger of ourselves with the world. It is exciting to see just how the brain mediates that interaction,” says Shea.

Filed under olfactory bulb granule cells neurons memory neuroscience science

238 notes


A sparse memory is a precise memory

Particular smells can be incredibly evocative and bring back very clear, vivid memories.

Maybe you find the smell of freshly baked apple pie is forever associated with warm memories of grandma’s kitchen. Perhaps cut grass means long school holidays and endless football kickabouts. Or maybe catching the scent of certain medicines sees you revisit a bout of childhood illness.

What’s remarkable about the power of these ‘associative memories’ – connecting sensory information and past experiences – is just how precise they are. How do we and other animals attach distinct memories to the millions of possible smells we encounter?

There’s a clear advantage in doing so: accurately discriminating smells indicating dangers while making no mistakes in following those that are advantageous. But it’s a huge information processing challenge.

Researchers at Oxford University’s Centre for Neural Circuits and Behaviour have discovered that a key to forming distinct associative memories lies in how information from the senses is encoded in the brain.

Their study in fruit flies for the first time gives experimental confirmation of a theory put forward in the 1960s which suggested sensory information is encoded ‘sparsely’ in the brain.

The idea is that we have a huge population of nerve cells in many of our higher brain centres. But only a very few neurons fire in response to any particular sensation – be it smell, sound or vision. This would allow the brain to discriminate accurately between even very similar smells and sensations.

'This “sparse” coding means that neurons that respond to one odour don't overlap much with neurons that respond to other odours, which makes it easier for the brain to tell odours apart even if they are very similar,' explains Dr Andrew Lin, the lead author of the study published in Nature Neuroscience.
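
A quick numerical sketch (a toy model of my own with arbitrary parameters, not the study’s) makes the overlap argument concrete: if each of N neurons responds to a given odour independently with probability p, two unrelated odours share about a fraction p² of the population, so lowering p shrinks the overlap quadratically.

```python
import random

random.seed(0)

N = 2000  # toy population size

def random_code(p):
    """Binary response pattern: each neuron fires with probability p."""
    return {i for i in range(N) if random.random() < p}

def overlap(a, b):
    """Fraction of the whole population active in both patterns."""
    return len(a & b) / N

sparse_a, sparse_b = random_code(0.05), random_code(0.05)  # sparse: 5% fire
dense_a, dense_b = random_code(0.5), random_code(0.5)      # dense: 50% fire

# Expected overlaps: ~0.05^2 = 0.25% versus ~0.5^2 = 25% of the population.
assert overlap(sparse_a, sparse_b) < overlap(dense_a, dense_b)
```

With less shared activity between codes, a downstream readout can attach a memory to one odour without dragging in its neighbours.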

While previous studies have indicated that sensory information is encoded sparsely in the brain, there’s been no evidence that this arrangement is beneficial to storing distinct memories and acting on them.

'Sparse coding has been observed in the brains of other organisms, and there are compelling theoretical arguments for its importance,' says Professor Gero Miesenböck, in whose laboratory the research was performed. 'But until now it hasn’t been possible experimentally to link sparse coding with behaviour.'

In their new work, the researchers demonstrated that if they interfered with the sparse coding in fruit flies – if they ‘de-sparsened’ odour representations in the neurons that store associative memories – the flies lost the ability to form distinct memories for similar smells.

The flies are normally able to discriminate between two very similar odours, learning to avoid one and head for the other. This is controlled by the neurons that store associative memories, called Kenyon cells. There’s a separate nerve cell that acts as a control system to dampen down the activity of the Kenyon cells, preventing too many of them from firing for any particular odour.

Dr Lin and colleagues showed that if this single nerve cell is blocked, the odour coding in Kenyon cells becomes less sparse and less able to discriminate between smells. The flies end up attaching the same memory to similar, yet different, odours.
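
The de-sparsening result can be mimicked with a toy model (an illustrative sketch only; the cell counts, random weights and mechanism are simplified far beyond the real fly circuit): give each model “Kenyon cell” a random weighted drive from the odour, and let global inhibition keep only the k most strongly driven cells active. Removing the inhibition lets every cell above a low threshold fire, and the codes for two similar odours blur together.

```python
import random

random.seed(42)

N_CELLS, N_INPUTS = 500, 50

# Random projection weights, standing in for input synapses onto Kenyon cells.
weights = [[random.gauss(0.0, 1.0) for _ in range(N_INPUTS)]
           for _ in range(N_CELLS)]

def drives(odour):
    """Summed synaptic drive each cell receives for this odour."""
    return [sum(w * x for w, x in zip(row, odour)) for row in weights]

def code_with_inhibition(odour, k=25):
    """Global inhibition intact: only the k most-driven cells fire (sparse)."""
    d = drives(odour)
    return set(sorted(range(N_CELLS), key=lambda i: d[i], reverse=True)[:k])

def code_without_inhibition(odour, threshold=0.0):
    """Inhibition blocked: every cell above a fixed threshold fires (dense)."""
    return {i for i, v in enumerate(drives(odour)) if v > threshold}

def jaccard(a, b):
    """Set similarity: 1.0 means identical codes, 0.0 means disjoint."""
    return len(a & b) / len(a | b)

def similar_pair():
    """An odour and a slightly perturbed version of it."""
    o1 = [random.random() for _ in range(N_INPUTS)]
    o2 = [x + random.gauss(0.0, 0.05) for x in o1]
    return o1, o2

pairs = [similar_pair() for _ in range(5)]
sparse_sim = sum(jaccard(code_with_inhibition(a), code_with_inhibition(b))
                 for a, b in pairs) / len(pairs)
dense_sim = sum(jaccard(code_without_inhibition(a), code_without_inhibition(b))
                for a, b in pairs) / len(pairs)

# With inhibition blocked, codes for similar odours overlap more, so they
# are harder to tell apart - echoing the flies' merged memories.
assert dense_sim > sparse_sim
```

Averaging over several odour pairs keeps the comparison stable; the qualitative outcome, not the particular numbers, is the point.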

Sparse coding does turn out to be important for sensory memories and our ability to act on them. Although the research was carried out in fruit flies, the scientists say sparse coding is likely to play a similar role in human memory.

Although sparse coding in the brain would seem to require much greater numbers of nerve cells, that cost appears to be worth paying for the ability to form distinct associative memories and act on them – thankfully. A life of experiences and memories is so much fuller as a result.

Filed under memory sensory memories nerve cells sparse coding fruit flies neuroscience science

184 notes

Immune System Has Dramatic Impact on Children’s Brain Development

New research from the University of Virginia School of Medicine has revealed the dramatic effect the immune system has on the brain development of young children. The findings suggest new and better ways to prevent developmental impairment in children in developing countries, helping to free them from a cycle of poverty and disease and to attain their full potential.

U.Va. researchers working in Bangladesh determined that the more days infants suffered fever, the worse they performed on developmental tests at 12 and 24 months. They also found that elevated levels of inflammation-causing proteins in the blood were associated with worse performance, while higher levels of inflammation-fighting proteins were associated with improved performance.

“The problem we sought to address was why millions of young children in low- and middle-income countries are not attaining their full developmental potential,” said lead author Nona Jiang, who performed the research while an undergraduate student in the laboratory of Dr. William Petri Jr. “Early childhood is an absolutely critical time of brain development, and it’s also a time when these children are suffering from recurrent infections. Therefore, we asked whether these infections are contributing to the impaired development we observe in children growing up in adversity.”

Their findings offer a potential explanation for the developmental impairment seen in children living in poverty. They also offer important direction for doctors attempting to combat the problem: By preventing inflammation, physicians may be able to enhance children’s mental ability for a lifetime.

“We are interested in examining factors that predict healthy child development around the world,” said researcher Dr. Rebecca Scharf of U.Va.’s Department of Pediatrics. “By studying which early childhood influences are associated with hindrances to growth and learning, we will know better where to target interventions for the critical period of early childhood.”

In addition, the finding illuminates the complex relationship between the immune system and cognitive development, an increasingly important area of research that U.Va. has helped pioneer.

“This is a very interesting study, showing, probably for the first time, the link between peripheral cytokine levels and improved cognitive development in humans,” said Jonathan Kipnis, a professor of neuroscience and director of U.Va.’s Center for Brain Immunology & Glia. “What is of the most interest and of a great novelty is the fact that [inflammation-fighting cytokines] have positive correlation with cognitive function. My lab published results showing that these IL-4 cytokines are required for proper brain function in mice, and this work from Dr. Petri’s lab completely independently shows similar correlation in humans.

“I hope the scientific community will appreciate how dramatic the effects of the immune system are on the central nervous system and will invest more efforts in studying and better understanding these complex and intriguing interactions between the body’s two major systems.”

(Source: news.virginia.edu)

Filed under brain development cytokines immune system nervous system neuroscience science

159 notes


Researchers reveal the dual role of brain glycogen

In 2007, in an article published in Nature Neuroscience, scientists at the Institute for Research in Biomedicine (IRB Barcelona) headed by Joan Guinovart, an authority on glycogen metabolism, suggested that in Lafora Disease (LD), a rare and fatal neurodegenerative condition that affects adolescents, neurons die as a result of the accumulation of glycogen—chains of glucose. They went on to propose that this accumulation is the root cause of this disease.

The breakthrough of this paper was two-sided: first, the researchers established a possible cause of LD and therefore were able to point to a plausible therapeutic target, and second, they discovered that neurons have the capacity to store glycogen—an observation that had never been made—and that this accumulation was toxic.

Other reports defended a different theory, upholding that the glycogen deposits were not the cause of the neurodegeneration but a consequence of another, more important, cell imbalance, such as a downregulation of autophagy—the cell’s recycling and cleaning programme. In several articles, Guinovart’s “Metabolic engineering and diabetes therapy” group has recently brought to light evidence of the toxicity of glycogen deposits for LD patients, and has now provided irrefutable data.

In an article published at the beginning of February in Human Molecular Genetics, with research associate Jordi Duran as first author, the scientists show that in LD the accumulation of glycogen directly causes neuronal death and triggers cell imbalances such as a decrease in autophagy and synaptic failure. All these alterations lead to the symptoms of LD, such as epilepsy.

Glycogen, a Trojan horse for neurons?

There was still a greater mystery to be solved. Was glycogen truly a Trojan horse for neurons, as apparently established in the article in Nature Neuroscience? That is to say, was the accumulation of glycogen always fatal for cells, thus explaining why their glycogen synthesis machinery is silenced? The inevitable question was then why these cells had such machinery at all.

In another paper published in Journal of Cerebral Blood Flow & Metabolism, part of the Nature Group, the researchers provided the first evidence that neurons constantly store glycogen but in a different way: accumulating small amounts and using it as quickly as it becomes available. In this regard, the scientists set up new, more sensitive, analytical techniques to confirm that the machinery responsible for glycogen synthesis and degradation existed. In summary, they showed that, in small amounts, glycogen is beneficial for neurons.

“For example, while the liver accumulates glycogen in large amounts and releases it slowly to maintain blood sugar levels, above all when we sleep, neurons synthesize and degrade small amounts of this polysaccharide continuously. They do not use it as an energy store but as a rapid and small, but constant, source of energy,” explains Guinovart, also senior professor at the University of Barcelona (UB).

To observe the action of glycogen, the scientists forced cultured mouse neurons to survive under oxygen depletion. They demonstrated that the first cells to die were those in which the capacity to synthesise glycogen had been removed. The same experiments were performed in collaboration with Marco Milán’s “Development and growth control” group in the in vivo model of the fruit fly Drosophila melanogaster. These tests led to the same conclusions.

The researchers postulated that glycogen acts as a lifeguard under oxygen depletion, a condition that leads the brain to shut down, that often occurs at birth and in cerebral infarctions in adults, and that can have severe consequences, such as cerebral palsy.

“It is the first function of glycogen that we have discovered in neurons, but we still have to identify its function in normal conditions and establish how the mechanism works,” says Jordi Duran. Postdoctoral researcher Isabel Saez is the first author of the article out today, which involved the collaboration of ICREA Research Professor Marco Milán’s lab.

The beneficial and toxic roles of brain glycogen are currently the focus of major research lines in Joan Guinovart’s lab.

Filed under glycogen lafora disease neurons neurodegeneration autophagy epilepsy neuroscience science

82 notes

3-D imaging sheds light on Apert Syndrome development
Three-dimensional imaging of two different mouse models of Apert Syndrome shows that cranial deformation begins before birth and continues, worsening with time, according to a team of researchers who studied mice to better understand and treat the disorder in humans.
Apert Syndrome is caused by mutations in FGFR2 — fibroblast growth factor receptor 2 — a gene that normally produces a protein involved in cell division, regulation of cell growth and maturation, formation of blood vessels, wound healing, and embryonic development. Certain mutations cause the bones of the skull to fuse together early, beginning in the fetus. These mutations also cause mid-facial deformation and a variety of neural, limb and tissue malformations, and may lead to cognitive impairment.
Understanding the growth pattern of the head in an individual, the ability to anticipate where the bones will fuse and grow next, and using simulations “could contribute to improved patient-centered outcomes either through changes in surgical approach, or through more realistic modeling and expectation of surgical outcome,” the researchers said in today’s (Feb. 28) issue of BMC Developmental Biology.
Joan T. Richtsmeier, Distinguished Professor of Anthropology at Penn State, and her team looked at two sets of mice, each carrying a different mutation that causes Apert Syndrome in humans and similar cranial problems in mice. They checked bone formation and the fusing of sutures, the soft tissue that usually lies between the bones of the skull, in the mice at 17.5 days after conception and at birth (19 to 21 days after conception).
"It would be difficult, actually impossible, to observe and score the exact processes and timing of abnormal suture closure in humans as the disease is usually diagnosed after suture closure has occurred," said Richtsmeier. "With these mice, we can do this at the anatomical level by visualizing the sutures prenatally using micro-computed tomography — 3-D X-rays — or at the mechanistic level by using immunohistochemistry, or other approaches to see what the cells are doing as the sutures close."
The researchers found that both sets of mice differed in cranial formation from their littermates that were not carrying the mutation and that they differed from each other. They also found that the changes in suture closure in the head progressed from 17.5 days to birth, so that the heads of newborn mice looked very different at birth than they did when first imaged prenatally.
Apert syndrome also causes early closure of the sutures between bones in the face. Early fusion of bones of the skull and of the face makes it impossible for the head to grow in the typical fashion. The researchers found that the changed growth pattern contributes significantly to continuing skull deformation and facial deformation that is initiated prenatally and increases over time.
"Currently, the only option for people with Apert syndrome is rather significant reconstructive surgery, sometimes successive planned surgeries that occur throughout infancy and childhood and into adulthood," said Richtsmeier. "These surgeries are necessary to restore function to some cranial structures and to provide a more typical morphology for some of the cranial features."
Using 3-D imaging, the researchers were able to estimate how the changes in the growth patterns caused by either of the two different mutations produced the head and facial deformities.
"If what we found in mice is analogous to the processes at work in humans with Apert syndrome, then we need to decide whether or not a surgical approach that we know is necessary is also sufficient," said Richtsmeier. "If it is not in at least some cases, then we need to be working towards therapies that can replace or further improve surgical outcomes."

Filed under apert syndrome cranial deformation 3d imaging FGFR2 genetic mutation neuroscience science

254 notes

Muscle-controlling Neurons Know When They Mess Up 
Whether it is playing a piano sonata or acing a tennis serve, the brain needs to orchestrate precise, coordinated control over the body’s many muscles. Moreover, there needs to be some kind of feedback from the senses should any of those movements go wrong. Neurons that coordinate those movements, known as Purkinje cells, and ones that provide feedback when there is an error or unexpected sensation, known as climbing fibers, work in close concert to fine-tune motor control.   
A team of researchers from the University of Pennsylvania and Princeton University has now begun to unravel the decades-spanning paradox concerning how this feedback system works.
At the heart of this puzzle is the fact that while climbing fibers send signals to Purkinje cells when there is an error to report, they also fire spontaneously, about once a second. There did not seem to be any mechanism by which individual Purkinje cells could detect a legitimate error signal from within this deafening noise of random firing. 
Using a microscopy technique that allowed the researchers to directly visualize the chemical signaling occurring between the climbing fibers and Purkinje cells of live, active mice, the Penn team has for the first time shown that there is a measurable difference between “true” and “false” signals.
This knowledge will be fundamental to future studies of fine motor control, particularly with regards to how movements can be improved with practice. 
The research was conducted by Javier Medina, assistant professor in the Department of Psychology in Penn’s School of Arts and Sciences, and Farzaneh Najafi, a graduate student in the Department of Biology. They collaborated with postdoctoral fellow Andrea Giovannucci and associate professor Samuel S. H. Wang of Princeton University.
It was published in the journal Cell Reports.
The cerebellum is one of the brain’s motor control centers. It contains thousands of Purkinje cells, each of which collects information from elsewhere in the brain and funnels it down to the muscle-triggering motor neurons. Each Purkinje cell receives messages from a climbing fiber, a type of neuron that extends from the brain stem and sends feedback about the associated muscles. 
"Climbing fibers are not just sensory neurons, however," Medina said. "What makes climbing fibers interesting is that they don’t just say, ‘Something touched my face’; they say, ‘Something touched my face when I wasn’t expecting it.’ This is something that our brains do all the time, which explains why you can’t tickle yourself. There’s part of your brain that’s already expecting the sensation that will come from moving your fingers. But if someone else does it, the brain can’t predict it in the same way and it is that unexpectedness that leads to the tickling sensation."
Not only does the climbing fiber feedback system for unexpected sensations serve as an alert to potential danger — unstable footing, an unseen predator brushing by — it helps the brain improve when an intended action doesn’t go as planned.    
“The sensation of muscles that don’t move in the way the Purkinje cells direct them to also counts as unexpected, which is why some people call climbing fibers ‘error cells,’” Medina said. “When you mess up your tennis swing, they’re saying to the Purkinje cells, ‘Stop! Change! What you’re doing is not right!’ That’s where they help you learn how to correct your movements.
“When the Purkinje cells get these signals from climbing fibers, they change by adding or tweaking the strength of the connections coming in from the rest of the brain to their dendrites. And because the Purkinje cells are so closely connected to the motor neurons, the changes to those synapses are going to result in changes to the movements that Purkinje cell controls.”
This is a phenomenon known as neuroplasticity, and it is fundamental for learning new behaviors or improving on them. The formation of new neural pathways in response to error signals from the climbing fibers allows the cerebellum to send better instructions to the motor neurons the next time the same action is attempted.
The paradox that faced neuroscientists was that these climbing fibers, like many other neurons, are spontaneously activated. About once every second, they send a signal to their corresponding Purkinje cell, whether or not there were any unexpected stimuli or errors to report.
“So if you’re the Purkinje cell,” Medina said, “how are you ever going to tell the difference between signals that are spontaneous, meaning you don’t need to change anything, and ones that really need to be paid attention to?”
Medina and his colleagues devised an experiment to test whether there was a measurable difference between legitimate and spontaneous signals from the climbing fibers. In their study, the researchers had mice walk on treadmills while their heads were kept stationary. This allowed the researchers to blow random puffs of air at their faces, causing them to blink, and to use a non-invasive microscopy technique to look at how the relevant Purkinje cells respond.
The technique, two-photon microscopy, uses an infrared laser and a fluorescent dye to look deep into living tissue, providing information on both structure and chemical composition. Neural signals are accompanied by changes in calcium concentration within neurons, so the researchers used this technique to measure the amount of calcium contained within the Purkinje cells in real time.
Because the random puffs of air were unexpected stimuli for the mice, the researchers could directly compare the differences between legitimate and spontaneous signals in the eyelid-related Purkinje cells that made the mice blink.
“What we have found is that the Purkinje cell fills with more calcium when its corresponding climbing fiber sends a signal associated with that kind of sensory input, rather than a spontaneous one,” Medina said. “This was a bit of a surprise for us because climbing fibers had been thought of as ‘all or nothing’ for more than 50 years now.”
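The finding suggests a simple way to think about separating the two kinds of events: if sensory-evoked climbing-fiber signals produce larger calcium transients than spontaneous ones, an amplitude threshold can sort them. The sketch below is purely illustrative; the event amplitudes and the 0.2 threshold are invented, not values from the study:

```python
# Illustrative only: separating climbing-fiber calcium events by transient size.
# The amplitudes and the 0.2 threshold are invented, not taken from the study.

def classify_events(amplitudes, threshold=0.2):
    """Label each calcium transient as 'evoked' (large) or 'spontaneous' (small)."""
    return ["evoked" if a >= threshold else "spontaneous" for a in amplitudes]

# Simulated amplitudes for six climbing-fiber events in one Purkinje cell.
events = [0.12, 0.31, 0.10, 0.29, 0.11, 0.33]
print(classify_events(events))
# ['spontaneous', 'evoked', 'spontaneous', 'evoked', 'spontaneous', 'evoked']
```

In practice the two amplitude distributions would overlap, which is why identifying the mechanism behind the difference matters for interpreting single events.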
The mechanism that allows individual Purkinje cells to differentiate between the two kinds of climbing fiber signals is an open question. These signals come in bursts, so the number and spacing of the electrical impulses from climbing fiber to Purkinje cell might be significant. Medina and his colleagues also suspect that another mechanism is at play: Purkinje cells might respond differently when a signal from a climbing fiber is synchronized with signals coming elsewhere from the brain.   
Whether either or both of these explanations are confirmed, the fact that individual Purkinje cells are able to distinguish when their corresponding muscle neurons encounter an error must be taken into account in future studies of fine motor control. This understanding could lead to new research into the fundamentals of neuroplasticity and learning.    
“Something that would be very useful for the brain is to have information not just about whether there was an error but how big the error was — whether the Purkinje cell needs to make a minor or major adjustment,” Medina said. “That sort of information would seem to be necessary for us to get very good at any kind of activity that requires precise control. Perhaps climbing fiber signals are not as ‘all-or-nothing’ as we all thought and can provide that sort of graded information.”

Filed under purkinje cells motor movement neuroplasticity cerebellum motor neurons neuroscience science

268 notes

Researchers Identify Brain Differences Linked to Insomnia

Johns Hopkins researchers report that people with chronic insomnia show more plasticity and activity than good sleepers in the part of the brain that controls movement.

"Insomnia is not a nighttime disorder," says study leader Rachel E. Salas, M.D., an assistant professor of neurology at the Johns Hopkins University School of Medicine. "It’s a 24-hour brain condition, like a light switch that is always on. Our research adds information about differences in the brain associated with it."

Salas and her team, reporting in the March issue of the journal Sleep, found that the motor cortex in those with chronic insomnia was more adaptable to change - more plastic - than in a group of good sleepers. They also found more “excitability” among neurons in the same region of the brain among those with chronic insomnia, adding evidence to the notion that insomniacs are in a constant state of heightened information processing that may interfere with sleep.

Researchers say they hope their study opens the door to better diagnosis and treatment of the most common and often intractable sleep disorder that affects an estimated 15 percent of the United States population.

To conduct the study, Salas and her colleagues from the Department of Psychiatry and Behavioral Sciences and the Department of Physical Medicine and Rehabilitation used transcranial magnetic stimulation (TMS), which painlessly and noninvasively delivers electromagnetic currents to precise locations in the brain and can temporarily and safely disrupt the function of the targeted area. TMS is approved by the U.S. Food and Drug Administration to treat some patients with depression by stimulating nerve cells in the region of the brain involved in mood control.

The study included 28 adult participants - 18 who suffered from insomnia for a year or more and 10 considered good sleepers with no reports of trouble sleeping. Each participant was outfitted with electrodes on their dominant thumb as well as an accelerometer to measure the speed and direction of the thumb.

The researchers then gave each subject 65 electrical pulses using TMS, stimulating areas of the motor cortex and watching for involuntary thumb movements linked to the stimulation. Subsequently, the researchers trained each participant for 30 minutes, teaching them to move their thumb in the opposite direction of the original involuntary movement. They then introduced the electrical pulses once again.

The idea was to measure the extent to which participants’ brains could be retrained so that the TMS-evoked thumb movements shifted toward the newly trained direction. The more the evoked movements followed the new direction, the more plastic the participant’s motor cortex was judged to be.
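One way this kind of readout could be turned into a number is to score plasticity as the fraction of TMS-evoked thumb movements that land near the newly trained direction. The sketch below is a hypothetical illustration; the angles, the 45-degree tolerance, and the scoring rule are assumptions, not the study’s actual analysis:

```python
# Hypothetical scoring sketch: fraction of TMS-evoked thumb movements within a
# tolerance of the newly trained direction. The angles, tolerance, and rule are
# invented for illustration; this is not the study's actual analysis.

def plasticity_score(movement_angles, trained_angle, tolerance=45.0):
    """Return the fraction of evoked movements within `tolerance` degrees of the trained direction."""
    def angular_diff(a, b):
        # Smallest angle between two directions, accounting for wrap-around at 360.
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)
    hits = sum(1 for a in movement_angles if angular_diff(a, trained_angle) <= tolerance)
    return hits / len(movement_angles)

# Five post-training evoked movements (degrees); the trained direction is 180,
# opposite a hypothetical original involuntary direction of 0.
angles = [170.0, 185.0, 10.0, 200.0, 175.0]
print(plasticity_score(angles, trained_angle=180.0))  # 0.8
```

A higher score after the 30-minute training session would correspond to a more plastic motor cortex under this toy metric.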

Because lack of sleep at night has been linked to decreased memory and concentration during the day, Salas and her colleagues suspected that the brains of good sleepers could be more easily retrained. The results, however, were the opposite. The researchers found much more plasticity in the brains of those with chronic insomnia.

Salas says the origins of increased plasticity in insomniacs are unclear, and it is not known whether the increase is the cause of insomnia. It is also unknown whether this increased plasticity is beneficial, the source of the problem or part of a compensatory mechanism to address the consequences of sleep deprivation associated with chronic insomnia. Patients with chronic phantom pain after limb amputation and with dystonia, a neurological movement disorder in which sustained muscle contractions cause twisting and repetitive movements, also have increased brain plasticity in the motor cortex, but to detrimental effect.

Salas says it is possible that the dysregulation of arousal described in chronic insomnia - increased metabolism, increased cortisol levels, constant worrying - might be linked to increased plasticity in some way. Diagnosing insomnia is based solely on what the patient reports to the provider; there is no objective test. Neither is there a single treatment that works for all people with insomnia. Treatment can be hit-or-miss for many patients, Salas says.

She says this study shows that TMS may be able to play a role in diagnosing insomnia, and more importantly, she says, potentially prove to be a treatment for insomnia, perhaps through reducing excitability.

(Source: hopkinsmedicine.org)

Filed under insomnia plasticity motor cortex sleep transcranial magnetic stimulation neuroscience science

119 notes

Scientists wake up to causes of sleep disruption in Alzheimer’s disease
Being awake at night and dozing during the day can be a distressing early symptom of Alzheimer’s disease, but how the disease disrupts our biological clocks to cause these symptoms has remained elusive.
Now, scientists from Cambridge have discovered that in fruit flies with Alzheimer’s the biological clock is still ticking but has become uncoupled from the sleep-wake cycle it usually regulates. The findings – published in Disease Models & Mechanisms – could help develop more effective ways to improve sleep patterns in people with the disease.
People with Alzheimer’s often have poor biological rhythms, something that is a burden for both patients and their carers. Periods of sleep become shorter and more fragmented, resulting in periods of wakefulness at night and snoozing during the day. They can also become restless and agitated in the late afternoon and early evening, something known as ‘sundowning’.
Biological clocks go hand in hand with life, and are found in everything from single celled organisms to fruit flies and humans. They are vital because they allow organisms to synchronise their biology to the day-night changes in their environments.
Until now, however, it has been unclear how Alzheimer’s disrupts the biological clock. According to Dr Damian Crowther of Cambridge’s Department of Genetics, one of the study’s authors: “We wanted to know whether people with Alzheimer’s disease have a poor behavioural rhythm because they have a clock that’s stopped ticking or they have stopped responding to the clock.”
The team worked with fruit flies – a key model organism for studying Alzheimer’s. Evidence suggests that the A-beta peptide, a protein fragment, drives at least the initial stages of the disease in humans. This has been replicated in fruit flies by introducing the human gene that produces this peptide.
Taking a group of healthy flies and a group with this feature of Alzheimer’s, the researchers studied sleep-wake patterns in the flies, and how well their biological clocks were working.
They measured sleep-wake patterns by fitting a small infrared beam, similar to movement sensors in burglar alarms, to the glass tubes housing the flies. When the flies were awake and moving, they broke the beam and these breaks in the beam were counted and recorded.
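From such beam-break counts, sleep and wake can be scored automatically. A common convention in Drosophila work counts a bout of five or more consecutive inactive minutes as sleep; that threshold is an assumption here, not stated in this article, and the data below are invented:

```python
# Sketch of sleep scoring from per-minute beam-break counts. The 5-minute
# inactivity threshold is a common Drosophila convention assumed here, and the
# sample data are invented for illustration.

def sleep_minutes(beam_breaks_per_min, bout_threshold=5):
    """Total minutes spent in inactivity bouts of at least `bout_threshold` minutes."""
    total, run = 0, 0
    for count in beam_breaks_per_min:
        if count == 0:
            run += 1  # extend the current quiet bout
        else:
            if run >= bout_threshold:
                total += run  # the bout was long enough to count as sleep
            run = 0
    if run >= bout_threshold:  # close out a bout that runs to the end
        total += run
    return total

# Ten minutes of activity: a 6-minute quiet bout counts as sleep,
# a 2-minute pause does not.
activity = [3, 0, 0, 1, 0, 0, 0, 0, 0, 0]
print(sleep_minutes(activity))  # 6
```

Summing such bouts across day and night is what lets researchers compare the consolidated sleep of healthy flies with the fragmented sleep of the Alzheimer’s model flies.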
To study the flies’ biological clocks, the researchers attached the protein luciferase – an enzyme that emits light – to one of the proteins that forms part of the biological clock. Levels of the protein rise and fall during the night and day, and the glowing protein provided a way of tracing the flies’ internal clock.
"This lets us see the brain glowing brighter at night and less during the day, and that’s the biological clock shown as a glowing brain. It’s beautiful to be able to study first hand in the same organism the molecular working of the clock and the corresponding behaviours," Dr Crowther said.
They found that healthy flies were active during the day and slept at night, whereas those with Alzheimer’s slept and woke at random. Crucially, however, the diurnal patterns of the luciferase-tagged protein were the same in both healthy and diseased flies, showing that the biological clock still ticks in flies with Alzheimer’s.
"Until now, the prevailing view was that Alzheimer’s destroyed the biological clock," said Crowther.
"What we have shown in flies with Alzheimer’s is that the clock is still ticking but is being ignored by other parts of the brain and body that govern behaviour. If we can understand this, it could help us develop new therapies to tackle sleep disturbances in people with Alzheimer’s."
Dr Simon Ridley, Head of Research at Alzheimer’s Research UK, who helped to fund the study, said: “Understanding the biology behind distressing symptoms like sleep problems is important to guide the development of new approaches to manage or treat them. This study sheds more light on how features of Alzheimer’s can affect the molecular mechanisms controlling sleep-wake cycles in flies.
"We hope these results can guide further studies in people to ensure that progress is made for the half a million people in the UK with the disease."

Scientists wake up to causes of sleep disruption in Alzheimer’s disease

Being awake at night and dozing during the day can be a distressing early symptom of Alzheimer’s disease, but how the disease disrupts our biological clocks to cause these symptoms has remained elusive.

Now, scientists from Cambridge have discovered that in fruit flies with Alzheimer’s the biological clock is still ticking but has become uncoupled from the sleep-wake cycle it usually regulates. The findings – published in Disease Models & Mechanisms – could help develop more effective ways to improve sleep patterns in people with the disease.

People with Alzheimer’s often have poor biological rhythms, something that is a burden for both patients and their carers. Periods of sleep become shorter and more fragmented, resulting in periods of wakefulness at night and snoozing during the day. They can also become restless and agitated in the late afternoon and early evening, something known as ‘sundowning’.

Biological clocks go hand in hand with life, and are found in everything from single-celled organisms to fruit flies and humans. They are vital because they allow organisms to synchronise their biology to the day-night changes in their environments.

Until now, however, it has been unclear how Alzheimer’s disrupts the biological clock. According to Dr Damian Crowther of Cambridge’s Department of Genetics, one of the study’s authors: “We wanted to know whether people with Alzheimer’s disease have a poor behavioural rhythm because they have a clock that’s stopped ticking or they have stopped responding to the clock.”

The team worked with fruit flies – a key species for studying Alzheimer’s. Evidence suggests that the A-beta (amyloid-beta) peptide is behind at least the initial stages of the disease in humans. This has been replicated in fruit flies by introducing the human gene that produces this peptide.

Taking a group of healthy flies and a group with this feature of Alzheimer’s, the researchers studied sleep-wake patterns in the flies, and how well their biological clocks were working.

They measured sleep-wake patterns by fitting a small infrared beam, similar to movement sensors in burglar alarms, to the glass tubes housing the flies. When the flies were awake and moving, they broke the beam and these breaks in the beam were counted and recorded.
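The beam-break counts can be turned into sleep scores with a simple run-length rule. A minimal sketch, assuming one activity count per minute and the common fly-sleep convention that five or more consecutive inactive minutes are scored as sleep (the threshold is an assumption, not stated in the article):

```python
# Hypothetical sketch of scoring fly sleep from infrared beam-break
# counts. Assumes per-minute counts and the conventional threshold of
# >= 5 consecutive zero-count minutes counting as a sleep bout.

def sleep_minutes(counts_per_min, threshold=5):
    """Total minutes scored as sleep: every run of zero-count minutes
    that is at least `threshold` minutes long."""
    total = run = 0
    for count in counts_per_min:
        if count == 0:
            run += 1            # extend the current quiescent run
        else:
            if run >= threshold:
                total += run    # the run was long enough to be sleep
            run = 0             # activity breaks the run
    if run >= threshold:        # handle a run ending at the recording's end
        total += run
    return total

# 3 active minutes, a 6-minute sleep bout, then brief pauses too short
# to count as sleep.
counts = [4, 2, 1, 0, 0, 0, 0, 0, 0, 3, 0, 0, 5]
assert sleep_minutes(counts) == 6
```

Shorter quiescent gaps are treated as quiet wakefulness, which is why the two-minute pause near the end contributes nothing.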

To study the flies’ biological clocks, the researchers attached the protein luciferase – an enzyme that emits light – to one of the proteins that forms part of the biological clock. Levels of the protein rise and fall during the night and day, and the glowing protein provided a way of tracing the flies’ internal clock.

"This lets us see the brain glowing brighter at night and less during the day, and that’s the biological clock shown as a glowing brain. It’s beautiful to be able to study first hand in the same organism the molecular working of the clock and the corresponding behaviours," Dr Crowther said.
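Whether the luciferase signal still rises and falls on a roughly 24-hour cycle can be checked by correlating the trace against a 24-hour template. A hypothetical analysis sketch (not the paper's actual method), using a cosine correlation where a score near 1 indicates a strong daily rhythm:

```python
import math

# Hypothetical sketch: score how strongly a luminescence trace
# oscillates with a ~24-hour period by correlating it against a
# 24-hour cosine template. Not the study's actual analysis pipeline.

def rhythm_score(trace, period_h=24.0, dt_h=1.0):
    """Correlation between the mean-centred trace and a cosine of the
    given period; returns 0.0 for a perfectly flat trace."""
    n = len(trace)
    mean = sum(trace) / n
    template = [math.cos(2 * math.pi * i * dt_h / period_h) for i in range(n)]
    num = sum((trace[i] - mean) * template[i] for i in range(n))
    denom = math.sqrt(sum((x - mean) ** 2 for x in trace) *
                      sum(c ** 2 for c in template))
    return num / denom if denom else 0.0

# Synthetic 'working clock' trace: three days of hourly readings with a
# clean 24-hour oscillation around a baseline of 10.
healthy = [10 + 3 * math.cos(2 * math.pi * i / 24) for i in range(72)]
assert rhythm_score(healthy) > 0.99   # rhythm clearly present
```

On such a trace a persisting molecular rhythm scores near 1 even when the behavioural sleep-wake record has become random, which is the dissociation the study reports.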

They found that healthy flies were active during the day and slept at night, whereas those with Alzheimer’s slept and woke randomly. Crucially, however, the diurnal patterns of the luciferase-tagged protein were the same in both healthy and diseased flies, showing that the biological clock still ticks in flies with Alzheimer’s.

"Until now, the prevailing view was that Alzheimer’s destroyed the biological clock," said Crowther.

"What we have shown in flies with Alzheimer’s is that the clock is still ticking but is being ignored by other parts of the brain and body that govern behaviour. If we can understand this, it could help us develop new therapies to tackle sleep disturbances in people with Alzheimer’s."

Dr Simon Ridley, Head of Research at Alzheimer’s Research UK, who helped to fund the study, said: “Understanding the biology behind distressing symptoms like sleep problems is important to guide the development of new approaches to manage or treat them. This study sheds more light on how features of Alzheimer’s can affect the molecular mechanisms controlling sleep-wake cycles in flies.

"We hope these results can guide further studies in people to ensure that progress is made for the half a million people in the UK with the disease."

Filed under alzheimer's disease circadian rhythms sleep fruit flies neuroscience science

220 notes

Why do some neurons respond so selectively to words, objects and faces?

So why do neurons respond in this remarkable way? A new study by Professor Jeff Bowers and colleagues at the University of Bristol argues that highly selective neural representations are well suited to co-activating multiple things, such as words, objects and faces, at the same time in short-term memory. 


The researchers trained an artificial neural network to remember words in short-term memory. Like a brain, the network was composed of a set of interconnected units that activated in response to inputs; the network ‘learnt’ by changing the strength of connections between units. The researchers then recorded the activation of the units in response to a number of different words.

When the network was trained to store one word at a time in short-term memory, it learned highly distributed codes such that each unit responded to many different words. However, when it was trained to store multiple words at the same time in short-term memory it learned highly selective (‘grandmother cell’) units – that is, after training, single units responded to one word but not any other. This is much like the neurons in the cortex that respond to one face amongst many.

Why did the network learn such highly specific representations when trained to co-activate multiple words at the same time? Professor Bowers and colleagues argue that the non-selective representations can support memory for a single word, given that a pattern of activation across many non-selective units can uniquely represent a specific word. However, when multiple patterns are mixed together, the resulting blend pattern is often ambiguous (the so-called ‘superposition catastrophe’).

This ambiguity is easily avoided, however, when the network learns to represent words in a highly selective manner: if one unit codes for the word RACHEL, another for MONICA, and yet another for JOEY, there is no ambiguity when the three units are co-activated.
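The superposition argument can be made concrete with a toy example. A minimal sketch, in which the four-unit binary patterns are invented for illustration and are not taken from the study's network:

```python
from itertools import combinations

# Toy illustration of the 'superposition catastrophe': distributed
# binary codes blend ambiguously, one-unit-per-word codes do not.

def superpose(patterns):
    """Element-wise OR: co-activate several binary patterns at once."""
    return tuple(max(bits) for bits in zip(*patterns))

# Distributed codes: each word activates two of four units.
distributed = {
    "RACHEL": (1, 1, 0, 0),
    "MONICA": (0, 0, 1, 1),
    "JOEY":   (1, 0, 1, 0),
    "ROSS":   (0, 1, 0, 1),
}
# Superposing RACHEL+MONICA yields the same blend as JOEY+ROSS, so a
# short-term memory holding the blend cannot tell the pairs apart.
assert (superpose([distributed["RACHEL"], distributed["MONICA"]])
        == superpose([distributed["JOEY"], distributed["ROSS"]])
        == (1, 1, 1, 1))

# Selective ('grandmother cell') codes: one dedicated unit per word.
selective = {word: tuple(int(i == k) for i in range(4))
             for k, word in enumerate(distributed)}
# Every pair of words now produces a distinct blend.
blends = {superpose([selective[a], selective[b]])
          for a, b in combinations(selective, 2)}
assert len(blends) == 6   # all six word pairs remain distinguishable
```

With selective units, reading off which words are in memory is just reading off which units are active, which is why co-activation pressure favours this kind of code.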

Professor Bowers said: “Our research provides a possible explanation for the discovery that single neurons in the cortex respond to information in a highly selective manner. It’s possible that the cortex learns highly selective codes in order to support short-term memory.”

The study is published in Psychological Review.

(Source: bristol.ac.uk)

Filed under neural networks grandmother cells neurons language memory STM psychology neuroscience science
