Neuroscience

Articles and news from the latest research reports.

496 notes

Researchers Show How Lost Sleep Leads to Lost Neurons
Most people appreciate that not getting enough sleep impairs cognitive performance. For the chronically sleep-deprived, such as shift workers, students, or truckers, a common strategy is simply to catch up on missed slumber on the weekends. According to common wisdom, catch-up sleep repays one’s “sleep debt” with no lasting effects. But a new Penn Medicine study provides disturbing evidence that chronic sleep loss may be more serious than previously thought and may even lead to irreversible physical damage to, and loss of, brain cells. The research is published today in The Journal of Neuroscience.
Using a mouse model of chronic sleep loss, Sigrid Veasey, MD, associate professor of Medicine and a member of the Center for Sleep and Circadian Neurobiology at the Perelman School of Medicine, and collaborators from Peking University have determined that extended wakefulness is linked to injury to, and loss of, the locus coeruleus (LC) neurons, which are essential for alertness and optimal cognition.
"In general, we’ve always assumed full recovery of cognition following short- and long-term sleep loss," Veasey says. "But some of the research in humans has shown that attention span and several other aspects of cognition may not normalize even with three days of recovery sleep, raising the question of lasting injury in the brain. We wanted to figure out exactly whether chronic sleep loss injures neurons, whether the injury is reversible, and which neurons are involved."
Mice were examined following periods of normal rest, short wakefulness, or extended wakefulness, modeling a shift worker’s typical sleep pattern. The Veasey lab found that in response to short-term sleep loss, LC neurons upregulate the sirtuin type 3 (SirT3) protein, which is important for mitochondrial energy production and redox responses and protects the neurons from metabolic injury. SirT3 is essential across short-term sleep loss to maintain metabolic homeostasis, but in extended wakefulness the SirT3 response is missing. After several days on a shift-worker sleep schedule, LC neurons in the mice displayed reduced SirT3 and increased cell death, and the mice lost 25 percent of these neurons.
"This is the first report that sleep loss can actually result in a loss of neurons," Veasey notes. Particularly intriguing, the findings suggest that mitochondria in LC neurons respond to sleep loss and can adapt to short-term sleep loss, but not to extended wakefulness. This raises the possibility that increasing SirT3 levels in the mitochondria may help rescue neurons, or protect them, across chronic or extended sleep loss. The study also demonstrates the importance of sleep for restoring metabolic homeostasis in the mitochondria of LC neurons, and possibly of other important brain areas, to ensure optimal functioning during waking hours.
Veasey stresses that more work needs to be done to establish whether a similar phenomenon occurs in humans and to determine what durations of wakefulness place individuals at risk of neural injury. “In light of the role for SirT3 in the adaptive response to sleep loss, the extent of neuronal injury may vary across individuals. Specifically, aging, diabetes, high-fat diet and sedentary lifestyle may all reduce SirT3. If cells in individuals, including neurons, have reduced SirT3 prior to sleep loss, these individuals may be set up for greater risk of injury to their nerve cells.”
The next step will be putting the SirT3 model to the test. “We can now overexpress SirT3 in LC neurons,” explains Veasey.  “If we can show that we can protect the cells and wakefulness, then we’re launched in the direction of a promising therapeutic target for millions of shift workers.” 
The team also plans to examine shift workers post-mortem for evidence of increased LC neuron loss and signs of neurodegenerative disorders such as Alzheimer’s and Parkinson’s, since some previous mouse models have shown that lesions or injury to LC neurons can accelerate the course of those diseases. While not directly causing these diseases, “injuring LC neurons due to sleep loss could potentially facilitate or accelerate neurodegeneration in individuals who already have these disorders,” Veasey says.
While more research will be needed to settle these questions, the present study provides another confirmation of a rapidly growing scientific consensus: sleep is more important than was previously believed. In the past, Veasey observes, “No one really thought that the brain could be irreversibly injured from sleep loss.” It’s now clear that it can be.

Filed under locus coeruleus neurons sleep sleep loss sleep deprivation oxidative stress neuroscience science

311 notes

Out of mind, out of sight: suppressing unwanted memories reduces their unconscious influence on behaviour 

The study, part-funded by the Medical Research Council (MRC) and published online in PNAS, challenges the idea that suppressed memories remain fully preserved in the brain’s unconscious, allowing them to be inadvertently expressed in someone’s behaviour. The results of the study suggest instead that the act of suppressing intrusive memories helps to disrupt traces of the memories in the parts of the brain responsible for sensory processing.
The team at the MRC Cognition and Brain Sciences Unit and the University of Cambridge’s Behavioural and Clinical Neuroscience Institute (BCNI) have examined how suppression affects a memory’s unconscious influences in an experiment that focused on suppression of visual memories, as intrusive unwanted memories are often visual in nature.  
After a trauma, most people report intrusive memories or images, and people will often try to push these intrusions from their mind, as a way to cope. Importantly, the frequency of intrusive memories decreases over time for most people. It is critical to understand how the healthy brain reduces these intrusions and prevents unwanted images from entering consciousness, so that researchers can better understand how these mechanisms may go awry in conditions such as post-traumatic stress disorder.
Participants were asked to learn a set of word-picture pairs so that, when presented with the word as a reminder, an image of the object would spring to mind. After learning these pairs, brain activity was recorded using functional magnetic resonance imaging (fMRI) while participants either thought of the object image when given its reminder word, or instead tried to stop the memory of the picture from entering their mind.
The researchers studied whether suppressing visual memories altered people’s ability to see the content of those memories when they re-encountered it in their visual worlds. Without asking participants to consciously remember, they simply asked people to identify very briefly displayed objects that were made difficult to see by visual distortion. In general, under these conditions, people are better at identifying objects they have seen recently, even if they do not remember seeing the object before—an unconscious influence of memory. Strikingly, they found that suppressing visual memories made it harder for people to later see the suppressed object compared to other recently seen objects.
Brain imaging showed that people’s difficulty seeing the suppressed object arose because suppressing the memory from conscious awareness in the earlier memory suppression phase had inhibited activity in visual areas of the brain, disrupting visual memories that usually help people to see better. In essence, suppressing something from the mind’s eye had made it harder to see in the world, because visual memories and seeing rely on the same brain areas: out of mind, out of sight.
Over the last decade, research has shown that suppressing unwanted memories reduces people’s ability to consciously remember the experiences. The researchers’ studies on memory suppression have been inspired, in part, by trying to understand how people adapt memory after psychological trauma. Although this may work as a coping mechanism to help people adapt to the trauma, there is the possibility that if the memory traces were able to exert an influence on unconscious behaviour, they could potentially exacerbate mental health problems. The idea that suppression leaves unconscious memories that undermine mental health has been influential for over a century, beginning with Sigmund Freud.
These findings challenge the assumption that a memory remains fully intact even when suppressed, ready to be expressed unconsciously. Moreover, the discovery pinpoints the neurobiological mechanisms underlying how this suppression process happens, and could inform further research on uncontrolled ‘intrusive memories’, a classic characteristic of post-traumatic stress disorder.
Dr Michael Anderson, at the MRC Cognition and Brain Sciences Unit, said: “While there has been a lot of research looking at how suppression affects conscious memory, few studies have examined the influence this process might have on unconscious expressions of memory in behaviour and thought. Surprisingly, the effects of suppression are not limited to conscious memory. Indeed, it is now clear that the influence of suppression extends beyond areas of the brain associated with conscious memory, affecting perceptual traces that can influence us unconsciously. This may contribute to making unwanted visual memories less intrusive over time, and perhaps less vivid and detailed.”
Dr Pierre Gagnepain, lead author, at INSERM in France, said: “Our memories can be slippery and hard to pin down. Out of hand and uncontrolled, their remembrance can haunt us and cause psychological troubles, as we see in PTSD. We were interested in whether the brain can genuinely suppress memories in healthy participants, even at the most unconscious level, and how it might achieve this. The answer is that it can, though not all people were equally good at this. The better understanding of the neural mechanisms underlying this process arising from this study may help to better explain differences in how well people adapt to intrusive memories after a trauma.”

Filed under memory neuroimaging visual memory mental health consciousness neuroscience science

187 notes

Researchers survey protein family that helps the brain form synapses
Neuroscientists and bioengineers at Stanford are working together to solve a mystery: How does nature construct the different types of synapses that connect neurons – the brain cells that monitor nerve impulses, control muscles and form thoughts?
In a paper published in the Proceedings of the National Academy of Sciences, Thomas C. Südhof, M.D., a professor of molecular and cellular physiology, and Stephen R. Quake, a professor of bioengineering, describe the diversity of the neurexin family of proteins.
Neurexins help to create the synapses that connect neurons. Think of synapses as switchboards or control panels that connect specific neurons when these brain cells must work together to perform a given task.
Neurexins play a key role in the formation and functioning of synaptic connections. Past human genetics studies have linked neurexins to a variety of cognitive disorders, such as autism and schizophrenia.
Südhof, the Avram Goldstein Professor in the School of Medicine and a winner of the 2013 Nobel Prize in Physiology or Medicine, has spent years studying the many different forms, or isoforms, of neurexin proteins. He has postulated that different isoforms of neurexins may help to create different types of synaptic connections with distinct properties and functions, and thus enable neurons to perform so many complex tasks.
But Südhof had no way to know exactly how many isoforms of neurexins existed until he sat down last year with Quake, the Lee Otterson Professor in the School of Engineering. Quake has pioneered new ways to sequence DNA – the master blueprint that nature follows when making proteins.
The study being published in PNAS represents the results of a year-long collaboration between neuroscientists and bioengineers to better understand how different neurexin proteins affect the behavior of synapses and, ultimately, normal brain functions and neurological conditions such as autism.
Though this will not be the last word on the subject, the findings help illuminate how the brain works and improve our understanding of neurological disorders.
Inside cells, a molecular machine unzips a double-stranded DNA molecule to create an RNA molecule. The RNA molecule is a copy of all the genetic instructions encoded into the DNA. But only specific regions of this RNA molecule contain instructions for making a specific protein. The cell has ways to remove the unnecessary regions and splice the protein-coding regions into a shorter RNA molecule called messenger RNA or mRNA. Thus, each mRNA contains the full instructions for making a specific protein.
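The splicing step described above can be sketched as a toy string operation (the sequence and exon coordinates below are invented for illustration; real splicing is carried out enzymatically by the spliceosome, not by slicing):

```python
# Toy sketch of splicing: treat the pre-mRNA as a string and keep only
# the protein-coding regions (exons), joining them into the final mRNA.
# The sequence and the exon start/end offsets are invented examples.
pre_mrna = "AUGGCUGUAAGUUUCGAACUGCAGUAAACG"
exons = [(0, 6), (12, 18), (24, 30)]  # (start, end) offsets of kept regions

# Concatenate the exon segments; the skipped spans play the role of introns.
mrna = "".join(pre_mrna[start:end] for start, end in exons)
print(mrna)  # AUGGCUUUCGAAUAAACG
```

Different choices of which segments to keep yield different mRNAs from the same gene, which is the mechanism behind the isoform diversity discussed next.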
To begin this experiment, Ozgun Gokce, a postdoctoral scholar in molecular and cellular physiology in Südhof’s lab, and Barbara Treutlein, a postdoctoral scholar in Quake’s lab, extracted brain cells from the prefrontal cortex of a mouse, then isolated the RNA contained in this tissue.
From this large pool of RNAs they then identified the mRNAs for neurexins. They ran those messenger molecules through equipment designed to read the entire long sequence of chemical instructions for making a specific isoform in the neurexin family of proteins.
Quake’s lab is adept at using new instruments that allow researchers to read the long sequence of chemicals in an mRNA strand, allowing them to ascertain exactly what directions this messenger is carrying to the cell’s protein-making machinery.
“This experiment couldn’t have been done even a few years ago,” Treutlein explained.
The mRNAs for neurexins are very long chains of nucleotides – the chemicals that encode genetic information. Only recently have instruments been capable of reading the exact sequence of such long nucleotide chains.
The ability to read the entire sequence of each mRNA was essential because neurexins have 25 constituent parts. But not all of these parts are used each time neurons produce a copy of the protein. Isoforms of neurexin have different combinations of these 25 possible parts. This experiment was designed to discover how many isoforms of neurexin existed and how prevalent each of these isoforms was.
The researchers analyzed more than 25,000 full-length neurexin mRNAs. They found 450 variants. Each variant omitted one or more of the 25 possible components. Most of these isoforms occurred infrequently. A handful accounted for the predominant isoforms.
Although the Stanford scientists sequenced 25,000 mRNAs to discover 450 variants, they believe that if they were to sequence even more mRNAs they would discover more isoforms – their estimate is that at least 2,500 isoforms of the neurexin family exist.
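In computational terms, counting isoforms amounts to tallying distinct inclusion patterns over the 25 optional segments. A minimal sketch, in which only the 25-segment count comes from the article and the reads themselves are invented:

```python
from collections import Counter

# Toy model: each sequenced neurexin mRNA is reduced to a tuple of 25
# flags (1 = segment included, 0 = skipped). Distinct tuples correspond
# to distinct isoforms. These example reads are invented.
reads = [
    (1,) * 25,             # all 25 segments present
    (1,) * 24 + (0,),      # last segment skipped
    (1,) * 24 + (0,),      # the same pattern observed again
    (0,) + (1,) * 24,      # first segment skipped
]

isoform_counts = Counter(reads)
print(len(isoform_counts))                   # 3 distinct isoforms observed
print(isoform_counts.most_common(1)[0][1])   # most common pattern seen twice

# If every subset of the 25 segments were viable, the combinatorial
# upper bound on diversity would be enormous:
print(2 ** 25)  # 33554432
```

Seeing many patterns only once, as in the study's long tail of rare isoforms, is exactly why the team expects deeper sequencing to reveal still more variants.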
“The fact that we see so many isoforms supports the theory that these protein variants contribute to the huge diversity of synaptic connections that neuroscientists have observed,” Treutlein said.
The experiment raises many questions for future study. For instance, what functions are performed by the predominant isoforms versus the rare variants; how does the inclusion or exclusion of components affect that isoform and the synapse in which it works?
“This experiment was like a flight over the terrain,” Gokce said. “Now we have to go down and look at the details.”

Filed under neurexins synapses synaptic connections neurological disorders neuroscience science

162 notes

How age opens the gates for Alzheimer’s
With advancing age, highly evolved brain circuits become susceptible to molecular changes that can lead to neurofibrillary tangles — a hallmark of Alzheimer’s disease, Yale researchers report the week of March 17 in the Proceedings of the National Academy of Sciences.
The findings not only help to explain why age is such a large risk factor for Alzheimer’s, but why the higher brain circuits regulating cognition are so vulnerable to degeneration while the sensory cortex remains unaffected.
“We hope that understanding the key molecular alterations that occur with advancing age can provide new strategies for disease prevention,” said Amy F.T. Arnsten, professor of neurobiology and one of the senior authors of the study.
Neurofibrillary tangles are made from a protein called tau, which becomes sticky and clumps together when modified in a process called phosphorylation. The Yale study found that phosphorylated tau collects in neurons in higher brain circuits of the aging primate brain, but does not accumulate in neurons of the sensory cortex. Phosphorylated tau collects in and near the excitatory connections called synapses where neurons communicate and can spread between cells in higher brain circuits, the study found.
The study led by Yale researchers Becky C. Carlyle, Angus Nairn, Arnsten and Constantinos D. Paspalas found clues about what causes tau to become phosphorylated with advancing age. They uncovered age-related changes in the molecular signals that control the strength of higher cortical connections. In young brains, an enzyme called phosphodiesterase PDE4A sits near the synapse where it inhibits a chemical “vicious cycle” that disconnects higher brain circuits when we are in danger, switching control of behavior to more primitive brain areas. They further found that PDE4A is lost in the aged prefrontal association cortex, unleashing a chemical cascade of events that increase the phosphorylation of tau. This process may be amplified in humans, where high order cortical neurons have even more excitatory connections, leading to tangle formation and ultimately cell death.
“This insight into one pathway by which tau may influence the onset and progression of Alzheimer’s disease takes us a step closer to unraveling this complex and devastating disorder,” said Dr. Molly Wagster, of the National Institutes of Health, a co-funder of the research.
The new study may also help to explain why head injury is a risk factor for Alzheimer’s, as it may also increase the activity of the chemical  “vicious cycle.”
“Now that we begin to see what makes neurons vulnerable, we may be able to protect cells with treatments that mimic the protective effects of PDE4A,” said Arnsten.

How age opens the gates for Alzheimer’s

With advancing age, highly-evolved brain circuits become susceptible to molecular changes that can lead to neurofibrillary tangles — a hallmark of Alzheimer’s Disease, Yale researchers report the week of March 17 in the Proceedings of the National Academy of Sciences.

The findings not only help to explain why age is such a large risk factor for Alzheimer’s, but also why the higher brain circuits regulating cognition are so vulnerable to degeneration while the sensory cortex remains unaffected.

“We hope that understanding the key molecular alterations that occur with advancing age can provide new strategies for disease prevention,” said Amy F.T. Arnsten, professor of neurobiology and one of the senior authors of the study.

Neurofibrillary tangles are made from a protein called tau, which becomes sticky and clumps together when modified in a process called phosphorylation. The Yale study found that phosphorylated tau collects in neurons in higher brain circuits of the aging primate brain, but does not accumulate in neurons of the sensory cortex. Phosphorylated tau collects in and near the excitatory connections called synapses where neurons communicate and can spread between cells in higher brain circuits, the study found.

The study led by Yale researchers Becky C. Carlyle, Angus Nairn, Arnsten and Constantinos D. Paspalas found clues about what causes tau to become phosphorylated with advancing age. They uncovered age-related changes in the molecular signals that control the strength of higher cortical connections. In young brains, an enzyme called phosphodiesterase PDE4A sits near the synapse where it inhibits a chemical “vicious cycle” that disconnects higher brain circuits when we are in danger, switching control of behavior to more primitive brain areas. They further found that PDE4A is lost in the aged prefrontal association cortex, unleashing a chemical cascade of events that increase the phosphorylation of tau. This process may be amplified in humans, where high order cortical neurons have even more excitatory connections, leading to tangle formation and ultimately cell death.

“This insight into one pathway by which tau may influence the onset and progression of Alzheimer’s disease takes us a step closer to unraveling this complex and devastating disorder,” said Dr. Molly Wagster, of the National Institutes of Health, a co-funder of the research.

The new study may also help to explain why head injury is a risk factor for Alzheimer’s, as head injury, too, may increase the activity of the chemical “vicious cycle.”

“Now that we begin to see what makes neurons vulnerable, we may be able to protect cells with treatments that mimic the protective effects of PDE4A,” said Arnsten.

Filed under aging alzheimer's disease neurodegeneration neurofibrillary tangles neuroscience science

189 notes

Halting Immune Response Could Save Brain Cells After Stroke

A new study in animals shows that using a compound to block the body’s immune response greatly reduces disability after a stroke.


The study by scientists from the University of Wisconsin School of Medicine and Public Health also showed that particular immune cells, CD4+ T-cells, produce a mediator called interleukin-21 (IL-21) that can cause further damage in stroke tissue.

Moreover, when normal mice, which would ordinarily be killed or disabled by an ischemic stroke, were given a shot of a compound that blocks the action of IL-21, brain scans and brain sections showed that the treated mice suffered little or no stroke damage.

“This is very exciting because we haven’t had a new drug for stroke in decades, and this suggests a target for such a drug,” says lead author Dr. Zsuzsanna Fabry, professor of pathology and laboratory medicine.

Stroke is the fourth-leading killer in the world and an important cause of permanent disability. In an ischemic stroke, a clot blocks the flow of oxygen-rich blood to the brain. But Fabry explains that much of the damage to brain cells occurs after the clot is removed or dissolved by medicine. Blood rushes back into the brain tissue, bringing with it immune cells called T-cells, which flock to the source of an injury.

The study shows that after a stroke, the injured brain cells provoke the CD4+ T-cells to produce a substance, IL-21, that kills the neurons in the blood-deprived tissue of the brain. The study gives new insight into how stroke induces neural injury.

Similar Findings in Humans

Fabry’s co-author Dr. Matyas Sandor, professor of pathology and laboratory medicine, says that the final part of the study looked at brain tissue from people who had died following ischemic strokes. It found that CD4+ T-cells and their protein, IL-21, are present at high concentrations in areas of the brain damaged by the stroke.

Sandor says the similarity suggests that the protein that blocks IL-21 could become a treatment for stroke, and would likely be administered at the same time as the current blood-clot dissolving drugs.

“We don’t have proof that it will work in humans,” he says, “but similar accumulation of IL-21 producing cells suggests that it might.”

The paper was published this week in the Journal of Experimental Medicine.

(Source: med.wisc.edu)

Filed under immune cells interleukin IL-21 stroke brain cells brain tissue medicine neuroscience science

131 notes

Children’s preferences for sweeter and saltier tastes are linked to each other

Scientists from the Monell Chemical Senses Center have found that children who most prefer high levels of sweet tastes also most prefer high levels of salt taste and that, in general, children prefer sweeter and saltier tastes than do adults. These preferences relate not only to food intake but also to measures of growth and can have important implications for efforts to change children’s diets.

Many illnesses of modern society are related to poor food choices. Because children consume far more sugar and salt than recommended, which contributes to poor health, understanding the biology behind children’s preferences for these tastes is a crucial first step to reducing their intake.

"Our research shows that the liking of salty and sweet tastes reflects in part the biology of the child," said study lead author Julie Mennella, PhD, a biopsychologist at Monell. Biology predisposes us to like and consume calorie-rich sweet foods and sodium-rich salty foods, and this is especially true for children. "Growing children’s heightened preferences for sweet and salty tastes make them more vulnerable to the modern diet, which differs from the diet of our past, when salt and sugars were once rare and expensive commodities."

In the study, published online at PLOS ONE, Mennella and colleagues tested 108 children between 5 and 10 years old, and their mothers, for salt and sweet taste preferences. The same testing method was used for both children and their mothers, who tasted broth and crackers that varied in salt content, and sugar water and jellies that varied in sugar content. The method, developed by Mennella and her colleagues at Monell, scientifically determines taste preferences, even for very young children, by having them compare two different levels of a taste, pick their favorite, and then compare that favorite with another, over and again until the most favorite is identified.
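
The tracking method described above amounts to a series of pairwise comparisons in which the current favorite is pitted against the next stimulus. A minimal sketch, with a simulated subject whose choices are driven by a hidden ideal concentration (the choice rule and the sucrose ladder are hypothetical, not Monell's actual stimuli):

```python
# A minimal sketch of a pairwise "tracking" procedure: the current favorite
# is repeatedly pitted against the next stimulus, and the winner advances.
# The choice rule and sucrose ladder below are hypothetical, for illustration.

def track_preference(levels, pick):
    """Return the stimulus that survives every pairwise comparison."""
    favorite = levels[0]
    for challenger in levels[1:]:
        favorite = pick(favorite, challenger)
    return favorite

# Simulated subject who always picks the level closer to a hidden ideal.
ideal = 0.35
pick_closer = lambda a, b: a if abs(a - ideal) <= abs(b - ideal) else b

sucrose_levels = [0.09, 0.18, 0.35, 0.70, 1.05]  # hypothetical ladder (M)
print(track_preference(sucrose_levels, pick_closer))  # 0.35
```

The procedure converges on the most-preferred level without ever asking the subject to rate anything, which is what makes it usable with very young children.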

Mennella and colleagues also had mothers and children list foods and beverages they consumed in the past 24 hours, from which daily sodium, calorie, and added sugar intakes were estimated. Subjects then gave a saliva sample, which was genotyped for a sweet receptor gene, and a urine sample to measure levels of Ntx, a marker for bone growth. Weight, height, and percent body fat were measured for all subjects.

Analyses of all these data showed that not only were sweet and salty preferences correlated in children, and higher overall than those in adults, but also children’s taste preferences related to measures of growth and development: children who were tall for their age preferred sweeter solutions, and children with higher amounts of body fat preferred saltier soups. There was also some indication that higher sweet liking related to spurts in bone growth, but that result needs confirmation in a larger group of children.

Sweet and salty preferences were correlated in adults as well. And in adults, but not in children, sweet receptor genotype was related to the most preferred level of sweetness. “There are inborn genetic differences that affect the liking for sweet by adults,” says collaborator Danielle Reed, PhD, “but for children, other factors – perhaps the current state of growth – are stronger influences than genetics.”

Both children and adults who preferred higher levels of salt in food also reported consuming more dietary salt in the past 24 hours, but no such relationship was found between sweet preferences and sugar intake. This difference may reflect parents exerting greater control in their children’s diet for added sugar than for added salt. Or it could reflect increased use of non-nutritive sweeteners in foods geared for children – in other words, the sweetness of some foods doesn’t reflect their sugar content.

Current intakes of sodium and added sugars among US children are well in excess of recommendations. For almost all 2- to 8-year-olds, added sugars account for more than half of their discretionary calories (130 total discretionary calories are allowed for children of this age). For 4- to 13-year-olds, sodium intake is more than twice adequate levels (1200-1500 mg/day is allowed for children of this age). The children studied by Mennella and colleagues, two-thirds of whom were overweight or obese, also consumed twice adequate levels of sodium, and their added sugar intake averaged almost 20 teaspoons, or 300 calories, each day.
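
As a quick consistency check on the intake figures, using the common approximations of about 4 g of sugar per teaspoon and 4 kcal per gram of carbohydrate (assumed conversion factors, not values from the study):

```python
# Rough consistency check on the reported intake figures, using the common
# approximations of ~4 g of sugar per teaspoon and 4 kcal per gram of
# carbohydrate (assumed conversion factors, not values from the study).

GRAMS_PER_TSP = 4.0
KCAL_PER_GRAM = 4.0

def sugar_kcal(teaspoons):
    return teaspoons * GRAMS_PER_TSP * KCAL_PER_GRAM

print(sugar_kcal(19))  # 304.0 -> "almost 20 teaspoons" is roughly 300 calories
```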

Guidelines from leading authorities, including the World Health Organization, American Heart Association, U.S. Department of Agriculture, and Institute of Medicine, recommend significantly cutting sugar and salt intake for children, but this can be a daunting task. Commenting on the implications of her research, lead author Mennella noted, “The present findings reveal that the struggle parents have in modifying their children’s diets to comply with recommendations appears to have a biological basis.”

Understanding the basic biology that drives the desire for sweet and salty tastes in children illustrates their vulnerability to the current food environment. But on a positive note, Mennella observed, “it also paves the way toward developing more insightful and informed strategies for promoting healthy eating that meet the particular needs of growing children.”

Filed under children diet taste taste preferences sweet salty health neuroscience science

158 notes

Scientists slow development of Alzheimer’s trademark cell-killing plaques

University of Michigan researchers have learned how to fix a cellular structure called the Golgi that mysteriously becomes fragmented in all Alzheimer’s patients and appears to be a major cause of the disease.

They say that understanding this mechanism helps decode amyloid plaque formation in the brains of Alzheimer’s patients—plaques that kill cells and contribute to memory loss and other Alzheimer’s symptoms.

The researchers discovered the molecular process behind Golgi fragmentation, and also developed two techniques to ‘rescue’ the Golgi structure.

"We plan to use this as a strategy to delay the disease development," said Yanzhuang Wang, U-M associate professor of molecular, cellular and developmental biology. "We have a better understanding of why plaque forms fast in Alzheimer’s and found a way to slow down plaque formation."

The paper appears in an upcoming edition of the Proceedings of the National Academy of Sciences. Gunjan Joshi, a research fellow in Wang’s lab, is the lead author.

Wang said scientists have long recognized that the Golgi becomes fragmented in the neurons of Alzheimer’s patients, but until now they didn’t know how or why this fragmentation occurred.

The Golgi structure has the important role of sending molecules to the right places in order to make functional cells, Wang said. The Golgi is analogous to a post office of the cell, and when the Golgi becomes fragmented, it’s like a post office gone haywire, sending packages to the wrong places or not sending them at all.

U-M researchers found that the accumulation of the Abeta peptide—the primary culprit in forming plaques that kill cells in Alzheimer’s brains—triggers Golgi fragmentation by activating an enzyme called cdk5 that modifies Golgi structural proteins such as GRASP65.

Wang and colleagues rescued the Golgi structure in two ways: they either inhibited cdk5 or expressed a mutant of GRASP65 that cannot be modified by cdk5. Both rescue measures decreased the harmful Abeta secretion by about 80 percent.

The next step is to see if Golgi fragmentation can be delayed or reversed in mice, Wang said. This involves a collaboration with the Michigan Alzheimer’s Disease Center at the U-M Health System, directed by Dr. Henry Paulson, professor of neurology, and Geoffrey Murphy, assistant professor of physiology and research professor at the U-M Molecular and Behavioral Neuroscience Institute.

Filed under alzheimer's disease amyloid plaque Golgi fragmentation peptides neuroscience science

199 notes

Childhood’s end: ADHD, autism and schizophrenia tied to stronger inhibitory interactions in adolescent prefrontal cortex

Key cognitive functions such as working memory (which combines temporary storage and manipulation of information) and executive function (a set of mental processes that helps connect past experience with present action) are associated with the brain’s prefrontal cortex. Unlike other brain regions, the prefrontal cortex does not mature until early adulthood, with the most pronounced changes being seen between its peripubertal (onset of puberty) and postpubertal developmental states. Moreover, this maturation period is correlated with cognitive maturation – but the physical neuronal changes during this transition have remained for the most part unknown. Recently, however, scientists at the Wake Forest School of Medicine in Winston-Salem, NC, recorded and compared prefrontal cortical activity in peripubertal and adult monkeys.

The researchers found that compared with adults, peripubertal monkeys showed lower connectivity due to stronger inhibitory interactions, suggesting that intrinsic (or resting state) inhibitory connections – that is, inhibitory neural connections that are active in the absence of any particular task – decline with maturation. The scientists then concluded that prefrontal intrinsic connectivity changes are a possible substrate for cognitive maturation.

Prof. Christos Constantinidis discusses the paper that he, Dr. Xin Zhou and their co-authors published in Proceedings of the National Academy of Sciences. The team compared the functional connectivity between pairs of neurons recorded from the prefrontal cortex of peripubertal and adult monkeys, and evaluated the developmental stage of the peripubertal rhesus monkeys with a series of morphometric, hormonal, and radiographic measures. Constantinidis tells Medical Xpress that a major challenge was to obtain neural activity from the brains of monkeys around the time of puberty. “We needed to make ourselves experts in the developmental trajectories of monkeys and conduct experiments just at the right time relative to the onset of puberty,” he explains.
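
One common way to quantify functional connectivity between a pair of simultaneously recorded neurons is to bin their spike trains and correlate the binned counts. This is a generic sketch of that idea (the paper's actual analysis may differ, e.g. cross-correlograms with shuffle correction); the spike trains are hypothetical:

```python
# Generic sketch of pairwise functional connectivity: bin two spike trains
# and compute the Pearson correlation of the binned counts. The spike times
# below are hypothetical; neuron B loosely follows neuron A.
import math

def bin_spikes(spike_times, t_end, bin_ms=25):
    """Count spikes falling into consecutive fixed-width time bins."""
    counts = [0] * int(t_end / bin_ms)
    for t in spike_times:
        if 0 <= t < t_end:
            counts[int(t / bin_ms)] += 1
    return counts

def pearson(x, y):
    """Pearson correlation of two equal-length count vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

a = [12, 40, 41, 95, 120, 160, 210, 240]   # spike times (ms), neuron A
b = [14, 43, 44, 101, 122, 165, 214, 243]  # spike times (ms), neuron B
xa = bin_spikes(a, 250)
xb = bin_spikes(b, 250)
print(round(pearson(xa, xb), 2))  # high correlation, ~0.8
```

Comparing such pairwise correlations between age groups is one way the reported difference in connectivity strength could be expressed numerically.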

Read more

Filed under prefrontal cortex primates puberty neural activity neurons ADHD schizophrenia autism neuroscience science

88 notes

Stumbling Fruit Flies Lead Scientists to Discover Gene Essential for Sensing Joint Position
Scientists at The Scripps Research Institute (TSRI) have discovered an important mechanism underlying sensory feedback that guides balance and limb movements.
The finding, which the TSRI team uncovered in fruit flies, centers on a gene and a type of nerve cell required for detection of leg-joint angles. “These cells resemble human nerve cells that innervate joints,” said team leader Professor Boaz Cook, who is an assistant professor at TSRI, “and they encode joint-angle information in the same way.”
If the findings can be fully replicated in humans, they could lead to a better understanding of, as well as treatments for, disorders arising from faulty proprioception, the detection of body position.
A report of the findings appears in the March 14, 2014 issue of the journal Science.
A Mystery of Sensation
The proprioceptive sense of how the limbs are positioned is what enables a person, even with eyes closed, to touch the tip of the nose with the tip of a finger—an ability easily impaired by alcohol, which is why traffic police often test suspected drunk drivers this way.
Scientists have known that proprioceptive signals originate from so-called mechanosensory neurons, whose nerve ends are embedded in muscles, skin and other tissues. The stretching or compression of these tissues opens ion channels in the nerve membrane, which results in a signal to the brain.
What hasn’t been clear is how such a neuron can specialize in sensing just one type of membrane-distorting stimulus—such as the angle of a limb joint—yet exclude others, such as impact pressures.
In the new study, Cook and two members of his laboratory, first author Bela S. Desai, a postdoctoral fellow, and graduate student Abhishek Chadha, sought to shed some light on this mystery with a study of Drosophila fruit flies. Quickly maturing and easily studied, Drosophila often are analyzed for clues to the genetic underpinnings of basic animal behaviors.
Following the Trail
Cook and his colleagues began with a special collection of Drosophila containing a variety of uncatalogued mutations. The scientists sifted through the collection looking for mutant flies with walking impairments and soon zeroed in on several impaired walkers that turned out to have mutations in the same gene.
The scientists named the gene stumble (stum for short) for the abnormality caused by its absence.
Using a fluorescent tracer, they then localized the expression of stum in normal flies to neurons that lay close to the three main leg joints. Each neuron’s input-sensing tendril (dendrite) grew right up to the joints—a sign that its evolved function is to detect joint angle.
The researchers also found that the protein specified by the stum gene normally migrates to the tip of each dendrite. With high-resolution microscopy, they imaged each of these tips and observed an extra length branching more or less sideways at the joint.
At ordinary, at-rest joint angles, the relative positions of the main dendrite tip and its side branch stayed more or less the same; however, at extreme joint angles, the pair stretched out. As they did, the level of calcium ions in the neuron rose sharply, suggesting that ion channels had opened and the neuron was becoming active.
Cook noted the results show how a seemingly general mechanosensory, membrane-stretch-sensitive neuron can evolve a specificity for a particular type of proprioceptive signal. “It’s a nice example of how you can create that specificity from something that only stretches mechanically,” he said.
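
The behavior described above, quiet across ordinary joint angles, active once stretch at an extreme angle passes a threshold, can be captured in a toy model. The rest range, threshold, and calcium readout below are hypothetical, chosen only to illustrate the idea:

```python
# Toy model of the described behavior: the stum-expressing neuron stays
# quiet across ordinary joint angles and activates only when the angle is
# extreme enough to stretch the dendrite tip. All numbers are hypothetical.

REST_RANGE = (60.0, 120.0)  # assumed "ordinary" joint angles, degrees

def dendrite_stretch(angle_deg):
    """Stretch is zero inside the rest range and grows beyond it."""
    lo, hi = REST_RANGE
    if angle_deg < lo:
        return lo - angle_deg
    if angle_deg > hi:
        return angle_deg - hi
    return 0.0

def neuron_active(angle_deg, stretch_threshold=10.0):
    """Calcium rises (the neuron activates) once stretch passes a threshold."""
    return dendrite_stretch(angle_deg) > stretch_threshold

for angle in (90, 115, 140):
    print(angle, neuron_active(angle))  # 90 False, 115 False, 140 True
```

The threshold is what gives the cell its specificity: it ignores the small membrane distortions of ordinary movement and reports only extreme joint angles.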
The team is now trying to nail down the specific role of stum proteins in Drosophila and to determine whether the human version of stum—which has never been characterized—also works in joint angle sensing. Some sensory role for the human version of stum is likely, as the stum gene has been remarkably well conserved throughout animal evolution. Cook and his colleagues were even able to restore some normal walking ability to stum-mutant flies by adding the mouse version of the stum gene. “Stum is probably doing the same thing in all animals,” he said.


The researchers also found that the protein specified by the stum gene normally migrates to the tip of each dendrite. With high-resolution microscopy, they imaged each of these tips and observed an extra length branching more or less sideways at the joint.

At ordinary, at-rest joint angles, the relative positions of the main dendrite tip and its side branch stayed more or less the same; however, at extreme joint angles, the pair stretched out. As they did, the level of calcium ions in the neuron rose sharply, suggesting that ion channels had opened and the neuron was becoming active.
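The mechanism described above can be sketched as a toy model: near the resting angle the dendrite tip and its side branch move together, but at extreme angles the pair stretches apart, and stretch-gated channels open, letting calcium flood in. This is only an illustrative sketch; the geometry, thresholds, and sigmoid gating below are assumptions for demonstration, not measurements from the study.

```python
import math

def stretch(angle_deg, rest_angle=90.0, slack=30.0):
    """Displacement between the main dendrite tip and its side branch.

    Within the slack range around the resting angle the pair moves
    together (zero stretch); beyond it, separation grows with the
    deviation. All geometry here is illustrative, not measured.
    """
    deviation = abs(angle_deg - rest_angle)
    return max(0.0, deviation - slack)

def calcium_response(angle_deg, gain=0.3, half_stretch=10.0):
    """Sigmoidal channel gating: calcium rises sharply only once the
    stretch passes a threshold, mimicking activity at extreme angles."""
    return 1.0 / (1.0 + math.exp(-gain * (stretch(angle_deg) - half_stretch)))

for angle in (90, 110, 140, 170):
    print(angle, round(calcium_response(angle), 3))
```

The key qualitative feature matches the observation: moderate joint angles produce essentially no response, while extreme angles drive the response toward saturation.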

Cook noted that the results show how a general-purpose, membrane-stretch-sensitive mechanosensory neuron can evolve specificity for a particular type of proprioceptive signal. “It’s a nice example of how you can create that specificity from something that only stretches mechanically,” he said.

The team is now trying to nail down the specific role of stum proteins in Drosophila and to determine whether the human version of stum—which has never been characterized—also works in joint angle sensing. Some sensory role for the human version of stum is likely, as the stum gene has been remarkably well conserved throughout animal evolution. Cook and his colleagues were even able to restore some normal walking ability to stum-mutant flies by adding the mouse version of the stum gene. “Stum is probably doing the same thing in all animals,” he said.

Filed under fruit flies mechanosensory neurons nerve cells joint stum gene neuroscience science

118 notes

Researchers Identify Gene That Helps Fruit Flies Go to Sleep

A novel protein may explain how biological clocks regulate human sleep cycles


In a series of experiments sparked by fruit flies that couldn’t sleep, Johns Hopkins researchers say they have identified a mutant gene — dubbed “Wide Awake” — that sabotages how the biological clock sets the timing for sleep. The finding also led them to the protein made by a normal copy of the gene that promotes sleep early in the night and properly regulates sleep cycles.

Because genes and the proteins they code for are often highly conserved across species, the researchers suspect their discoveries — boosted by preliminary studies in mice — could lead to new treatments for people whose insomnia or off-hours work schedules keep them awake long after their heads hit the pillow.

“We know that the timing of sleep is regulated by the body’s internal biological clock, but just how this occurs has been a mystery,” says study leader Mark N. Wu, M.D., Ph.D., an assistant professor of neurology, medicine, genetic medicine and neuroscience at the Johns Hopkins University School of Medicine. “We have now found the first protein ever identified that translates timing information from the body’s circadian clock and uses it to regulate sleep.”

A report on the work was published online March 13 in the journal Neuron.

In their hunt for the molecular roots of sleep regulation, Wu and his colleagues studied thousands of fruit fly colonies, each with a different set of genetic mutations, and analyzed their sleep patterns. They found that one group of flies, with a mutation in the gene they would later call Wide Awake (or Wake for short), had trouble falling asleep at night, a malady that looked a lot like sleep-onset insomnia in humans. The investigators say Wake appears to be the messenger from the circadian clock to the brain, telling it that it’s time to shut down and sleep.

After isolating the gene, Wu’s team determined that when working properly, Wake helps shut down clock neurons of the brain that control arousal by making them more responsive to signals from the inhibitory neurotransmitter called GABA. Wake does this specifically in the early evening, thus promoting sleep at the right time. Levels of Wake cycle during the day, peaking near dusk in good sleepers.
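The proposed mechanism can be summarized in a minimal sketch: Wake levels cycle over the day and peak near dusk; higher Wake makes clock neurons more responsive to inhibitory GABA, which quiets the arousal circuit and permits sleep at the right time. The cosine profile, mutant baseline, and numeric values below are illustrative assumptions, not quantities reported in the paper.

```python
import math

def wake_level(hour, functional=True):
    """Wake protein level over a 24 h day, peaking near dusk (hour 18).

    A cosine profile stands in for the measured daily cycle; a mutant
    allele is modeled as a low, flat level (an assumed placeholder).
    """
    if not functional:
        return 0.1
    return 0.5 + 0.5 * math.cos((hour - 18) * 2 * math.pi / 24)

def arousal(hour, gaba_tone=1.0, functional=True):
    """Clock-neuron arousal: Wake scales responsiveness to inhibitory
    GABA, so high Wake at dusk suppresses the arousal circuit."""
    gaba_effect = gaba_tone * wake_level(hour, functional)
    return max(0.0, 1.0 - gaba_effect)

# Wild-type flies quiet down at dusk; Wake mutants stay aroused.
print("wild-type at dusk:", arousal(18))
print("mutant at dusk:   ", arousal(18, functional=False))
```

In this sketch the wild-type arousal drops to zero at dusk while the mutant stays high, mirroring the sleep-onset insomnia seen in the Wake-mutant flies.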

In flies with a mutated Wake gene, the arousal circuits received too little GABA signal at night, leaving the flies agitated and unable to fall asleep.

The researchers found the same gene in every animal they studied: humans, mice, rabbits, chickens, even worms.

Importantly, when Wu’s team looked to see where Wake was located in the mouse brain, they found that it was expressed in the suprachiasmatic nucleus (SCN), the master clock in mammals. Wu says the fact that the Wake protein was expressed in high concentrations in the SCN of mice is significant.

“Sometimes we discover things in flies that have no direct relevance in higher order animals,” Wu says. “In this case, because we found the protein in a location where it likely plays a role in circadian rhythms and sleep, we are encouraged that this protein may do the same thing in mice and people.”

The hope is that someday, by manipulating Wake, possibly with a medication, shift workers, military personnel and sleep-onset insomniacs could sleep better.

“This novel pathway may be a place where we can intervene,” Wu says.

(Source: hopkinsmedicine.org)

Filed under sleep fruit flies circadian rhythms wide awake suprachiasmatic nucleus neuroscience science
