Neuroscience

Articles and news from the latest research reports.

In a cloning first, scientists create stem cells from adults

Scientists have moved a step closer to the goal of creating stem cells perfectly matched to a patient’s DNA in order to treat diseases, they announced on Thursday, creating patient-specific cell lines out of the skin cells of two adult men.

The advance, described online in the journal Cell Stem Cell, is the first time researchers have achieved “therapeutic cloning” of adults. Technically called somatic-cell nuclear transfer, therapeutic cloning means producing embryonic cells genetically identical to a donor, usually for the purpose of using those cells to treat disease.

Filed under stem cells somatic cell nuclear transfer iPSCs regenerative medicine medicine health science

Is Parkinson’s an Autoimmune Disease?

The cause of neuronal death in Parkinson’s disease is still unknown, but a new study proposes that neurons may be mistaken for foreign invaders and killed by the person’s own immune system, similar to the way autoimmune diseases like type I diabetes, celiac disease, and multiple sclerosis attack the body’s cells. The study was published April 16, 2014, in Nature Communications.

(Image caption: Four images of a neuron from a human brain show that neurons produce a protein (in red) that can direct an immune attack against the neuron (green). Credit: Carolina Cebrian.)

“This is a new, and likely controversial, idea in Parkinson’s disease; but if true, it could lead to new ways to prevent neuronal death in Parkinson’s that resemble treatments for autoimmune diseases,” said the study’s senior author, David Sulzer, PhD, professor of neurobiology in the departments of psychiatry, neurology, and pharmacology at Columbia University College of Physicians & Surgeons.

The new hypothesis about Parkinson’s emerges from other findings in the study that overturn a deep-seated assumption about neurons and the immune system.

For decades, neurobiologists have thought that neurons are protected from attack by the immune system, in part because they do not display antigens on their cell surfaces. Most cells, if infected by a virus or bacterium, will display bits of the microbe (antigens) on their outer surface. When the immune system recognizes the foreign antigens, T cells attack and kill the cells. Because scientists thought that neurons did not display antigens, they also thought that neurons were exempt from T-cell attacks.

“That idea made sense because, except in rare circumstances, our brains cannot make new neurons to replenish ones killed by the immune system,” Dr. Sulzer says. “But, unexpectedly, we found that some types of neurons can display antigens.”

Cells display antigens with special proteins called MHCs. Using postmortem brain tissue donated to the Columbia Brain Bank by healthy donors, Dr. Sulzer and his postdoc Carolina Cebrián, PhD, first noticed—to their surprise—that MHC-1 proteins were present in two types of neurons. These two types of neurons—one of which is dopamine neurons in a brain region called the substantia nigra—degenerate during Parkinson’s disease.

To see if living neurons use MHC-1 to display antigens (and not for some other purpose), Drs. Sulzer and Cebrián conducted in vitro experiments with mouse neurons and human neurons created from embryonic stem cells. The studies showed that under certain circumstances—including conditions known to occur in Parkinson’s—the neurons use MHC-1 to display antigens. Among the different types of neurons tested, the two types affected in Parkinson’s were far more responsive than other neurons to signals that triggered antigen display.

The researchers then confirmed that T cells recognized and attacked neurons displaying specific antigens.

The results raise the possibility that Parkinson’s is partly an autoimmune disease, Dr. Sulzer says, but more research is needed to confirm the idea.

“Right now, we’ve shown that certain neurons display antigens and that T cells can recognize these antigens and kill neurons,” Dr. Sulzer says, “but we still need to determine whether this is actually happening in people. We need to show that there are certain T cells in Parkinson’s patients that can attack their neurons.”

If the immune system does kill neurons in Parkinson’s disease, Dr. Sulzer cautions that it is not the only thing going awry in the disease. “This idea may explain the final step,” he says. “We don’t know if preventing the death of neurons at this point will leave people with sick cells and no change in their symptoms, or not.”

(Source: newsroom.cumc.columbia.edu)

Filed under parkinson's disease autoimmune diseases immune system neurons antigens neuroscience science

Cognitive scientists use ‘I spy’ to show spoken language helps direct children’s eyes

In a new study, Indiana University cognitive scientists Catarina Vales and Linda Smith demonstrate that children spot objects more quickly when prompted by words than if they are only prompted by images.

Language, the study suggests, is transformative: More so than images, spoken language taps into children’s cognitive system, enhancing their ability to learn and to navigate cluttered environments. As such, the study, published last week in the journal Developmental Science, opens up new avenues for research into the way language might shape the course of developmental disabilities such as ADHD, difficulties with school, and other attention-related problems.

In the experiment, children played a series of “I spy” games, widely used to study attention and memory in adults. Asked to look for one image in a crowded scene on a computer screen, the children were shown a picture of the object they needed to find — a bed, for example, hidden in a group of couches.

"If the name of the target object was also said, the children were much faster at finding it and less distracted by the other objects in the scene," said Vales, a graduate student in the Department of Psychological and Brain Sciences.

"What we’ve shown is that in 3-year-old children, words activate memories that then rapidly deploy attention and lead children to find the relevant object in a cluttered array," said Smith, Chancellor’s Professor in the Department of Psychological and Brain Sciences. "Words call up an idea that is more robust than an image and to which we more rapidly respond. Words have a way of calling up what you know that filters the environment for you.”

The study, she said, “is the first clear demonstration of the impact of words on the way children navigate the visual world and is a first step toward understanding the way language influences visual attention, raising new testable hypotheses about the process.”

Vales said the use of language can change how people inspect the world around them.

"We also know that language will change the way people perform in a lot of different laboratory tasks," she said. "And if you have a child with ADHD who has a hard time focusing, one of the things parents are told to do is to use words to walk the child through what she needs to do. So there is this notion that words change cognition. The question is ‘how?’"

Vales said their research results “begin to tell us precisely how words help, the kinds of cognitive processes words tap into to change how children behave. For instance, the difference between search times with and without naming the target object indicates a key role for a kind of brief visual memory known as working memory, which helps us remember what we just saw as we look to something new. Words put ideas in working memory faster than images.”

For this reason, language may play an important role in a number of developmental disabilities.

"Limitations in working memory have been implicated in almost every developmental disability, especially those concerned with language, reading and negative outcomes in school," Smith said. "These results also suggest the culprit for these difficulties may be language in addition to working memory.

"This study changes the causal arrow a little bit. People have thought that children have difficulty with language because they don’t have enough working memory to learn language. This turns it around because it suggests that language may also make working memory more effective."

How does this matter to child development?

"Children learn in the real world, and the real world is a cluttered place," Smith said. "If you don’t know where to look, chances are you don’t learn anything. The words you know are a driving force behind attention. People have not thought about it as important or pervasive, but once children acquire language, it changes everything about their cognitive system."

"Our results suggest that language has huge effects, not just on talking, but on attention — which can determine how children learn, how much they learn and how well they learn," Vales said.

Filed under language child development neurodevelopmental disorders cognition working memory psychology neuroscience science

Our Brains are Hardwired for Language

People blog, they don’t lbog, and they schmooze, not mshooze. But why is this? Why are human languages so constrained? Can such restrictions unveil the basis of the uniquely human capacity for language?

A groundbreaking study published in PLOS ONE by Prof. Iris Berent of Northeastern University and researchers at Harvard Medical School shows the brains of individual speakers are sensitive to language universals. Syllables that are frequent across languages are recognized more readily than infrequent syllables. Simply put, this study shows that language universals are hardwired in the human brain.

LANGUAGE UNIVERSALS

Language universals have been the subject of intense research, but their basis remains elusive. Indeed, the similarities between human languages could result from a host of reasons that are tangential to the language system itself. Syllables like lbog, for instance, might be rare due to sheer historical forces, or because they are just harder to hear and articulate. A more interesting possibility, however, is that these facts could stem from the biology of the language system. Could the unpopularity of lbogs result from universal linguistic principles that are active in every human brain?

THE EXPERIMENT

To address this question, Dr. Berent and her colleagues examined the response of human brains to distinct syllable types—either ones that are frequent across languages (e.g., blif, bnif) or infrequent (e.g., bdif, lbif). In the experiment, participants heard one auditory stimulus at a time (e.g., lbif) and were then asked to determine whether the stimulus included one syllable or two while their brains were simultaneously imaged.

Results showed that syllables that are infrequent and ill-formed, as determined by their linguistic structure, were harder for people to process. Remarkably, a similar pattern emerged in participants’ brain responses: worse-formed syllables (e.g., lbif) placed different demands on the brain than well-formed syllables (e.g., blif).
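The hierarchy running through these stimuli (blif best-formed, then bnif, then bdif, with lbif worst) is standardly explained by the sonority sequencing principle: sonority should rise from the onset consonants toward the vowel. A minimal sketch of how that ranking can be computed, using an assumed simplified sonority scale (the scale and scoring are illustrative only, not the authors' stimuli or analysis):

```python
# Toy ranking of onset clusters by the sonority sequencing principle
# (SSP): within a syllable onset, sonority should rise toward the
# vowel. The sonority scale below is a simplified, assumed version
# of standard scales; it is not the study's stimulus set or analysis.

SONORITY = {
    # stops < fricatives < nasals < liquids < glides < vowels
    "p": 1, "b": 1, "t": 1, "d": 1, "k": 1, "g": 1,
    "f": 2, "v": 2, "s": 2, "z": 2,
    "m": 3, "n": 3,
    "l": 4, "r": 4,
    "w": 5, "y": 5,
    "a": 6, "e": 6, "i": 6, "o": 6, "u": 6,
}

def onset_rise(syllable: str) -> int:
    """Sonority rise across a two-consonant onset (C1 C2 ...).

    Larger values mean a better-formed onset under the SSP:
    a big rise (bl-) beats a plateau (bd-) beats a fall (lb-).
    """
    c1, c2 = syllable[0], syllable[1]
    return SONORITY[c2] - SONORITY[c1]

# The four stimulus types from the study, scored and ranked:
stimuli = ["blif", "bnif", "bdif", "lbif"]
print([onset_rise(s) for s in stimuli])               # [3, 2, 0, -3]
print(sorted(stimuli, key=onset_rise, reverse=True))  # best to worst
```

Sorting the four stimulus types by onset rise reproduces the hierarchy the study relies on: the larger the sonority rise into the vowel, the better-formed the syllable, even for syllables the participants have never heard.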

UNIVERSALLY HARDWIRED BRAINS

The localization of these patterns in the brain further sheds light on their origin. If the difficulty in processing syllables like lbif were due solely to unfamiliarity or to failures of acoustic processing and articulation, then such syllables should tax only the brain regions associated with memory for familiar words, audition, and motor control. In contrast, if the dislike of lbif reflects its linguistic structure, then the syllable hierarchy is expected to engage traditional language areas in the brain.

While syllables like lbif did, in fact, tax auditory brain areas, they exerted no measurable costs with respect to either articulation or lexical processing. Instead, it was Broca’s area—a primary language center of the brain—that was sensitive to the syllable hierarchy.

These results show for the first time that the brains of individual speakers are sensitive to language universals: the brain responds differently to syllables that are frequent across languages (e.g., bnif) relative to syllables that are infrequent (e.g., lbif). This is a remarkable finding given that participants (English speakers) have never encountered most of those syllables before, and it shows that language universals are encoded in human brains.

The fact that the brain activity engaged Broca’s area—a traditional language area—suggests that this brain response might be due to a linguistic principle. This result opens up the possibility that human brains share common linguistic restrictions on the sound pattern of language.

FURTHER EVIDENCE

This proposal is further supported by a second study, also co-authored by Dr. Berent, that recently appeared in the Proceedings of the National Academy of Sciences. That study shows that, like their adult counterparts, newborns are sensitive to the universal syllable hierarchy.

The findings from newborns are particularly striking because they have little to no experience with any such syllable. Together, these results demonstrate that the sound patterns of human language reflect shared linguistic constraints that are hardwired in the human brain already at birth.

Filed under language broca's area brain activity language universals linguistics psychology neuroscience science

Neurons in the Brain Tune into Different Frequencies for Different Spatial Memory Tasks

Your brain transmits information about your current location and memories of past locations over the same neural pathways using different frequencies of a rhythmic electrical activity called gamma waves, report neuroscientists at The University of Texas at Austin.

The research, published in the journal Neuron on April 17, may provide insight into the cognitive and memory disruptions seen in diseases such as schizophrenia and Alzheimer’s, in which gamma waves are disturbed.

Previous research has shown that the same brain region is activated whether we’re storing memories of a new place or recalling past places we’ve been.

“Many of us leave our cars in a parking garage on a daily basis. Every morning, we create a memory of where we parked our car, which we retrieve in the evening when we pick it up,” said Laura Colgin, assistant professor of neuroscience and member of the Center for Learning and Memory in The University of Texas at Austin’s College of Natural Sciences. “How then do our brains distinguish between current location and the memory of a location? Our new findings suggest a mechanism for distinguishing these different representations.”

Memory involving location is stored in an area of the brain called the hippocampus. The neurons in the hippocampus that store spatial memories (such as the location where you parked your car) are called place cells. The same set of place cells is activated both when a new memory of a location is stored and, later, when that memory is recalled or retrieved.

When the hippocampus forms a new spatial memory, it receives sensory information about your current location from a brain region called the entorhinal cortex. When the hippocampus recalls a past location, it retrieves the stored spatial memory from a subregion of the hippocampus called CA3.

The entorhinal cortex and CA3 transmit these different types of information using different frequencies of gamma waves. The entorhinal cortex uses fast gamma waves, which have a frequency of about 80 Hz (about the same frequency as a bass E note played on a piano). In contrast, CA3 sends its signals on slow gamma waves, which have a frequency of about 40 Hz.
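In recordings like these, slow and fast gamma are typically separated by band-pass filtering the local field potential (LFP). A minimal sketch under assumed band edges (roughly 25–55 Hz for slow gamma and 65–100 Hz for fast gamma); the synthetic signal and cutoffs are illustrative, not the study's actual data or analysis:

```python
# Minimal sketch: separate slow (~40 Hz) and fast (~80 Hz) gamma
# from a synthetic LFP trace with zero-phase band-pass filters.
# Band edges and the signal are assumptions for illustration.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000                          # sampling rate, Hz
t = np.arange(0, 2, 1 / fs)        # 2 s of synthetic "LFP"
rng = np.random.default_rng(0)
lfp = (np.sin(2 * np.pi * 40 * t)            # slow-gamma component
       + np.sin(2 * np.pi * 80 * t)          # fast-gamma component
       + 0.5 * rng.standard_normal(t.size))  # background noise

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

slow = bandpass(lfp, 25, 55, fs)    # assumed slow-gamma band
fast = bandpass(lfp, 65, 100, fs)   # assumed fast-gamma band

def peak_freq(x, fs):
    """Frequency of the largest peak in the amplitude spectrum."""
    spec = np.abs(np.fft.rfft(x))
    return np.fft.rfftfreq(x.size, 1 / fs)[spec.argmax()]

print(peak_freq(slow, fs))   # ~40 Hz
print(peak_freq(fast, fs))   # ~80 Hz
```

With the two bands isolated, one can then ask, as the study did, whether place-cell spikes preferentially align with the slow or the fast gamma rhythm at a given moment.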

Colgin and her colleagues hypothesized that fast gamma waves promote encoding of recent experiences, while slow gamma waves support memory retrieval.

They tested these hypotheses by recording gamma waves in the hippocampus, together with electrical signals from place cells, in rats navigating through a simple environment. They found that place cells represented the rat’s current location when cells were active on fast gamma waves. When cells were active on slow gamma waves, place cells represented locations in the direction that the rat was heading.

“These findings suggest that fast gamma waves promote current memory encoding, such as the memory of where we just parked,” said Colgin. “However, when we need to remember where we are going, like when finding our parked car later in the day, the hippocampus tunes into slow gamma waves.”

Because gamma waves are seen in many areas of the brain besides the hippocampus, Colgin’s findings may generalize beyond spatial memory. The ability for neurons to tune into different frequencies of gamma waves provides a way for the brain to traffic different types of information across the same neuronal circuits.

Colgin said one of the next steps in her team’s research will be to apply technologies that induce different types of gamma waves in rats performing memory tasks. She imagines that they will be able to improve new memory encoding by inducing fast gamma waves. Conversely, she expects that inducing slow gamma waves will be detrimental to the encoding of new memories. Those slow gamma waves should trigger old memories, which would interfere with new learning.

Filed under gamma waves entorhinal cortex hippocampus memory place cells neuroscience science

Study Connects Sleep Deficits Among Young Fruitflies to Disruption in Mating Later in Life
Mom always said you need your sleep, and it turns out, she was right. According to a new study published in Science this week from researchers at the Perelman School of Medicine at the University of Pennsylvania, lack of sleep in young fruit flies profoundly diminishes their ability to do one thing they do really, really well – make more flies.
The study, led by Amita Sehgal PhD, professor of Neuroscience and a Howard Hughes Medical Institute (HHMI) Investigator, links sleep disruption in newborn fruit flies with a critical adult behavior: courtship and mating.
The team, addressed sleep in the very youngest of flies. “These flies sleep considerably more than adults and that behavior repeats across the animal kingdom,” Sehgal says. “Infant humans, rats, and flies, they all sleep a lot.”
Co-author Matthew Kayser, MD, PhD, in the Department of Psychiatry and Center for Sleep and Circadian Neurobiology, whose research centers on the link between sleep disruption and human neuropsychiatric diseases, used the fly – which is far more genetically pliant than mammals — to ask two basic questions: Why do young animals sleep so much? And, what is the implication of altering those patterns?
The team used genetically manipulated flies to show that young flies normally produce relatively little dopamine – a wake-promoting neurotransmitter — in certain neural circuits that feed into the sleep-promoting brain region called the dorsal fan-shaped body (dFSB). Premature activation of those circuits profoundly inhibits the dFSB, reducing sleep.
That answers the first question, Sehgal explains: Young flies make less dopamine, which keeps the dFSB active and sleep levels high. These animals sleep more than adults and are harder to rouse from sleep.
Some clues to the second question – what is the consequence of sleep loss – came from Kayser’s finding that increased dopamine in young flies not only causes sleep loss, but also affects their ability to court when they’re older. “The flies spend less time courting, and those that do usually don’t make it all the way to the end,” Sehgal says.
To address whether sleep loss in young flies affects development of courtship circuits, the team investigated a group of neurons implicated in courtship. One particular subset of those neurons, localized in a specific brain region called VA1v, was smaller in sleep-deprived animals than in normal flies, suggesting a possible mechanism for how sleep deprivation can lead to altered courting behavior.
That sleep-deprived flies have altered behavior is not itself a novel finding, Sehgal notes. Earlier studies from her lab and others used mechanical disruption to alter sleep patterns, but in the current study, Sehgal’s team was able to drill down to the specific neural network that is affected. “We identified the circuit that is less active in young flies. If you activate that circuit, you disrupt courtship by impairing the development of a different, courtship-relevant circuit.”
The question now is how these findings relate to human behavior – Kayser’s original question. Though no direct lines can be drawn, the study “does provide the first mechanistic link between sleep in early life and adult behavior,” says Sehgal.

Filed under fruit flies mating dorsal fan-shaped body sleep sleep deprivation neuroscience science

87 notes

Research sheds new light on impact of diabetes on the brain
The new findings, published in the journal Diabetes Care, reveal the extent of damage that patients with the disease can endure in areas of the brain called ‘grey matter’ – a key component of the central nervous system involved in touch and pain sensory perception.
During the study, which involved patients with Type 1 and Type 2 diabetes, researchers used recent advances in groundbreaking brain imaging and analysis methods to take detailed nerve assessments of the brain using magnetic resonance imaging (MRI) techniques.
This revealed that the volume of certain brain regions in people with diabetic neuropathy was significantly lower compared to those without the disease. Previous studies have shown that the impact of the disease on the brain is limited and isolated to outside areas of the brain considered to be peripheral to core functions in the body.
The breakthrough could pave the way for better assessment and monitoring of the disease, which affects around a third of people with diabetes. This, in turn, could lead to better treatments for sufferers in the future.
Read more

Filed under diabetes gray matter CNS diabetic peripheral neuropathy brain structure

164 notes

Study finds modified stem cells offer potential pathway to treat Alzheimer’s disease
UC Irvine neurobiologists have found that genetically modified neural stem cells show positive results when transplanted into the brains of mice with the symptoms and pathology of Alzheimer’s disease. The pre-clinical study is published in the journal Stem Cell Research & Therapy, and the approach has been shown to work in two different mouse models.
Alzheimer’s disease, one of the most common forms of dementia, is associated with accumulation of the protein amyloid-beta in the brain in the form of plaques. While the search continues for a viable treatment, scientists are now looking into non-pharmaceutical ways to slow onset of this disease.
One option being considered is increasing the production of the enzyme neprilysin, which breaks down amyloid-beta and shows lower activity in the brains of people with Alzheimer’s disease. Researchers from UC Irvine investigated the potential of decreasing amyloid-beta by delivering neprilysin to mouse brains.
“Studies suggest that neprilysin decreases with age and may therefore influence the risk of Alzheimer’s disease,” said Mathew Blurton-Jones, an assistant professor of neurobiology & behavior. “If amyloid accumulation is the driving cause of Alzheimer’s disease, then therapies that either decrease amyloid-beta production or increase its degradation could be beneficial, especially if they are started early enough.”
The brain is protected by a system called the blood-brain barrier that restricts access of cells, proteins, and drugs to the brain. While the blood-brain barrier is important for brain health, it also makes it challenging to deliver therapeutic proteins or drugs to the brain. To overcome this, the researchers hypothesized that stem cells could act as an effective delivery vehicle. To test this hypothesis, the brains of two different mouse models (3xTg-AD and Thy1-APP) were injected with genetically modified neural stem cells that over-expressed neprilysin.
These genetically modified stem cells were found to produce 25 times more neprilysin than control neural stem cells, but were otherwise equivalent to the control cells. The genetically modified and control stem cells were then transplanted into the hippocampus or subiculum of the mouse brains – two areas of the brain that are greatly affected by Alzheimer’s disease. The mice transplanted with genetically modified stem cells were found to have a significant reduction in amyloid-beta plaques within their brains compared to the controls. The effect remained even one month after stem cell transplantation. This new approach could provide a significant advantage over unmodified neural stem cells because neprilysin-expressing cells could not only promote the growth of brain connections but could also target and reduce amyloid-beta pathology.
Before this can be investigated in humans, more work needs to be done to see if this approach affects the accumulation of soluble forms of amyloid-beta. Further investigation is also needed to determine whether it improves cognition more than the transplantation of unmodified neural stem cells.
“Every mouse model of Alzheimer’s disease is different and develops varying amounts, distribution, and types of amyloid-beta pathology,” Blurton-Jones said. “By studying the same question in two independent transgenic models, we can increase our confidence that these results are meaningful and applicable to Alzheimer’s disease. But there is clearly a great deal more research needed to determine whether this kind of approach could eventually be translated to the clinic.”

Filed under alzheimer's disease dementia stem cells blood-brain barrier neprilysin medicine neuroscience science

43 notes

(Image caption: The image depicts mice having a normal nerve (left) as compared to an incomplete nerve, a condition resulting in permanent downward gaze in both mice and humans. Image courtesy of Jeremy Duncan)
Researchers track down cause of eye mobility disorder
Imagine you cannot move your eyes up, and you cannot lift your upper eyelid. You walk through life with your head tilted upward so that your eyes look straight when they are rolled down in the eye socket. Obviously, such a condition should be corrected to allow people a normal position of their head. In order to correct this condition, one would need to understand why this happens.
In a paper published in the April 16 print issue of the journal Neuron, University of Iowa researchers Bernd Fritzsch and Jeremy Duncan and their colleagues at Harvard Medical School, along with investigator and corresponding author Elizabeth Engle, describe how their studies on mutated mice mimic human mutations.
It all started when Engle, a researcher at the Howard Hughes Medical Institute (HHMI), and Fritzsch, professor and departmental executive officer in the UI College of Liberal Arts and Sciences Department of Biology, began their interaction on the stimulation of eye muscles by their nerves, or “innervation,” around 20 years ago.
Approximately 10 years ago, Engle had identified the mutated genes in several patients with the eye movement disorder and subsequently developed a mouse with the same mutation she had identified in humans. However, while the effect on eye muscle innervation was comparable, there still was no clue as to why this should happen.
Fritzsch and his former biology doctoral student, Jeremy Duncan, worked with the Harvard researchers on a developmental study to find the point at which normal development of eye muscle innervations departs from the mutants. To their surprise, it happened very early in development. In fact, they found—only in mutant mice—a unique swelling in one of the nerves to the eye muscle.
More detailed analysis showed that these swellings came about because fibers extending to the eyes from the brain tried to leave the nerve as if they were already in the orbit, or eye socket. Since it happened so early, the researchers reasoned that something must be transported more effectively by this mutation to the motor neurons trying to reach the orbit and the eye muscles; something must be causing these motor neurons to assume they have already reached their target, the orbit of the eye.
To verify this enhanced function, the researchers developed another mouse that lacked the specific protein and found no defects in muscle innervation. Moreover, when they bred mice that carried malformed proteins with those that had none of these proteins, the mice developed a normal innervation.
This data provided clear evidence of what was going wrong and why, but it did not provide a clue as to the possible product that was more effectively transported in the mutant mice and, by logical extension, in humans. Further analysis revealed that breeding their mutant mice with another mutant having eye muscle innervation defects could enhance the effect of either mutation.
With this finding, they had identified the mutated protein, its enhanced function, and at least some of the likely cargo transported by this protein to allow normal innervation of eye muscles. This data provides the necessary level of understanding to design rational approaches to block the defect from developing.
Knowing what goes wrong and at what time during development can allow the problem to be corrected before it develops through proper manipulations. Engle, Fritzsch, and their collaborators currently are designing new approaches to rescue normal innervation in mice. In the future, their work may help families carrying such genetic mutations to have children with normal eye movement.

Filed under eye mobility disorder eye movements genetic mutation innervation motor neurons neuroscience science

58 notes

(Image caption: Newly discovered neuron type (yellow) helps zebrafish to coordinate its eye and swimming movements. The image shows the blue-stained brain of a fish larva with the suggested position of the eyes. Credit: © Max Planck Institute of Neurobiology/Kubo) 
How vision makes sure that little fish do not get carried away
Our eyes not only enable us to recognise objects; they also provide us with a continuous stream of information about our own movements. Whether we run, turn around, fall or sit still in a car – the world glides by us and leaves a characteristic motion trace on our retinas. Seemingly without effort, our brain calculates self-motion from this “optic flow”. This way, we can maintain a stable position and a steady gaze during our own movements. Together with biologists from the University of Freiburg, scientists from the Max Planck Institute of Neurobiology in Martinsried near Munich have now discovered an array of new types of neurons, which help the brain of zebrafish to perceive, and compensate for, self-motion.
Read more

Filed under zebrafish neurons neural circuits vision movement optic flow neuroscience science