Neuroscience

Articles and news from the latest research reports.

Posts tagged neuroscience


Study points to possible treatment for brain disorders

Clemson University scientists are working to determine how neurons are generated, which is vital to providing treatment for neurological disorders like Tuberous Sclerosis Complex (TSC).

TSC is a rare genetic disease that causes tumors to grow in the brain and other vital organs and is often accompanied by disorders such as autism, epilepsy and cognitive impairment, which may arise from the abnormal generation of neurons.

“Current medicine is directed at inhibiting the mammalian target of rapamycin (mTOR), a common feature within these tumors that have abnormally high activity,” said David M. Feliciano, assistant professor of biological sciences. “However, current treatments have severe side effects, likely due to mTOR’s many functions and playing an important role in cell survival, growth and migration.”

Feliciano and colleagues published their findings in the journal Cell Reports.

“Neural stem cells generate the primary communicating cells of the brain called neurons through the process of neurogenesis, yet how this is orchestrated is unknown,” said Feliciano.

The stem cells lie at the core of brain development and repair, and alterations in the cells’ self-renewal and differentiation can have major consequences for brain function at any stage of life, according to researchers.

To better understand the process of neurogenesis, the researchers used a genetic approach known as neonatal electroporation to deliver pieces of DNA into neural stem cells in young mice, which allowed them to express and control specific components of the mTOR pathway.

The researchers found that when they increase activity of the mTOR pathway, neural stem cells make neurons at the expense of making more stem cells. They also found that this phenomenon is linked to a specific mTOR target known as 4E-BP2, which regulates the production of proteins. 

Ultimately, this study points to 4E-BP2 as a possible new treatment target for neurodevelopmental disorders like TSC, one that may have fewer side effects than current mTOR-directed therapies.

Future experiments are aimed at identifying which proteins are synthesized due to this pathway in neurological disorders.

Filed under: tuberous sclerosis complex, neurons, brain mapping, genetics, neuroscience, science


Major Alzheimer’s Risk Factor Linked to Red Wine Target

Buck Institute study provides insight for new therapeutics that target the interaction between ApoE4 and a Sirtuin protein

The major genetic risk factor for Alzheimer’s disease (AD), present in about two-thirds of people who develop the disease, is ApoE4, the cholesterol-carrying protein that about a quarter of us are born with. But one of the unsolved mysteries of AD is how ApoE4 confers risk for the incurable, neurodegenerative disease. In research published this week in the Proceedings of the National Academy of Sciences, researchers at the Buck Institute found a link between ApoE4 and SirT1, an “anti-aging protein” that is targeted by resveratrol, a compound present in red wine.

The Buck researchers found that ApoE4 causes a dramatic reduction in SirT1, which is one of seven human Sirtuins. Lead scientists Rammohan Rao, PhD, and Dale Bredesen, MD, founding CEO of the Buck Institute, say the reduction was found both in cultured neural cells and in brain samples from patients with ApoE4 and AD. “The biochemical mechanisms that link ApoE4 to Alzheimer’s disease have been something of a black box. However, recent work from a number of labs, including our own, has begun to open the box,” said Bredesen.

The Buck group also found that the abnormalities associated with ApoE4 and AD, such as the creation of phospho-tau and amyloid-beta, could be prevented by increasing SirT1. They have identified drug candidates that exert the same effect. “This research offers a new type of screen for Alzheimer’s prevention and treatment,” said Rao, an associate research professor at the Buck. “One of our goals is to identify a safe, non-toxic treatment that could be given to anyone who carries the ApoE4 gene to prevent the development of AD.”

In particular, the researchers discovered that the reduction in SirT1 was associated with a change in the way the amyloid precursor protein (APP) is processed. Rao said that ApoE4 favored the formation of the amyloid-beta peptide that is associated with the sticky plaques that are one of the hallmarks of the disease. He said with ApoE3 (which confers no increased risk of AD), there was a higher ratio of the anti-Alzheimer’s peptide, sAPP alpha, produced, in comparison to the pro-Alzheimer’s amyloid-beta peptide. This finding fits very well with the reduction in SirT1, since overexpressing SirT1 has previously been shown to increase ADAM10, the protease that cleaves APP to produce sAPP alpha and prevent amyloid-beta.

AD affects over 5 million Americans; there are no treatments known to cure the disease or even halt the progression of symptoms, which include loss of memory and language. Preventive treatments are particularly needed for the 2.5% of the population that carries two copies of the ApoE4 gene, which puts them at an approximately 10-fold higher risk of developing AD, as well as for the 25% of the population with a single copy. The group hopes that the current work will identify simple, safe therapeutics that can be given to ApoE4 carriers to prevent the development of Alzheimer’s disease.

(Source: buckinstitute.org)

Filed under: alzheimer's disease, dementia, resveratrol, ApoE4, SirT1, amyloid beta, genetics, neuroscience, science


Shorter Sleep Duration and Poorer Sleep Quality Linked to Alzheimer’s Disease Biomarker

Poor sleep quality may impact Alzheimer’s disease onset and progression. This is according to a new study led by researchers at the Johns Hopkins Bloomberg School of Public Health who examined the association between sleep variables and a biomarker for Alzheimer’s disease in older adults. The researchers found that reports of shorter sleep duration and poorer sleep quality were associated with a greater β-Amyloid burden, a hallmark of the disease. The results were published online in the October issue of JAMA Neurology.

“Our study found that among older adults, reports of shorter sleep duration and poorer sleep quality were associated with higher levels of β-Amyloid measured by PET scans of the brain,” said Adam Spira, PhD, lead author of the study and an assistant professor with the Bloomberg School’s Department of Mental Health. “These results could have significant public health implications as Alzheimer’s disease is the most common cause of dementia, and approximately half of older adults have insomnia symptoms.”

Alzheimer’s disease is an irreversible, progressive brain disease that slowly destroys memory and thinking skills. According to the National Institutes of Health, as many as 5.1 million Americans may have the disease, with first symptoms appearing after age 60. Previous studies have linked disturbed sleep to cognitive impairment in older people.

In a cross-sectional study of adults from the neuroimaging sub-study of the Baltimore Longitudinal Study of Aging, with an average age of 76, the researchers examined the association between self-reported sleep variables and β-Amyloid deposition. Study participants reported sleep durations ranging from no more than five hours to more than seven hours. β-Amyloid deposition was measured with the Pittsburgh compound B tracer and PET (positron emission tomography) scans of the brain. Reports of shorter sleep duration and lower sleep quality were both associated with greater Aβ buildup.

“These findings are important in part because sleep disturbances can be treated in older people. To the degree that poor sleep promotes the development of Alzheimer’s disease, treatments for poor sleep or efforts to maintain healthy sleep patterns may help prevent or slow the progression of Alzheimer’s disease,” said Spira. He added that the findings cannot demonstrate a causal link between poor sleep and Alzheimer’s disease, and that longitudinal studies with objective sleep measures are needed to further examine whether poor sleep contributes to or accelerates Alzheimer’s disease.

(Source: jhsph.edu)

Filed under: alzheimer's disease, dementia, sleep, neuroimaging, beta amyloid, insomnia, neuroscience, science


Learning New Skills Keeps an Aging Mind Sharp

Older adults are often encouraged to stay active and engaged to keep their minds sharp, to “use it or lose it.” But new research indicates that only certain activities, such as learning a mentally demanding skill like photography, are likely to improve cognitive functioning.

These findings, forthcoming in Psychological Science, a journal of the Association for Psychological Science, reveal that less demanding activities, such as listening to classical music or completing word puzzles, probably won’t bring noticeable benefits to an aging mind.

“It seems it is not enough just to get out and do something—it is important to get out and do something that is unfamiliar and mentally challenging, and that provides broad stimulation mentally and socially,” says psychological scientist and lead researcher Denise Park of the University of Texas at Dallas. “When you are inside your comfort zone you may be outside of the enhancement zone.”

The new findings provide much-needed insight into the components of everyday activities that contribute to cognitive vitality as we age.

“We need, as a society, to learn how to maintain a healthy mind, just like we know how to maintain vascular health with diet and exercise,” says Park. “We know so little right now.”

For their study, Park and colleagues randomly assigned 221 adults, ages 60 to 90, to engage in a particular type of activity for 15 hours a week over the course of three months.

Some participants were assigned to learn a new skill — digital photography, quilting, or both — which required active engagement and tapped working memory, long-term memory and other high-level cognitive processes.

Other participants were instructed to engage in more familiar activities at home, such as listening to classical music and completing word puzzles. And, to account for the possible influence of social contact, some participants were assigned to a social group that included social interactions, field trips, and entertainment.

At the end of three months, Park and colleagues found that the adults who were productively engaged in learning new skills showed improvements in memory compared to those who engaged in social activities or non-demanding mental activities at home.

“The findings suggest that engagement alone is not enough,” says Park. “The three learning groups were pushed very hard to keep learning more and mastering more tasks and skills. Only the groups that were confronted with continuous and prolonged mental challenge improved.”

The study is particularly noteworthy given that the researchers were able to systematically intervene in people’s lives, putting them in new environments and providing them with skills and relationships:

“Our participants essentially agreed to be assigned randomly to different lifestyles for three months so that we could compare how different social and learning environments affected the mind,” says Park. “People built relationships and learned new skills — we hope these are gifts that keep on giving, and continue to be a source of engagement and stimulation even after they finished the study.”

Park and colleagues are planning to follow up with the participants one year and five years down the road to see if the effects last over the long term. They believe the research has the potential to be profoundly important and relevant, especially as the number of seniors continues to rise:

“This is speculation, but what if challenging mental activity slows the rate at which the brain ages?” asks Park. “Every year that you save could be an added year of high quality life and independence.”


Filed under: aging, cognitive function, memory, learning, psychology, neuroscience, science


2 genetic wrongs make a biochemical right

In a biological quirk that promises to provide researchers with a new approach for studying and potentially treating Fragile X syndrome, scientists at the University of Massachusetts Medical School (UMMS) have shown that knocking out a gene important for messenger RNA (mRNA) translation in neurons corrects memory deficits and reduces behavioral symptoms in a mouse model of a prevalent human neurological disease. These results, published today in Nature Medicine, suggest that the prime cause of Fragile X syndrome may be a translational imbalance that results in elevated protein production in the brain. Restoration of this balance may be necessary for normal neurological function.

"Biology works in strange ways," said Joel Richter, PhD, professor of molecular medicine at UMMS and senior author on the study. "We corrected one genetic mutation with another, which in effect showed that two wrongs make a right. Mutations in each gene result in impaired brain function, but in our studies, we found that mutations in both genes result in normal brain function. This sounds counter-intuitive, but in this case that seems to be what has happened."

Fragile X syndrome, the most common form of inherited intellectual disability and the most frequent single-gene cause of autism, is a genetic condition resulting from a CGG repeat expansion in the Fragile X gene (Fmr1), which is required for normal neurological development. People with Fragile X suffer from intellectual disability as well as behavioral and learning challenges. Depending on the length of the CGG repeat, intellectual disabilities can range from mild to severe.

While scientists have identified the genetic mutation that causes Fragile X, on a molecular level they still don’t know much about how the disease works or what precisely goes wrong in the brain as a result. What is known is that the Fmr1 gene codes for the Fragile X protein (FMRP). This protein probably has several functions throughout the neuron but its main activity is to repress the translation of as many as 1,000 different mRNAs. By doing this, FMRP controls synaptic plasticity and higher brain function. Mice without the Fragile X gene, for instance, have a 15 to 20 percent overall elevation in neural protein production. It is thought that the inability to repress mRNA translation and the resulting increase in neural proteins may somehow hamper normal synaptic function in patients with Fragile X. But because FMRP binds so many mRNAs, and some proteins become more elevated than others, parsing which mRNA or combination of mRNAs is responsible for Fragile X pathology is a daunting task.

From Frog Egg to Fragile X

For years, Dr. Richter had been studying how translation, the process in which cellular ribosomes create proteins, went from dormant to active in frog eggs. He discovered the key gene controlling this process, the RNA binding protein CPEB. In 1998, Richter found the CPEB protein in the rodent brain where it played an important role in regulating how synapses talk to each other. At this point, his work began to move from exploring the role of CPEB in the developmental biology of the frog to how the CPEB protein impacted learning and memory. A serendipitous research symposium with colleagues at Cold Spring Harbor got him thinking about CPEB and Fragile X syndrome.

"Here I was, an outsider, a molecular biologist who had worked for years with frog eggs, in the same room with neurobiologists and neurologists, when they started talking about Fragile X syndrome and translational activity," said Richter. "It got me thinking that the CPEB protein might be a path to restoring the translational imbalance they were discussing."

Richter knew that CPEB stimulated translation and that FMRP repressed it. He also knew that animal models lacking the CPEB protein had memory deficits and that both proteins bound to many of the same mRNAs; the overlap may be as high as 33 percent. The thought was that taking away a protein that stimulated translation might counterbalance the loss of the repressor FMRP, thereby restoring translational homeostasis in the brain and normal neurological function.
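The counterbalancing logic can be sketched as a toy model. The only figure taken from the article is the roughly 15 to 20 percent elevation in neural protein synthesis in Fragile X mice; the assumption that removing CPEB lowers synthesis by a symmetric amount is illustrative, not a result from the study:

```python
# Toy model of translational balance in neurons (illustrative only; the
# real regulation involves roughly 1,000 mRNAs and is far more complex).
# FMRP represses translation; CPEB stimulates it. Losing FMRP alone
# raises protein synthesis by ~15-20% (per the article); losing CPEB as
# well is hypothesized to restore the balance.

def protein_synthesis(fmrp_present: bool, cpeb_present: bool) -> float:
    """Relative protein synthesis rate (1.0 = wild type)."""
    rate = 1.0
    if not fmrp_present:   # repressor lost -> synthesis rises
        rate *= 1.175      # midpoint of the 15-20% elevation cited
    if not cpeb_present:   # activator lost -> synthesis falls
        rate /= 1.175      # assumed symmetric effect (illustrative)
    return rate

print(protein_synthesis(True, True))    # wild type baseline
print(protein_synthesis(False, True))   # Fragile X model: elevated
print(protein_synthesis(False, False))  # double knockout: back to baseline
```

The point of the sketch is only that two opposite-signed perturbations can cancel, which is the "two wrongs make a right" intuition the researchers describe.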

"It was one of those kind of goofy ‘what if’ sort of things," said Richter.

To test his hypothesis, Richter and colleagues developed a double knockout mouse model that lacked both the Fmr1 gene, whose disruption causes Fragile X, and the CPEB gene. When they began measuring for Fragile X pathologies, what they found was almost too good to be true.

"We measured a host of factors, biochemical, morphological, electrophysiological and behavioral phenotypes," said Richter. "And we kept finding the same thing. By knocking out both the FMRP and CPEB genes we were able to restore levels of protein synthesis to normal and corrected the disease characteristics of the Fragile X mice, making them almost indistinguishable from wild type mice."

Most importantly, tests to evaluate short-term memory in the double knockout mice also showed normal results with no indications of Fragile X pathology. This suggested an experiment to test whether CPEB might be a potential therapeutic target for Fragile X to benefit patients. Richter and colleagues took adult Fragile X mice and injected a lentivirus that expresses a small RNA to knock down CPEB in the hippocampus, which is a brain region that is important for short-term memory. Subsequent tests showed improved short-term memory in these mice, indicating that at least this one characteristic of Fragile X syndrome, which is generally thought to be a developmental disorder, can be reversed in adults.

"People with Fragile X make too much protein," said Richter. "By using CPEB to recalibrate the cellular machinery that makes protein we’ve shown that tamping down this process has a profoundly good impact on mouse models with Fragile X. It may be that a similar approach could be beneficial for kids with this disease."

The next step for Richter and colleagues is to determine which, of the more than 300 mRNAs that both CPEB and FMRP bind to, contribute to Fragile X syndrome and how. They’ll also begin looking at small molecules and other avenues that, like the ablation of the CPEB protein, might be able to slow down the synthesis of protein. “There are several small molecules that we know affect the translational apparatus,” Richter said. “Some cross the blood/brain barrier, some are toxic, and some are not. We’d like to investigate those.”

"This is another, great example of how basic science translates to human disease," said Richter. "If we had started out looking at the human brain, not knowing about the CPEB protein and its role in translational activity, we wouldn’t have had any idea where to start or what to look for. But because we started out in the frog, where things are much easier to see, and because more often than not these processes are conserved, we’ve learned something new and totally unexpected that may have a profound impact on human disease."

(Source: eurekalert.org)

Filed under: fragile x syndrome, genetic mutations, Fmr1 gene, genetics, neuroscience, science


Rats! Humans and rodents face their errors

What happens when the brain recognizes an error? A new study shows that the brains of humans and rats adapt in a similar way to errors, using low-frequency brainwaves in the medial frontal cortex to synchronize neurons in the motor cortex. The finding could be important in studies of disorders of “adaptive control,” such as obsessive-compulsive disorder, ADHD, and Parkinson’s disease.

People and rats may think alike when they’ve made a mistake and are trying to adjust their thinking.

That’s the conclusion of a study published online Oct. 20 in Nature Neuroscience that tracked specific similarities in how human and rodent subjects adapted to errors as they performed a simple time estimation task. When members of either species made a mistake in the trials, electrode recordings showed that they employed low-frequency brainwaves in the medial frontal cortex (MFC) of the brain to synchronize neurons in their motor cortex. That action correlated with subsequent performance improvements on the task.

“These findings suggest that neuronal activity in the MFC encodes information that is involved in monitoring performance and could influence the control of response adjustments by the motor cortex,” wrote the authors, who performed the research at Brown University and Yale University.

The importance of the findings extends beyond a basic understanding of cognition, because they suggest that rat models could be a useful analog for humans in studies of how such “adaptive control” neural mechanics are compromised in psychiatric diseases.

“With this rat model of adaptive control, we are now able to examine whether novel drugs or other treatment procedures boost the integrity of this system,” said James Cavanagh, co-lead author of the paper who was at Brown when the research was done and has since become assistant professor of psychology at the University of New Mexico. “This may have clear translational potential for treating psychiatric diseases such as obsessive compulsive disorder, depression, attention deficit hyperactivity disorder, Parkinson’s disease and schizophrenia.”

To conduct the study, the researchers measured external brainwaves of human and rodent subjects after both erroneous and accurate performance on the time estimation task. They also measured the activity of individual neurons in the MFC and motor cortex of the rats in both post-error and post-correct circumstances.

The scientists also gave the rats a drug that blocked activity of the MFC. What they saw in those rats compared to rats who didn’t get the drug, was that the low-frequency waves did not occur in the motor cortex, neurons there did not fire coherently and the rats did not alter their subsequent behavior on the task.

Although the researchers were able to study the cognitive mechanisms in the rats in more detail than in humans, the direct parallels they saw in the neural mechanics of adaptive control were significant.

“Low-frequency oscillations facilitate synchronization among brain networks for representing and exerting adaptive control, including top-down regulation of behavior in the mammalian brain,” they wrote.

Filed under: motor cortex, medial frontal cortex, neurons, psychiatric disorders, brainwaves, rodents, animal model, neuroscience, science


Neuron ‘claws’ in the brain enable flies to distinguish one scent from another
Think of the smell of an orange, a lemon, and a grapefruit. Each has strong acidic notes mixed with sweetness. And yet each fresh, bright scent is distinguishable from its relatives. These fruits smell similar because they share many chemical compounds. How, then, does the brain tell them apart? How does the brain remember a complex and often overlapping chemical signature as a particular scent?
Researchers at Cold Spring Harbor Laboratory (CSHL) are using the fruit fly to discover how the brain integrates multiple signals to identify one unique smell. It’s work that has a broader implication for how flies – and ultimately, people – learn. In work published today in Nature Neuroscience, a team led by Associate Professor Glenn Turner describes how a group of neurons in the fruit fly brain recognize multiple individual chemicals in combination in order to define, or remember, a single scent.
The olfactory system of a fruit fly begins at the equivalent of our nose, where a series of neurons sense and respond to very specific chemicals. These neurons pass their signal on to a group of cells called projection neurons. Then the signal undergoes a transformation as it is passed to a body of neurons in the fly brain called Kenyon cells.
Kenyon cells have multiple, extremely large protrusions that grasp the projection neurons with a claw-like structure. Each Kenyon cell claw is wrapped tightly around only one projection neuron, meaning that it receives a signal from just one type of input. In addition to their unique structure, Kenyon cells are also remarkable for their selectivity.  Because they’re selective, they aren’t often activated. Yet little is known about what in fact makes them decide to fire a signal.
Turner and colleague Eyal Gruntman, who is lead author on their new paper, used cutting-edge microscopy to explore the chemical response profile for multiple claws on one Kenyon cell. They found that each claw, even on a single Kenyon cell, responded to different odor molecules. Additional experiments using light to stimulate individual neurons (a technique called optogenetics) revealed that single Kenyon cells were only activated when several of their claws were simultaneously stimulated, explaining why they so rarely fire. Taken together, this work explains how individual Kenyon cells can integrate multiple signals in the brain to “remember” the particular chemical mixture as a single, distinct odor.
Turner will next try to determine “what controls which claws are connected, and how strong those connections are.” This will provide insight into how the brain learns to assign a specific mix of chemicals as defining a particular scent. But beyond simple odor detection, the research has more general implications for learning. For Turner, the question driving his work forward is: what in the brain changes when you learn something?
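The integration rule described above – a Kenyon cell firing only when several of its claws are stimulated at once – amounts to what neuroscientists call coincidence detection. A minimal toy sketch of that idea (the threshold of three active claws below is an illustrative assumption, not a value from the paper):

```python
# Toy coincidence-detector model of a Kenyon cell (illustrative only).
# Each claw listens to exactly one projection neuron; the cell fires
# only when enough claws are active at the same moment.

def kenyon_cell_fires(claw_inputs, threshold=3):
    """claw_inputs: list of booleans, one per claw (True means that
    claw's projection neuron is active). threshold is an assumed
    number of coincident inputs needed for the cell to fire."""
    return sum(claw_inputs) >= threshold

# A single active claw is not enough to drive the cell...
print(kenyon_cell_fires([True, False, False, False, False]))  # False
# ...but simultaneous input on several claws makes it fire.
print(kenyon_cell_fires([True, True, True, False, False]))    # True
```

Because each claw responds to a different odor molecule, a rule like this makes the cell fire only for a particular combination of chemicals, which is consistent with why Kenyon cells are so rarely active.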

Filed under olfactory system fruit flies neurons Kenyon cells optogenetics neuroscience science

126 notes

Adolescence: When drinking and genes may collide

Many harmful drinking patterns, such as the transition into heavy alcohol use, first emerge during adolescence and can contribute both to long-term negative health outcomes and to the development of alcohol use disorders. A new study of adolescent drinking and its genetic and environmental influences has found that different trajectories of adolescent drinking are preceded by discernible gene-parenting interactions, specifically between the mu-opioid receptor (OPRM1) genotype and parental rule-setting.

Results will be published in the March 2014 issue of Alcoholism: Clinical & Experimental Research and are currently available at Early View.

"Heavy drinking in adolescence can lead to alcohol-related problems and alcohol dependence later in life," said Carmen Van der Zwaluw, an assistant professor at Radboud University Nijmegen as well as corresponding author for the study. "It has been estimated that 40 percent of adult alcoholics were already heavy drinkers during adolescence. Thus, tackling heavy drinking in adolescence may prevent later alcohol-related problems."

Van der Zwaluw said that both the dopamine receptor D2 (DRD2) and OPRM1 genes are known to play a large role in the neuro-reward mechanisms associated with the feelings of pleasure that result from drinking, as well as from eating, having sex, and the use of other drugs.

"Different genotypes may result in different neural responses to alcohol or different motivations to drink," she said. "For example, OPRM1 G-allele carriers have been shown to experience more positive feelings after drinking, and to drink more often to enhance their mood than people with the OPRM1 AA genotype. In addition, we chose to examine the influence of parental alcohol-specific rules because research has shown that, more than general measures of parental monitoring, alcohol-specific rule-setting has a considerable and consistent effect on adolescents’ drinking behavior."

Van der Zwaluw and her colleagues used data from the Dutch Family and Health study, which consisted of six yearly waves beginning in 2002 and included only adolescents born in the Netherlands. The 596 adolescents in the final sample (50 percent boys) were on average 14.3 years old at Time 1 (T1), 15.3 at T2, 16.3 at T3, 17.7 at T4, 18.7 at T5, and 19.7 at T6. Saliva samples were collected in the fourth wave to enable genetic testing. Participants were subsequently divided into three distinct groups of adolescent drinkers: light drinkers (n=346), moderate drinkers (n=178), and heavy drinkers (n=72).

"It was found that adolescent drinkers could be discriminated into three groups: light, moderate, and heavy drinkers," said Van der Zwaluw. "Comparisons between these three groups showed that light drinkers were more often carriers of the OPRM1 AA ‘non-risk’ genotype, and reported stricter parental rules than moderate drinkers. In the heavy drinking group, the G-allele carriers, but not those with the AA-genotype, were largely affected by parental rules: more rules resulted in lower levels of alcohol use."

Van der Zwaluw explained that although evidence for the genetic liability of heavy alcohol use has been shown repeatedly, debate continues over which genes are responsible for this liability, what the causal mechanisms are, and whether and how it interacts with environmental factors. “Longitudinal studies examining the development of alcohol use over time, in a stage of life that often precedes serious alcohol-related problems, can shed more light on these issues,” she said. “This paper confirms important findings of others; showing an association of the OPRM1 G-allele with adolescent alcohol use and an effect of parental rule-setting. Additionally, it adds to the literature by demonstrating that, depending on genotype, adolescents are differently affected by parental rules.”
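The finding that adolescents are differently affected by parental rules depending on genotype is, in statistical terms, a gene-by-environment interaction: the slope relating rule strictness to alcohol use differs between genotypes. A minimal toy sketch of that idea (the baseline and slope numbers below are made up for illustration and are not the study’s fitted estimates):

```python
# Toy gene-by-environment interaction (made-up coefficients, not the
# study's model): predicted alcohol use depends on parental rule
# strictness, but the strength of that effect differs by OPRM1 genotype.

def predicted_drinking(rule_strictness, g_allele_carrier):
    """rule_strictness: 0 (no alcohol-specific rules) to 1 (very strict).
    Returns a toy drinking score on an arbitrary scale."""
    baseline = 5.0
    # In this sketch, G-allele carriers respond strongly to rules,
    # while AA-genotype adolescents respond only weakly.
    slope = -4.0 if g_allele_carrier else -1.0
    return baseline + slope * rule_strictness

# Strict rules lower the toy score far more for G-allele carriers:
print(predicted_drinking(1.0, g_allele_carrier=True))   # 1.0
print(predicted_drinking(1.0, g_allele_carrier=False))  # 4.0
```

The point of the sketch is only the shape of the result: the same parenting effort produces different outcomes depending on the adolescent’s genotype, which is what an interaction effect means.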

The bottom line is that parents can be a positive influence, Van der Zwaluw noted. “This study shows that strict parental rules prevent youth from drinking more alcohol,” she said. “However, one should keep in mind that every adolescent responds differently to parenting efforts, and that the effects of parenting may depend on the genetic make-up of the adolescent.”

(Source: eurekalert.org)

Filed under adolescence alcohol genetics parenting neuroscience science

166 notes

How Subtle Movements and Facial Features Could Predict Your Demise

Princeton study shows that health assessments made by medically untrained interviewers can predict mortality of individuals better than those made by physicians or the individuals themselves

Features like the wrinkles on your forehead and the way you move may reflect your overall health and risk of dying, according to recent health research. But do physicians consider such details when assessing patients’ overall health and functioning?

In a survey of approximately 1,200 Taiwanese participants, Princeton University researchers found that interviewers — who were not health professionals but were trained to administer the survey — provided health assessments that were related to a survey participant’s risk of dying, in part because they were attuned to facial expressions, responsiveness and overall agility.

The researchers report in the journal Epidemiology that these assessments were more accurate predictors of dying than assessments made by physicians or by the individuals themselves. The findings show that survey interviewers, who typically spend a fair amount of time observing participants, can glean important information about participants’ health through careful observation.

"Your face and body reveal a lot about your life. We speculate that a lot of information about a person’s health is reflected in their face, movements, speech and functioning, as well as in the information explicitly collected during interviews," said Noreen Goldman, Hughes-Rogers Professor of Demography and Public Affairs in the Woodrow Wilson School.

Together with lead author of the paper and Princeton Ph.D. candidate Megan Todd, Goldman analyzed data collected by the Social Environment and Biomarkers of Aging Study (SEBAS). This study was designed by Goldman and co-investigator Maxine Weinstein at Georgetown University to evaluate the linkages among the social environment, stress and health. Beginning in 2000, SEBAS conducted extensive home interviews, collected biological specimens and administered medical examinations with middle-aged and older adults in Taiwan. Goldman and Todd used the 2006 wave of this study, which included both interviewer and physician assessments, for their analysis. They also included death registration data through 2011 to ascertain the survival status of those interviewed.  

The survey used in the study included detailed questions regarding participants’ health conditions and social environment. Participants’ physical functioning was evaluated through tasks that determined, for example, their walking speed and grip strength. Health assessments were elicited from participants, interviewers and physicians on identical five-point scales by asking “Regarding your/the respondent’s current state of health, do you feel it is excellent (5), good (4), average (3), not so good (2) or poor (1)?”

Participants answered this question near the beginning of the interview, before other health questions were asked. Interviewers assessed the participants’ health at the end of the survey, after administering the questionnaire and evaluating participants’ performance on a set of tasks, such as walking a short distance and getting up and down from a chair. And physicians — who were hired by the study and were not the participants’ primary care physicians — provided their assessments after physical exams and reviews of the participants’ medical histories. (Study investigators did not provide special guidance about how to rate overall health to any group.)

In order to understand the many variables that go into predicting mortality, Goldman and Todd factored into their statistical models such socio-demographic variables as sex, place of residence, education, marital status, and participation in social activities. They also considered chronic conditions, psychological wellbeing (such as depressive symptoms) and physical functioning to account for a fuller picture of health.

"Mortality is easy to measure because we have death records indicating when a person has died," Goldman said. "Overall health, on the other hand, is very complicated to measure but obviously very important for addressing health policy issues."

Two unexpected results emerged from Goldman and Todd’s analysis. The first: physicians’ ratings proved to be weak predictors of survival. “The physicians performed a medical exam equivalent to an annual physical exam, plus an abdominal ultrasound; they have specialized knowledge regarding health conditions,” Goldman explained. “Given access to such information, we anticipated stronger, more accurate predictions of death,” she said. “These results call into question previous studies’ assumptions that physicians’ ‘objective health’ ratings are superior to ‘subjective’ ratings provided by the survey participants themselves.”

In a second surprising finding, the team found that interviewers’ ratings were considerably more powerful for predicting mortality than self-ratings. This is likely, Goldman said, because interviewers considered respondents’ movements, appearance and responsiveness in addition to the detailed health information gathered during the interviews. Also, Goldman posits, interviewer ratings are probably less affected by bias than self-reports. 
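One common way to quantify how well a rating predicts subsequent mortality – not necessarily the models used in this study – is a concordance (AUC-type) statistic: the fraction of (deceased, surviving) pairs in which the person who died had received the lower health rating. A minimal sketch on made-up five-point ratings, not SEBAS data:

```python
# Toy illustration of comparing two sets of health ratings by how well
# they predict survival. Concordance = fraction of (died, survived)
# pairs in which the person who died had the lower rating (ties count
# half). 1.0 means perfect ordering, 0.5 means no predictive value.

def concordance(ratings, died):
    pairs = concordant = 0.0
    for i in range(len(ratings)):
        for j in range(len(ratings)):
            if died[i] and not died[j]:      # one death, one survivor
                pairs += 1
                if ratings[i] < ratings[j]:
                    concordant += 1
                elif ratings[i] == ratings[j]:
                    concordant += 0.5
    return concordant / pairs

died        = [True, True, False, False, False]
interviewer = [1, 2, 4, 3, 5]   # made-up 5-point ratings
self_rated  = [3, 4, 3, 2, 5]

# Both deaths got the lowest interviewer ratings, so ordering is perfect:
print(concordance(interviewer, died))  # 1.0
# Self-ratings here order the pairs less cleanly:
print(concordance(self_rated, died))
```

A statistic like this makes the article’s comparison concrete: a set of ratings is a "better predictor of mortality" when it more consistently ranks those who later died below those who survived.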

"The ‘self-rated health’ question is religiously used by health researchers and social scientists, and, although it has been shown to predict mortality, it suffers from many biases. People use it because it’s easy and simple,” Goldman continued. "But the problem with self-rated health is that we have no idea what reference group the respondent is using when evaluating his or her own health. Different ethnic and racial groups respond differently as do varying socioeconomic groups. We need other simple ways to rate individual health instead of relying so heavily on self-rated health."

One way, Goldman suggests, is by including interviewer ratings in surveys along with self-ratings: “This is a straightforward and cost-free addition to a questionnaire that is likely to improve our measurement of health in any population,” Goldman said.

(Source: wws.princeton.edu)

Filed under mortality health facial expressions physicians psychology neuroscience science

141 notes

The pig, the fish and the jellyfish: Tracing nervous disorders in humans

What do pigs, jellyfish and zebrafish have in common? It might be hard to discern the connection, but the different species are all pieces in a puzzle. A puzzle which is itself part of a larger picture of solving the riddles of diseases in humans.

The pig, the jellyfish and the zebrafish are being used by scientists at Aarhus University to, among other things, gain a greater understanding of hereditary forms of diseases affecting the nervous system. These include disorders such as Parkinson’s disease, Alzheimer’s disease, autism, epilepsy and the motor neurone disease ALS.

In a recently completed project, the scientists focussed on a specific gene in pigs. The gene, SYN1, encodes the protein synapsin, which is involved in communication between nerve cells. Synapsin occurs almost exclusively in nerve cells in the brain. Parts of the gene can thus be used to control the expression of genes connected to hereditary versions of the aforementioned disorders.

The pig
The SYN1 gene can, with its specific expression in nerve cells, be used to generate pig models of neurodegenerative diseases like Parkinson’s. The reason scientists bring a pig into the equation is that the pig is well suited as a model for investigating human diseases.

- Pigs are very like humans in their size, genetics, anatomy and physiology. There are plenty of them, so they are easily obtainable for research purposes, and it is ethically easier to use them than, for example, apes, says senior scientist Knud Larsen from Aarhus University.

Before the gene was transferred from humans to pigs, the scientists had to ensure that the SYN1 gene was only expressed in nerve cells. This was where the zebrafish entered the equation.

The zebrafish and the jellyfish
- The zebrafish is, as a model organism, the darling of researchers, because it is transparent and easy to genetically modify. We thus attached the relevant gene, SYN1, to a gene from a jellyfish (GFP), and put it into a zebrafish in order to test the specificity of the gene, explains Knud Larsen.

This is because jellyfish contain a gene that enables them to light up. This gene was transferred to the zebrafish alongside SYN1, so that the scientists could follow where in the fish activity occurred as a result of the SYN1 gene.

- We could clearly see that the transparent zebrafish shone green in its nervous system as a result of the SYN1 gene from humans initiating processes in the nervous system. We could thus conclude that SYN1 works specifically in nerve cells, says Knud Larsen.

The results of this investigation pave the way for the SYN1 gene being used in pig models for research into human diseases. The pig with the human gene SYN1 can presumably also be used for research into the development of the brain and nervous system in the foetus.

- I think it is interesting that the nervous system is so well conserved, from an evolutionary point of view, that you can observe a nerve-cell-specific expression of a pig gene in a zebrafish. It is impressive that something that works in a pig also works in a fish, says Knud Larsen.

Read the scientific article here.

Filed under nervous system neurodegenerative diseases synapsin zebrafish nerve cells neuroscience science
