Neuroscience

Articles and news from the latest research reports.

153 notes

Study Shows Where Alzheimer’s Starts and How It Spreads
Using high-resolution functional MRI (fMRI) in patients with Alzheimer’s disease and in mouse models of the disease, Columbia University Medical Center (CUMC) researchers have clarified three fundamental issues about Alzheimer’s: where it starts, why it starts there, and how it spreads. In addition to advancing understanding of Alzheimer’s, the findings could improve early detection of the disease, when drugs may be most effective. The study was published today in the online edition of the journal Nature Neuroscience.
“It has been known for years that Alzheimer’s starts in a brain region known as the entorhinal cortex,” said co-senior author Scott A. Small, MD, Boris and Rose Katz Professor of Neurology, professor of radiology, and director of the Alzheimer’s Disease Research Center. “But this study is the first to show in living patients that it begins specifically in the lateral entorhinal cortex, or LEC. The LEC is considered to be a gateway to the hippocampus, which plays a key role in the consolidation of long-term memory, among other functions. If the LEC is affected, other aspects of the hippocampus will also be affected.”
The study also shows that, over time, Alzheimer’s spreads from the LEC directly to other areas of the cerebral cortex, in particular, the parietal cortex, a brain region involved in various functions, including spatial orientation and navigation. The researchers suspect that Alzheimer’s spreads “functionally,” that is, by compromising the function of neurons in the LEC, which then compromises the integrity of neurons in adjoining areas.
A third major finding of the study is that LEC dysfunction occurs when changes in tau and amyloid precursor protein (APP) co-exist. “The LEC is especially vulnerable to Alzheimer’s because it normally accumulates tau, which sensitizes the LEC to the accumulation of APP. Together, these two proteins damage neurons in the LEC, setting the stage for Alzheimer’s,” said co-senior author Karen E. Duff, PhD, professor of pathology and cell biology (in psychiatry and in the Taub Institute for Research on Alzheimer’s Disease and the Aging Brain) at CUMC and at the New York State Psychiatric Institute.
In the study, the researchers used a high-resolution variant of fMRI to map metabolic defects in the brains of 96 adults enrolled in the Washington Heights-Inwood Columbia Aging Project (WHICAP). All of the adults were free of dementia at the time of enrollment.
“Dr. Richard Mayeux’s WHICAP study enables us to follow a large group of healthy elderly individuals, some of whom have gone on to develop Alzheimer’s disease,” said Dr. Small. “This study has given us a unique opportunity to image and characterize patients with Alzheimer’s in its earliest, preclinical stage.”
The 96 adults were followed for an average of 3.5 years, at which time 12 individuals were found to have progressed to mild Alzheimer’s disease. An analysis of the baseline fMRI images of those 12 individuals found significant decreases in cerebral blood volume (CBV) — a measure of metabolic activity — in the LEC compared with that of the 84 adults who were free of dementia.
A second part of the study addressed the role of tau and APP in LEC dysfunction. While previous studies have suggested that entorhinal cortex dysfunction is associated with both tau and APP abnormalities, it was not known how these proteins interact to drive this dysfunction, particularly in preclinical Alzheimer’s.
To answer this question, explained first author Usman Khan, an MD-PhD student based in Dr. Small’s lab, the team created three mouse models, one with elevated levels of tau in the LEC, one with elevated levels of APP, and one with elevated levels of both proteins. The researchers found that the LEC dysfunction occurred only in the mice with both tau and APP.
The study has implications for both research and treatment. “Now that we’ve pinpointed where Alzheimer’s starts, and shown that those changes are observable using fMRI, we may be able to detect Alzheimer’s at its earliest preclinical stage, when the disease might be more treatable and before it spreads to other brain regions,” said Dr. Small. In addition, say the researchers, the new imaging method could be used to assess the efficacy of promising Alzheimer’s drugs during the disease’s early stages.

Filed under alzheimer's disease entorhinal cortex aging memory dementia cognitive decline neuroscience science

538 notes

Why Do Our Brains Sometimes Mess Up Simple Calculations?

If the human brain is comparable to a computer, why does it so often make mistakes that its electronic counterpart does not? New research suggests it all has to do with how various problems are presented.
Scientists typically like to make this comparison because both the human brain and a computer follow a set of rules to make decisions, communicate and perform other tasks. However, University of Wisconsin-Madison cognitive scientist and psychology professor Gary Lupyan said people can get tripped up on even the simplest logic problems because they get caught up in contextual information.
For example, even a simple challenge like determining whether a number is odd or even can be tricky under the right circumstances. Lupyan said that a significant minority of people, even well-educated ones, can mistake a number such as 798 for an odd number – because, even though deep down we know that only the last digit determines whether a number is even or odd, we can be fooled by the presence of two odd digits.
“Most of us would attribute an error like that to carelessness, or not paying attention, but some errors may appear more often because our brains are not as well equipped to solve purely rule-based problems,” the professor, whose work appears in a recent edition of the journal Cognition, explained in a statement Friday.
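As an illustration (not from the study itself), the parity rule that participants violated is trivial to state in code – which is exactly why a computer applies it perfectly, undistracted by the other digits:

```python
def is_even(n: int) -> bool:
    """Parity is a purely rule-based property: it depends only on n % 2,
    which for a decimal number is decided by the last digit alone."""
    return n % 2 == 0

# 798 contains the odd digits 7 and 9, but its last digit is 8, so it is even.
print(is_even(798))   # True
print(798 % 10)       # 8 -- the only digit that matters for parity
```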
In multiple trials involving such tasks as sorting numbers, shapes and even people into easy categories like evens, triangles and grandmothers, Lupyan found study participants often broke simple rules based on context.
For instance, when asked to consider a contest open only to grandmothers, in which each eligible individual had an equal chance of winning, the subjects believed a 68-year-old woman with six grandchildren was more likely to emerge victorious than a 39-year-old woman with a single newborn grandchild.
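Treated purely as a rule-based problem, the contest has a one-line answer. A minimal sketch with hypothetical entrants, assuming only the stated rule that every eligible grandmother wins with equal probability:

```python
# Hypothetical entrants for illustration; the only rule is
# "every eligible entrant has an equal chance of winning".
entrants = [
    "68-year-old with six grandchildren",
    "39-year-old with one newborn grandchild",
]

# Under the rule, typicality is irrelevant: each entrant's chance is 1/N.
chances = {name: 1 / len(entrants) for name in entrants}
for name, p in chances.items():
    print(f"{name}: {p}")   # both entrants: 0.5
```

Participants' intuition weights the "typical" grandmother more heavily, even though the rule makes her typicality irrelevant.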
“Even though people can articulate the rules, they can’t help but be influenced by perceptual details,” he explained. “Thinking of triangles tends to involve thinking of typical, equilateral sorts of triangles. It is difficult to focus on just the rules that make a shape a triangle, regardless of what it looks like exactly.”
Lupyan said that in many cases, overlooking these types of rules is not detrimental – in fact, it can actually be beneficial when it comes to evaluating unfamiliar things. The lone exception, he said, is mathematics, where following the rules is unequivocally necessary to achieve a successful outcome.
“After all, although some people may mistakenly think that 798 is an odd number, not only can people follow such rules – though not always perfectly – we are capable of building computers that can execute such rules perfectly,” Lupyan said. “That itself required very precise, mathematical cognition. A big question is where this ability comes from and why some people are better at formal rules than other people.”
He added this issue could be especially important to math and science teachers: “Students approach learning with biases shaped both by evolution and day-to-day experience. Rather than treating errors as reflecting lack of knowledge or as inattention, trying to understand their source may lead to new ways of teaching rule-based systems while making use of the flexibility and creative problem solving at which humans excel.”

Filed under decision making perception mental representations human algorithms neuroscience science

430 notes

Childhood bullying shown to increase likelihood of psychotic experiences in later life
New research has shown that involvement in bullying during childhood is linked to an increased risk of psychotic experiences in adulthood, whether children are victims or perpetrators.
The study, published today in Psychological Medicine, followed a cohort of UK children (ALSPAC) from birth to assess the impact of bullying on psychosis in later life – with some groups shown to be almost five times more likely to suffer from psychotic experiences at the age of 18.
The analysis, led by researchers from the University of Warwick, in association with colleagues at the University of Bristol, shows that victims, perpetrators and those who are both bullies and victims (bully-victims), are at an increased risk of developing psychotic experiences.
Even when controlling for external factors such as family factors or pre-existing behaviour problems, the study found that not only those children who were bullied over a number of years (chronic victims), but also the bullies themselves in primary school were up to four and a half times more likely to have suffered from psychotic experiences by the age of 18. Equally concerning is that those children who only experienced bullying for brief periods (e.g. at 8 or 10 years of age) were at increased risk for psychotic experiences.
The term ‘psychotic experiences’ covers a range of experiences, from hearing voices and seeing things that are not there to paranoia. These experiences, if persistent, are highly distressing and disruptive to everyday life, and may be diagnosed by GPs or psychiatrists as “psychotic disorders” such as schizophrenia. Exact diagnosis is difficult and requires careful assessment, as in this study.
Professor Dieter Wolke of the University of Warwick explained, “We want to eradicate the myth that bullying at a young age could be viewed as a harmless rite of passage that everyone goes through – it casts a long shadow over a person’s life and can have serious consequences for mental health.”
“These numbers show exactly how much childhood bullying can impact on psychosis in adult life. It strengthens the evidence that reducing bullying in childhood could substantially reduce mental health problems. The benefit to society would be huge, but of course, the greatest benefit would be to the individual.”
Wolke’s team have previously looked at the impact of bullying on psychotic symptoms in 12 year olds, and there have been a range of short term studies that confirm the relation between being a victim of bullying and psychotic symptoms. This study, however, is the first to report the long term impact of being involved in bullying during childhood – whether victim, bully or bully-victim – on psychotic experiences in late adolescence or adulthood.
Professor Wolke added, “The results show that interventions against bullying should start early, in primary school, to prevent long term serious effects on children’s mental health. This clearly isn’t something that can wait until secondary school to be resolved; the damage may already have been done.”

Filed under bullying psychosis child development mental health psychology neuroscience science

133 notes

Two-way traffic in the spinal cord
The progress a baby makes in the first year of life is amazing: a newborn can only wave its arms and legs about randomly, but not so long after the baby can reach out and pick up a crumb from the carpet. What happens in the nervous system that enables this change from random waving to finely coordinated movement? Scientists from the Max Planck Institute of Neurobiology in Martinsried near Munich, working with colleagues from New York and Philadelphia, have described a new type of nerve cell in mice which provides a valuable insight into this developmental phenomenon. During embryonic development, the projections from these cells grow from the spinal cord towards the brain. They may pave the way for other nerve cells which control voluntary movement and which only grow from the brain into the spinal cord after birth.
When we reach out towards an object with our hand or push our foot into a boot, our movements are coordinated and controlled by the brain. For this to be possible there must be a neural pathway for the brain to transmit instructions, for example to the foot; and also in the reverse direction, for stimuli from the surroundings of the foot to be passed back to the brain. Such neural pathways are formed when the projections (axons) grow out from nerve cells during development. Depending on the organism and the body part to be connected, the axons can grow to many centimetres in length. Rüdiger Klein and his team at the Max Planck Institute of Neurobiology investigate how the axons navigate through the body, and which molecules play a part in their pathfinding. In particular, the scientists have been focusing on the signalling molecules known as ephrins and their binding partners, the Eph receptors. Ephrins and Eph receptors are located on the surface of nerve cells, among other places, and help the growing cells find their way and locate their partner cells.
Some time ago, Rüdiger Klein and his team discovered in the mouse that ephrins and Eph receptors play a key role in the development of the neural networks which control our movements. The neurobiologists have been able to demonstrate that the ephrin/Eph system guides nerve cells which, after birth, send their axons from the brain into the spinal cord and direct voluntary movement in the arms and legs. In their investigations into axons which run in the opposite direction, namely from the spinal cord into the brain, the researchers came across a new cell type which also contained Eph receptors. “Just where the ‘descending’ axons were growing, we found the ‘ascending’ axons running in parallel”, says Rüdiger Klein. “That obviously raised the question in our minds as to how this parallel growth is controlled during development.”
Subsequent research by the neurobiologists uncovered something surprising: in contrast with the known cells, the ascending axons of the new cell type did not grow only after birth, but instead already during embryonic development. Moreover, their growth was guided by the same ephrin/Eph signalling system as that involved in the growth of the descending axons. “It would seem that during embryonic development the ascending axons would ‘pre-drill’ a channel for the descending axons which do not grow out until after birth”, explains Rüdiger Klein.
Further investigations into the new, ascending nerve cells have made it clear that they obtain their input from specialised, touch-sensitive cells. A new feedback system could thus be involved here: voluntary movements are refined by signals from touch-sensitive cells, adapting the intended movement to the environment so that your foot slips into the boot. “What we found surprising is the fact that one and the same guidance system directs both the descending and the ascending axons”, says Klein. “This is a wonderful example of how a highly complex nervous system can be built up by making flexible use of individual molecules, and thus a small number of genes.” The next job for the scientists is to find out whether the suspected feedback system actually exists, i.e. whether the ascending and descending cells are connected via synapses. Their aim is to unravel step by step the developmental processes which enable the brain to coordinate sequences of movements.

Filed under spinal cord nerve cells embryonic development ephrins eph receptors neuroscience science

356 notes

Narcolepsy confirmed as autoimmune disease

Results also partly explain why the 2009 swine flu virus, and a vaccine against it, led to spikes in the sleep disorder.

As the H1N1 swine flu pandemic swept the world in 2009, China saw a spike in cases of narcolepsy — a mysterious disorder that involves sudden, uncontrollable sleepiness. Meanwhile, in Europe, around 1 in 15,000 children who were given Pandemrix — a now-defunct flu vaccine that contained fragments of the pandemic virus — also developed narcolepsy, a chronic disease.

Immunologist Elizabeth Mellins and narcolepsy researcher Emmanuel Mignot at Stanford University School of Medicine in California and their collaborators have now partly solved the mystery behind these events, while also confirming a longstanding hypothesis that narcolepsy is an autoimmune disease, in which the immune system attacks healthy cells.

Narcolepsy is mostly caused by the gradual loss of neurons that produce hypocretin, a hormone that keeps us awake. Many scientists had suspected that the immune system was responsible, but the Stanford team has found the first direct evidence: a special group of CD4+ T cells (a type of immune cell) that targets hypocretin and is found only in people with narcolepsy.

“Up till now, the idea that narcolepsy was an autoimmune disorder was a very compelling hypothesis, but this is the first direct evidence of autoimmunity,” says Mellins. “I think these cells are a smoking gun.” The study is published today in Science Translational Medicine.

Thomas Scammell, a neurologist at Harvard Medical School in Boston, Massachusetts, says that the results are welcome after “years of modest disappointment”, marked by many failures to find antibodies made by a person’s body against their own hypocretin. “It’s one of the biggest things to happen in the narcolepsy field for some time.”

Loose ends

It is not clear why some people make these T cells and others do not, but genetics may play a part. In earlier work, Mignot showed that 98% of people with narcolepsy carry a particular variant of an HLA (human leukocyte antigen) gene that is found in only 25% of the general population.
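As a back-of-envelope illustration of how strong that association is, the two frequencies quoted above imply an odds ratio of roughly 147. This is a quick calculation for illustration, not a figure reported by the study:

```python
# Odds ratio implied by the HLA variant frequencies quoted above:
# carried by 98% of narcolepsy patients vs 25% of the general population.
p_cases = 0.98      # carrier frequency among people with narcolepsy
p_controls = 0.25   # carrier frequency in the general population

odds_cases = p_cases / (1 - p_cases)           # 49 to 1
odds_controls = p_controls / (1 - p_controls)  # 1 to 3
odds_ratio = odds_cases / odds_controls

print(round(odds_ratio))  # → 147
```

An odds ratio this large is unusual for a common variant, which helps explain why an autoimmune mechanism seemed so plausible: strong HLA associations are a hallmark of autoimmune diseases.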

Environmental factors, such as infections, probably matter too. Mellins’ working model is that narcolepsy happens when people with a genetic predisposition, which involves having several narcolepsy-related gene variants, encounter an environmental factor that mimics hypocretin, triggering a response from the immune system. The 2009 H1N1 virus was one such trigger: the team found that these same special CD4+ T cells also recognize a protein from the pandemic H1N1 virus.

Narcolepsy, of course, existed long before the 2009 pandemic. And since new cases of the disease tend to arise right after winter, following the seasonal peak in flu, it is possible that other strains, or even other viruses, are involved too.

But the results do not fully explain the Pandemrix mystery, because other flu vaccines contained the same proteins but did not lead to a spike in narcolepsy cases. Regardless, Mellins says that it should be possible to avoid repeating the same mistake by ensuring that future flu vaccines do not contain components that resemble hypocretin.

Another loose end is that “they don’t show how these T cells are actually killing the hypocretin neurons”, adds Scammell. “It’s like a murder mystery and we don’t know who the real killer is.” He thinks that it is unlikely that the T cells are the true culprits; instead, they could be acting through an intermediary, or might merely be a symptom of some other destructive event.

“The results are very important, but they need to do a replication study in a large group of patients and controls,” says Gert Lammers, a neurologist at Leiden University Medical Center in the Netherlands and president of the European Narcolepsy Network. “If the findings are confirmed, the first important spin-off might be the development of a new diagnostic test.”

Filed under narcolepsy immune system sleep disorders hypocretin genes genetics neuroscience science

126 notes

Mapping objects in the brain

The ability to recognize objects in the environment is mediated by the brain’s ability to integrate and process massive amounts of visual information. A research group led by Takayuki Sato and Manabu Tanifuji from the RIKEN Brain Science Institute has now discovered that in macaque monkeys, this remarkable ability is supported by mosaic-like structures in the anterior inferior temporal (IT) cortex, where localized clusters of neurons encode different visual features in an organized hierarchy.

Two competing models have been proposed to explain the functional organization of brain regions that underlies object recognition in primates. One model states that discrete brain ‘modules’ process stimuli from particular categories, such as faces, with object recognition arising from communication among the modules. The other model postulates that the visual cortex extracts generic features, which are then composited to recognize specific objects. Since both models are based on measurements of functional signals produced by metabolic changes associated with neural activity rather than measurements of the neuronal activity itself, the precise underlying mechanism responsible for object recognition has remained unclear.

To resolve this debate, the researchers undertook dense electrophysiological mapping of neural activity in anesthetized macaque monkeys exposed to a series of color images from different object categories: faces, hands, bodies, food and various other objects. Sato and his colleagues directly recorded neuronal activity from multiple locations within the anterior IT cortex, which allowed them to track the location of neurons that responded to a particular object category.

The team found that some regions responded best to faces and others to monkey bodies. While there were also regions that responded worst to faces, none appeared to respond preferentially to hands, food or manufactured items.

Interestingly, small neuron clusters within a region appeared to be selective to different facial features, responding differently to human and monkey faces and to scrambled and normal faces. This indicates that a region in the anterior IT cortex that is selective for an object category consists of smaller-scale neuron clusters that are selective for particular visual features.

“The cortical mosaics that encode visual information seem to be efficient functional structures where object-category information and information about constituent features are represented within the limited space of the brain,” explains Sato. “This could also be the way that the brain organizes information in other sensory modalities, such as hearing.” If the results are also found to extend to humans, they may offer insight into the visual recognition of objects and the development of language.
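The site-by-site mapping described above can be illustrated with a toy computation. The firing rates below are invented numbers, and the selectivity index is a generic one, not necessarily the measure used by the RIKEN team:

```python
import numpy as np

# Invented mean firing rates (spikes/s) for four recording sites (rows)
# across five object categories (columns); not data from the study.
categories = ["face", "hand", "body", "food", "object"]
rates = np.array([
    [42.0,  8.0, 10.0,  7.0,  9.0],   # strongly face-selective site
    [ 6.0,  7.0, 35.0,  8.0, 10.0],   # body-selective site
    [12.0, 11.0, 13.0, 12.0, 11.5],   # weakly tuned site
    [40.0,  9.0,  8.0, 10.0, 11.0],   # another face-selective site
])

# Label each site by its preferred category and score how sharply tuned it is:
# (best - mean of the rest) / (best + mean of the rest), which lands near 1
# for sharply tuned sites and near 0 for untuned ones.
for site, row in enumerate(rates):
    b = int(row.argmax())
    rest = np.delete(row, b).mean()
    si = (row[b] - rest) / (row[b] + rest)
    print(f"site {site}: prefers {categories[b]}, selectivity {si:.2f}")
```

Clustering many such sites by their preferred category, as the mapping study did at much higher density, is what reveals the mosaic-like organization.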


Filed under brain mapping inferior temporal cortex object recognition neural activity neuroscience science

127 notes

New study reveals insight into how the brain processes shape and color

A new study by Wellesley College neuroscientists is the first to directly compare brain responses to faces and objects with responses to colors. The paper, by Bevil Conway, Wellesley Associate Professor of Neuroscience, and Rosa Lafer-Sousa, a 2009 Wellesley graduate currently studying in the Brain and Cognitive Sciences program at MIT, reveals new information about how the brain’s inferior temporal cortex processes information.

Located at the base of the brain, the inferior temporal cortex (IT) is a large expanse of tissue that has been shown to be critical for object perception. This region of the brain is commonly divided into posterior, central, and anterior parts, but it remains unclear whether these partitions constitute distinct areas. An existing, popular theory is that the parts represent a hierarchical organization of information processing, a notion that has previously been supported by functional magnetic resonance imaging (fMRI) in monkeys. For their study, Conway and Lafer-Sousa used non-invasive fMRI to measure responses across the brains of rhesus monkeys to a range of different stimuli and obtained responses to images of objects, faces, places and colored stripes. “The technique enabled us to determine the spatial distribution of responses across the brain, and has been useful in figuring out how the visual brain is organized,” Conway said.

Conway, a visual neuroscientist and artist, examines the way the nervous system processes color using physiological, behavioral, and modeling techniques. Conway and Lafer-Sousa assert that color provides a useful tool for tackling questions about processing in the IT region, as it has little “low-level” feature similarity with shapes (psychological work shows that color can be perceived independent of shape)—therefore any relationship between color-responsive and shape-responsive regions should reflect fundamental organizational principles.

"Shape and color are both properties of objects and are processed by the parts of the brain known to be important for detecting and discriminating objects. However, the way this part of the brain is organized has not been clear: for example, is color computed by different parts of this region than those that compute shape?" The answer to this question, Conway said, has deep implications for understanding the general computational principles used by the brain and how the brain evolved.

"Our work showed that, to a large extent, color and faces are handled by separate, parallel streams, and that these pieces of information are processed by connected, serial stages," Conway said. "One can imagine the processing as an assembly line, where some aspect of faces – and some aspect of color – is computed first. The output is then sent to another region downstream that makes a subsequent computation."

They hypothesized that the earliest stages in color processing involve detecting and discriminating hue, while the later stages compute color-memory associations. For example, the brain may first compute that yellow is diagnostic of banana; later, color categories are recognized, so that limes, grass, and fern leaves are all “green.”

"The most striking aspect of the study is what it reveals about the precision of the organization of the brain. We often think that because the brain consists of billions of neurons, at some level it must be quite variable how the neurons are organized," Conway said. "The study shows that there is a remarkable precision in the organization of the neural circuits for high-level vision, which will make tractable many questions bridging cognitive science and systems neuroscience."

As a visual artist, Conway said the aspect of the research he finds most satisfying is the beauty of the organizational patterns that, he said, are “clearly the result of a set of underlying organizational principles.” He continued, “It is interesting to think that the brain reflects what artists have long recognized: that color and shape can be decoupled, each represented somewhat independently—think of color monochromes versus black-and-white line drawings. The neural architecture provides a reason why this is effective or possible.”

The researchers note that it remains unclear whether the organizational principles found in monkeys apply to humans, an important issue that bears on cortical evolution. However, their results suggest that the IT comprises parallel, multi-stage processing networks subject to one organizing principle.


Filed under inferior temporal cortex visual processing object recognition neuroimaging neuroscience science

147 notes

Musical brain-reading sheds light on neural processing of music

Finnish and Danish researchers have developed a new method that performs decoding, or brain-reading, during continuous listening to real music. Based on recorded brain responses, the method predicts how certain features related to tone color and rhythm of the music change over time, and recognizes which piece of music is being listened to. The method also allows pinpointing the areas in the brain that are most crucial for the processing of music. The study was published in the journal NeuroImage.

Using functional magnetic resonance imaging (fMRI), the research team at the Finnish Centre of Excellence in Interdisciplinary Music Research in the Universities of Jyväskylä and Helsinki, and the Center for Functionally Integrative Neuroscience in Aarhus University, Denmark, recorded the brain responses of participants while they were listening to a 16-minute excerpt of the album Abbey Road by the Beatles. Following this, they used computational algorithms to extract a collection of musical features from the recording. Subsequently, they employed a collection of machine-learning methods to train a computer model that predicts how the features of the music change over time. Finally, they developed a classifier that predicts which part of the music the participant was listening to at each point in time.

The researchers found that most of the musical features included in the study could be reliably predicted from the brain data. They also found that the piece being listened to could be predicted significantly better than chance. However, fairly large differences in prediction accuracy were found between participants. An interesting finding was that areas outside of the auditory cortex, including motor, limbic, and frontal areas, had to be included in the models to obtain reliable predictions, thus providing evidence for the important role of these areas in the processing of musical features.
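The decoding pipeline described above can be sketched in miniature. Everything here is illustrative: the data are synthetic, and a closed-form ridge regression stands in for the study's machine-learning methods, which this summary does not specify:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 fMRI time points, 50 voxels, 3 musical features
# (think brightness, pulse clarity, key clarity) driving the voxel responses.
T, V, F = 200, 50, 3
features = rng.standard_normal((T, F))     # musical features over time
W_true = rng.standard_normal((F, V))       # hidden feature-to-voxel mapping
brain = features @ W_true + 0.5 * rng.standard_normal((T, V))

# Train on the first half of the scan, test on the second half.
Xtr, Xte = brain[:100], brain[100:]
Ytr, Yte = features[:100], features[100:]

# Ridge regression (closed form): predict musical features from brain data.
lam = 1.0
W = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(V), Xtr.T @ Ytr)
pred = Xte @ W

# Score each feature by the correlation between prediction and ground truth,
# a simple per-feature reliability measure.
corrs = [np.corrcoef(pred[:, f], Yte[:, f])[0, 1] for f in range(F)]
print([f"{c:.2f}" for c in corrs])
```

In the real study the model runs over whole-brain voxel patterns and cross-validated segments of the music; the point of the sketch is only the structure of the pipeline: extracted features, a learned brain-to-feature model, and a reliability score per feature.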

"We believe that decoding provides a method that complements other existing methods to obtain more reliable information about the complex processing of music in the brain", says Professor Petri Toiviainen from the University of Jyväskylä. "Our results provide additional evidence for the important involvement of emotional and motor areas in music processing."

(Source: jyu.fi)

Filed under auditory cortex neuroimaging music emotion neuroscience science

233 notes

The logistics of learning

Learning requires constant reconfiguration of the connections between nerve cells. Two new studies now yield new insights into the molecular mechanisms that underlie the learning process.

Learning and memory are made possible by the incessant reorganization of nerve connections in the brain. Both processes are based on targeted modifications of the functional interfaces between nerve cells – the so-called synapses – which alter their form, molecular composition and functional properties. In effect, connections between cells that are frequently activated together are progressively altered so that they respond to subsequent signals more rapidly and more strongly. This way, information can be encoded in patterns of synaptic activity and promptly recalled when needed. The converse is also true: learned behaviors can be lost through disuse, because inactive synapses are less likely to transmit an incoming impulse, which leads to the decay of such connections.

How exactly an individual synapse is altered without simultaneously affecting nearby nerve cells or other synapses on the same cell is a question that is central to Michael Kiebler’s research. Kiebler, a biochemist, holds the Chair of Cell Biology in the Faculty of Medicine at LMU. “It is now clear that the changes take place in the cell that is stimulated by synaptic input – the post-synaptic cell – and in particular in its so-called dendritic spines,” he says, “and particles known as ‘neuronal RNA granules’ deliver mRNA molecules to these sites.” These mRNAs represent the blueprints for the synthesis of the proteins responsible for reconfiguring the synapses. Kiebler’s team has developed a model which postulates that these granules migrate from dendrite to dendrite and release their mRNAs specifically at sites that are repeatedly activated. This would ensure that the relevant proteins are synthesized only where they are needed within the cell.

In spite of the potential significance of the model, the molecular mechanisms required for its realization have remained obscure. mRNA-binding proteins, including Staufen2 (Stau2) and Barentsz, are essential components of the granules, and Kiebler’s team, in collaboration with Giulio Superti-Furga’s group (CeMM, Vienna), have now used specific antibodies to isolate and characterize neuronal granules that contain either Stau2 or Barentsz.

Surprising diversity

It has generally been assumed that all neuronal RNA granules have essentially similar compositions. However, the new findings indicate that this is not the case. A comparison between Stau2- and Barentsz-containing granules reveals that they differ in about two-thirds of their proteins. “This suggests that the RNA granules are highly heterogeneous and dynamic in their composition,” says Kiebler. “And that makes sense to me, because it would mean that the granules can perform different functions depending on which mRNAs they carry.” Furthermore, the researchers have shown that the granules contain virtually none of the factors known to promote the translation of mRNAs into proteins. On the contrary, they include many molecules that repress protein synthesis. This in turn implies that the process of mRNA transport is uncoupled from the subsequent production of the proteins they encode.

In a complementary study, Kiebler’s team also characterized the mRNA cargoes associated with the granules. “Until now, none of the RNA molecules present in Stau2-containing granules in mammalian nerve cells had been defined, but we have now been able to identify many specific mRNAs,” Kiebler explains. Further experiments revealed that Stau2 stabilizes the mRNAs, allowing them to be used more often for the production of proteins. Moreover, the researchers have shown that specialized structures within these mRNAs, called “Staufen-Recognized Structures” (SRS), are essential for their recognition and stabilization by Stau2. “This allows us to propose a molecular mechanism for RNA recognition for the first time,” says Kiebler.

Taken together, the two new papers (1, 2) provide novel insights into the molecular mechanisms that underlie learning and memory. The scientists now want to dissect the details in future studies. “In the long term, we are particularly interested in the question of how an activated synapse can alter the state of the granules and induce the production of protein,” Kiebler notes. “It is becoming increasingly clear that RNA-binding proteins play essential roles in nerve cells. Disruption of their action can lead to neurodegenerative diseases and neurological dysfunction. Clearly, not only classical conditions such as Alzheimer’s or Parkinson’s disease, in which RNA-binding proteins are involved, but also cognitive defects or age-associated impairment of learning ability must be viewed in this context,” Kiebler concludes.

(Source: en.uni-muenchen.de)

Filed under neurodegenerative diseases memory learning neurons synapses protein synthesis neuroscience science

70 notes

Anti-epilepsy drugs can cause inflammations

Physicians at the Ruhr-Universität Bochum (RUB) have been investigating whether established anti-epilepsy drugs have anti-inflammatory or pro-inflammatory properties – an effect for which these pharmaceutical agents are not usually tested. One of the substances tested promoted inflammation, while another inhibited it. As inflammatory reactions in the brain may be the underlying cause of epileptic disorders, the trigger for the disorder should be taken into consideration when selecting drugs for treatment, the researchers concluded. They published their report in the journal “Epilepsia”.

Glial cells play a crucial role in the nervous system

Hannes Dambach from the Department for Neuroanatomy and Molecular Brain Research, together with a team of colleagues, studied how anti-epilepsy drugs affect the survival of glial cells in culture. Glial cells are the largest cell group in the brain; they are crucial for supplying neurons with nutrients and affect immune and inflammatory responses. The question of how glial cells are affected by anti-epilepsy drugs had previously not been studied in depth. The RUB work group Clinical Neuroanatomy, headed by Prof Dr Pedro Faustmann, analysed four substances: valproic acid, gabapentin, phenytoin and carbamazepine.

Four anti-epilepsy drugs affect glial cells in different ways

Glial cells treated by the researchers with valproic acid and gabapentin had better chances of survival than those treated with phenytoin and carbamazepine. However, carbamazepine had a positive effect, too: it reduced inflammatory responses. Valproic acid, on the other hand, turned out to be pro-inflammatory. The extent to which the anti-epilepsy drugs affected inflammation also depended on the applied dose. Consequently, different drugs affected glial cells – and hence indirectly the neurons – in different ways.

Inflammatory responses should be taken into consideration in clinical studies

“Clinical studies should focus not only on how strongly anti-epilepsy drugs affect the severity and frequency of epileptic seizures,” says Pedro Faustmann. “It is also necessary to test them with regard to the role they play in inflammatory responses in the central nervous system.” Thus, doctors could take the underlying inflammatory condition into consideration when selecting the right anti-epilepsy drug.

Epilepsy may have different causes

In Germany, between 0.5 and 1 percent of the population suffer from epilepsy that requires drug treatment. The disease may have many causes: genetic predisposition, or disorders of the central nervous system after meningitis, traumatic brain injury or stroke. Inflammatory responses may also be caused by damage to the brain.


Filed under inflammation glial cells epilepsy antiepileptic drugs microglia nervous system neuroscience science
