Neuroscience

Articles and news from the latest research reports.

Posts tagged neuroscience

107 notes

Study finds cognitive performance can be improved in teens months, even years, after traumatic brain injury

Traumatic brain injuries from sports, recreational activities, falls or car accidents are the leading cause of death and disability in children and adolescents. While previously it was believed that the window for brain recovery was at most one year after injury, new research from the Center for BrainHealth at The University of Texas at Dallas, published online today in the open-access journal Frontiers in Neurology, shows that, given targeted brain training, cognitive performance can be significantly improved months, and even years, after injury.

"The after-effects of concussions and more severe brain injuries can be very different and more detrimental to a developing child or adolescent brain than an adult brain," said Dr. Lori Cook, study author and director of the Center for BrainHealth’s pediatric brain injury programs. "While the brain undergoes spontaneous recovery in the immediate days, weeks, and months following a brain injury, cognitive deficits may continue to evolve months to years after the initial brain insult when the brain is called upon to perform higher-order reasoning and critical thinking tasks."

Twenty adolescents, ages 12-20, who had experienced a traumatic brain injury at least six months prior to participating in the research and demonstrated gist reasoning deficits, or the inability to “get the essence” of dense information, were enrolled in the study. The participants were randomized into two cognitive training groups: strategy-based gist reasoning training versus fact-based memory training.

Participants completed eight 45-minute sessions over a one-month period. Researchers compared the effects of the two forms of training on the ability to abstract meaning and to recall facts. Testing included pre- and post-training assessments in which adolescents were asked to read several texts, craft a high-level summary by drawing on inferences to transform ideas into novel, generalized statements, and recall important facts.

After training, only the gist-reasoning group showed significant improvement in the ability to abstract meaning – a cognitive skill foundational to everyday functioning. Additionally, the gist-reasoning group showed significant generalized gains in untrained areas, including the executive functions of working memory (i.e., holding information in mind for use, such as performing mental addition or subtraction) and inhibition (i.e., filtering out irrelevant information). The gist-reasoning group also demonstrated increased memory for facts, even though this skill was not specifically targeted in training.
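The comparison at the heart of the design – pre- versus post-training scores in two randomized groups – can be sketched with hypothetical numbers. All scores, group sizes and variable names below are illustrative assumptions, not data from the study:

```python
# Hypothetical pre/post scores on a gist-reasoning measure (0-100 scale).
# None of these values come from the study; they only illustrate the design.
gist_group = {"pre": [52, 48, 55, 60, 50], "post": [68, 61, 70, 74, 66]}
fact_group = {"pre": [51, 49, 57, 58, 53], "post": [54, 50, 59, 60, 55]}

def mean_gain(group):
    """Average post-minus-pre improvement for one training group."""
    gains = [post - pre for pre, post in zip(group["pre"], group["post"])]
    return sum(gains) / len(gains)

print(f"gist-reasoning training gain: {mean_gain(gist_group):.1f}")  # 14.8
print(f"fact-based training gain:     {mean_gain(fact_group):.1f}")  # 2.0
```

A real analysis would of course test whether the between-group difference in gains is statistically significant, but the group-wise pre/post contrast is the core of the comparison described above.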

"These preliminary results are promising in that higher-order cognitive training that focuses on ‘big picture’ thinking improves cognitive performance in ways that matter to everyday life success," said Dr. Cook. "What we found was that training higher-order cognitive skills can have a positive impact on untrained key executive functions as well as lower-level, but also important, processes such as straightforward memory, which is used to remember details. While the study sample was small and a larger trial is needed, the real-life application of this training program is especially important for adolescents who are at a very challenging life-stage when they face major academic and social complexities. These cognitive challenges require reasoning, filtering, focusing, planning, self-regulation, activity management and combating ‘information overload,’ which is one of the chief complaints that teens with concussions express."

This research advances best practice by suggesting changes to common treatment schedules for traumatic brain injury and concussion. The ability to achieve cognitive gains through a brain training regimen at chronic stages of brain injury (six months or longer after the event) supports the need to monitor brain recovery annually and to offer treatment when deficits persist or emerge later.

"Brain injuries require routine follow-up monitoring. We need to make sure that optimized brain recovery continues to support later cognitive milestones, and that is especially true in the case of adolescents," said Dr. Sandra Bond Chapman, study author, founder and chief director of the Center for BrainHealth and Dee Wyly Distinguished University Chair at The University of Texas at Dallas. "What’s promising is that no matter the severity of the injury or the amount of time since injury, brain performance improved when teens were taught how to strategically process incoming information in a meaningful way, instead of just focusing on rote memorization."

(Source: brainhealth.utdallas.edu)

Filed under TBI brain injury concussions cognitive performance frontal lobe neuroscience science

119 notes

MRI brain scans detect people with early Parkinson’s
The new MRI approach can detect people who have early-stage Parkinson’s disease with 85% accuracy, according to research published in Neurology, the medical journal of the American Academy of Neurology.
'At the moment we have no way to predict who is at risk of Parkinson's disease in the vast majority of cases,' says Dr Clare Mackay of the Department of Psychiatry at Oxford University, one of the joint lead researchers. 'We are excited that this MRI technique might prove to be a good marker for the earliest signs of Parkinson's. The results are very promising.'
Claire Bale, research communications manager at Parkinson’s UK, which funded the work, explains: ‘This new research takes us one step closer to diagnosing Parkinson’s at a much earlier stage – one of the biggest challenges facing research into the condition. By using a new, simple scanning technique the team at Oxford University have been able to study levels of activity in the brain which may suggest that Parkinson’s is present. One person every hour is diagnosed with Parkinson’s in the UK, and we hope that the researchers are able to continue to refine their test so that it can one day be part of clinical practice.’
Parkinson’s disease is characterised by tremor, slow movement, and stiff and inflexible muscles. It’s thought to affect around 1 in 500 people, meaning there are an estimated 127,000 people in the UK with the condition. There is currently no cure for the disease, although there are treatments that can reduce symptoms and maintain quality of life for as long as possible.
Parkinson’s disease is caused by the progressive loss of a particular set of nerve cells in the brain, but this damage to nerve cells will have been going on for a long time before symptoms become apparent.
If treatments are to be developed that can slow or halt the progression of the disease before it affects people significantly, the researchers say, we need methods to be able to identify people at risk before symptoms take hold.
Conventional MRI cannot detect early signs of Parkinson’s, so the Oxford researchers used an MRI technique called resting-state fMRI, in which people are simply required to stay still in the scanner. They used the MRI data to look at the ‘connectivity’, or strength of brain networks, in the basal ganglia – part of the brain known to be involved in Parkinson’s disease.
The team compared 19 people with early-stage Parkinson’s disease while not on medication with 19 healthy people, matched for age and gender. They found that the Parkinson’s patients had much lower connectivity in the basal ganglia.
The researchers were able to define a cut-off or threshold level of connectivity. Falling below this level was able to predict who had Parkinson’s disease with 100% sensitivity (it picked up everyone with Parkinson’s) and 89.5% specificity (it picked up few people without Parkinson’s – there were few false positives).
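As a concrete illustration of how such a cut-off yields sensitivity and specificity, the sketch below classifies toy connectivity scores against a threshold. Every number here is invented for illustration; none are the study's data:

```python
# Hypothetical basal-ganglia connectivity scores (illustrative, not real data).
patients = [0.31, 0.35, 0.38, 0.40, 0.42, 0.44, 0.46]  # early-stage Parkinson's
controls = [0.44, 0.49, 0.52, 0.55, 0.58, 0.61, 0.66]  # matched healthy controls

threshold = 0.475  # scores below the cut-off are classed as Parkinson's

# Sensitivity: fraction of patients correctly falling below the cut-off.
sensitivity = sum(s < threshold for s in patients) / len(patients)
# Specificity: fraction of controls correctly staying at or above it.
specificity = sum(s >= threshold for s in controls) / len(controls)

print(f"sensitivity = {sensitivity:.2f}")  # 1.00: every patient is below the cut-off
print(f"specificity = {specificity:.2f}")  # 0.86: one control falls below it (a false positive)
```

This mirrors the pattern reported above: a threshold can catch everyone with the disease (100% sensitivity) while still misclassifying a few healthy people (specificity below 100%).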
Dr Mackay explains: ‘Our MRI approach showed a very strong difference in connectivity between those who had Parkinson’s disease and those that did not. So much so, that we wondered if it was too good to be true and carried out a validation test in a second group of patients. We got a similar result the second time.’
The scientists applied their MRI test to a second group of 13 early-stage Parkinson’s patients as a validation of the approach. They correctly identified 11 out of the 13 patients (85% accuracy).
'We think that our MRI test will be relevant for diagnosis of Parkinson's,' says joint lead researcher Dr Michele Hu of the Nuffield Department of Clinical Neurosciences at Oxford University and the Oxford University Hospitals NHS Trust. 'We tested it in people with early-stage Parkinson's. But because it is so sensitive in these patients, we hope it will be able to predict who is at risk of disease before any symptoms have developed. However, this is something that we still have to show in further research.'
To see if this is the case, the Oxford University researchers are now carrying out further studies of their MRI technique with people who are at increased risk of Parkinson’s.

Filed under parkinson's disease basal ganglia neuroimaging neuroscience science

102 notes

Mechanism explains complex brain wiring

How neurons are created and integrate with each other is one of biology’s greatest riddles. Researcher Dietmar Schmucker from VIB-KU Leuven unravels part of the mystery in the journal Science. He describes a mechanism that explains novel aspects of how the wiring of highly branched neurons in the brain works. These new insights into how complex neural networks form are very important for understanding and treating neurological diseases.

Neurons, or nerve cells
It is estimated that a person has 100 billion neurons, or nerve cells; they are the body’s information and signal processors. Neurons have thin, elongated, highly branched offshoots called dendrites and axons. The dendrites receive electrical impulses from other neurons and conduct them to the cell body. The cell body then decides whether the stimuli will or will not be transferred to other cells via the axon.

The brain’s wiring is very complex. Although the molecular mechanisms that explain the linear connection between neurons have already been described numerous times, little is as yet known about how the branched wiring works in the brain.

The connections between nerve cells
Prior research by Dietmar Schmucker and his team led to the identification of the Dscam1 protein in the fruit fly. A neuron can create many different variations, or isoforms, of this one protein. The specific set of isoforms present on a neuron’s cell surface determines the neuron’s unique molecular identity and plays an important role in establishing accurate connections. In other words, it explains why certain neurons either make contact with each other or repel each other.

Recent work by Haihuai He and Yoshiaki Kise from Dietmar’s team indicates that different sets of Dscam1 isoforms occur within a single axon, distributed among its newly formed branches. If this were not the case, only linear connections could form between neurons. These results show for the first time why different isoform sets of the same protein can occur in one neuron, and they could explain mechanistically how this contributes to the complex wiring of our brain.

Clinical impact
Although this research was done in fruit flies, it provides new insights that help explain the wiring and complex interactions of the human brain, and it shines new light on neurodevelopmental disorders such as autism. Thorough knowledge of how nerve cells are created and interact is considered essential for the future possibility of using stem cell therapy as a standard treatment for certain nervous system disorders.

Questions
Because this research may raise many questions, please direct any questions for your report or article to the email address VIB has made available for this purpose. All questions regarding this and other medical research can be sent to: patients@vib.be.

Relevant scientific publication
The above-mentioned research was published in the journal Science.

(Source: vib.be)

Filed under neurons Dscam1 axons dendrites fruit flies neural networks neuroscience science

233 notes

From contemporary syntax to human language’s deep origins



On the island of Java, in Indonesia, the silvery gibbon, an endangered primate, lives in the rainforests. In a behavior that’s unusual for a primate, the silvery gibbon sings: it vocalizes long, complicated songs using 14 different note types, songs that signal territory and send messages to potential mates and family.
Far from being a mere curiosity, the silvery gibbon may hold clues to the development of language in humans. In a newly published paper, two MIT professors assert that by re-examining contemporary human language, we can see indications of how human communication could have evolved from the systems underlying the older communication modes of birds and other primates.
From birds, the researchers say, we derived the melodic part of our language, and from other primates, the pragmatic, content-carrying parts of speech. Sometime within the last 100,000 years, those capacities fused into roughly the form of human language that we know today.
But how? Other animals, it appears, have finite sets of things they can express; human language is unique in allowing for an infinite set of new meanings. What allowed unbounded human language to evolve from bounded language systems?
“How did human language arise? It’s far enough in the past that we can’t just go back and figure it out directly,” says linguist Shigeru Miyagawa, the Kochi-Manjiro Professor of Japanese Language and Culture at MIT. “The best we can do is come up with a theory that is broadly compatible with what we know about human language and other similar systems in nature.”
Specifically, Miyagawa and his co-authors think that some apparently infinite qualities of modern human language, when reanalyzed, actually display the finite qualities of languages of other animals — meaning that human communication is more similar to that of other animals than we generally realized.
“Yes, human language is unique, but if you take it apart in the right way, the two parts we identify are in fact of a finite state,” Miyagawa says. “Those two components have antecedents in the animal world. According to our hypothesis, they came together uniquely in human language.”
Introducing the ‘integration hypothesis’
The current paper, “The Integration Hypothesis of Human Language Evolution and the Nature of Contemporary Languages,” is published this week in Frontiers in Psychology. The authors are Miyagawa; Robert Berwick, a professor of computational linguistics and computer science and engineering in MIT’s Laboratory for Information and Decision Systems; and Shiro Ojima and Kazuo Okanoya, scholars at the University of Tokyo.
The paper’s conclusions build on past work by Miyagawa, which holds that human language consists of two distinct layers: the expressive layer, which relates to the mutable structure of sentences, and the lexical layer, where the core content of a sentence resides. That idea, in turn, is based on previous work by linguistics scholars including Noam Chomsky, Kenneth Hale, and Samuel Jay Keyser.
The expressive layer and lexical layer have antecedents, the researchers believe, in the languages of birds and other mammals, respectively. For instance, in another paper published last year, Miyagawa, Berwick, and Okanoya presented a broader case for the connection between the expressive layer of human language and birdsong, including similarities in melody and range of beat patterns.
Birds, however, have a limited number of melodies they can sing or recombine, and nonhuman primates have a limited number of sounds they make with particular meanings. That would seem to present a challenge to the idea that human language could have derived from those modes of communication, given the seemingly infinite expression possibilities of humans.
But the researchers think certain parts of human language actually reveal finite-state operations that may be linked to our ancestral past. Consider a linguistic phenomenon known as “discontiguous word formation,” which involves sequences formed using the prefix “anti,” such as “antimissile missile,” or “anti-antimissile missile missile,” and so on. Some linguists have argued that this kind of construction reveals the infinite nature of human language, since the term “antimissile” can continually be embedded in the middle of the phrase.
However, as the researchers state in the new paper, “This is not the correct analysis.” The word “antimissile” is actually a modifier, meaning that as the phrase grows larger, “each successive expansion forms via strict adjacency.” That means the construction consists of discrete units of language. In this case and others, Miyagawa says, humans use “finite-state” components to build out their communications.
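One way to see the “strict adjacency” point is that each expansion applies the same fixed rule to the previous phrase – a simple loop, rather than unbounded nesting that must be tracked from the inside out. The snippet below is an illustrative reading of the example, not code or analysis from the paper:

```python
# Each step prefixes "anti(-)" to the previous phrase and appends one more
# "missile", with every new piece strictly adjacent to the existing string.
phrase = "missile"
for step in (1, 2):
    joiner = "" if step == 1 else "-"
    phrase = "anti" + joiner + phrase + " missile"
    print(phrase)
# -> antimissile missile
# -> anti-antimissile missile missile
```

The loop reproduces the cited phrases by repeatedly applying one adjacent-concatenation rule, which is the sense in which the construction can be described in finite-state terms.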
The complexity of such language formations, Berwick observes, “doesn’t occur in birdsong, and doesn’t occur anywhere else, as far as we can tell, in the rest of the animal kingdom.” Indeed, he adds, “As we find more evidence that other animals don’t seem to possess this kind of system, it bolsters our case for saying these two elements were brought together in humans.”
An inherent capacity
To be sure, the researchers acknowledge, their hypothesis is a work in progress. After all, Charles Darwin and others have explored the connection between birdsong and human language. Now, Miyagawa says, the researchers think that “the relationship is between birdsong and the expression system,” with the lexical component of language having come from primates. Indeed, as the paper notes, the most recent common ancestor between birds and humans appears to have existed about 300 million years ago, so there would almost have to be an indirect connection via older primates — even possibly the silvery gibbon.
As Berwick notes, researchers are still exploring how these two modes could have merged in humans, but the general concept of new functions developing from existing building blocks is a familiar one in evolution.
“You have these two pieces,” Berwick says. “You put them together and something novel emerges. We can’t go back with a time machine and see what happened, but we think that’s the basic story we’re seeing with language.”
Andrea Moro, a linguist at the Institute for Advanced Study IUSS, in Pavia, Italy, says the current paper provides a useful way of thinking about how human language may be a synthesis of other communication forms.
“It must be the case that this integration or synthesis [developed] from some evolutionary and functional processes that are still beyond our understanding,” says Moro, who edited the article. “The authors of the paper, though, provide an extremely interesting clue at the formal level.”
Indeed, Moro adds, he thinks the researchers are “essentially correct” about the existence of finite elements in human language, adding, “Interestingly, many of them involve the morphological level — that is, the level of composition of words from morphemes, rather than the sentence level.”
Miyagawa acknowledges that research and discussion in the field will continue, but says he hopes colleagues will engage with the integration hypothesis.
“It’s worthy of being considered, and then potentially challenged,” Miyagawa says.

Filed under language birdsong evolution linguistics psychology neuroscience science

216 notes

Gene mutation discovery could explain brain disorders in children
Researchers have discovered that mutations in one of the brain’s key genes could be responsible for impaired mental function in children born with an intellectual disability.
The research, published today in the journal Human Molecular Genetics, shows that the gene TUBB5 is essential for a healthy functioning brain.
It’s estimated that intellectual disability affects up to four per cent of people worldwide, and two per cent of all Australians. One of the ways in which intellectual disability occurs is through genetic mutations, which cause problems with normal fetal brain development.  
During fetal brain development, TUBB5 is essential for the proper placement and wiring of new neurons. When the gene is mutated, the brain, which sends and receives messages to the rest of the body, is impaired.
Lead researcher, Dr Julian Heng, from the Australian Regenerative Medicine Institute (ARMI) at Monash University, said genetic mutations to TUBB5 could be responsible for a range of intellectual disabilities. It could also affect the development of basic motor skills such as walking.
“TUBB5 works like a type of scaffolding inside neurons, enabling them to shape their connections to other neurons, so it’s essential for healthy brain development. If the scaffolding is faulty, in this case if TUBB5 mutates, it can have serious consequences,” Dr Heng said.
These new findings build on the team’s collaborative work with researchers in Austria, which led to the discovery of TUBB5 mutations in human brain disorders in 2012. By looking at just three unrelated patients with microcephaly, a rare brain condition in children, the team found striking similarities – each had a mutation to TUBB5. The team also provided the first evidence that the TUBB5 mutations were responsible for each patient’s disorder.
Dr Heng said the research could have important implications, not only for intellectual disabilities but also for a wide range of developmental disorders.
“Learning more about the TUBB5 gene and its mutations could reveal how it shapes the connections of neurons in normal and diseased brain states.
“We’re just at the beginning of this work but if we can understand why and how mutations occur to TUBB5, we may even be able to repair these mutations. In the future, we believe this work will enable us to develop new therapies to transform people’s lives,” Dr Heng said.
The work may potentially lead to new information about the causes and possible treatments for other brain developmental syndromes, including autism, a condition that affects as many as 1 in 160 children.
Dr Heng said that because TUBB5 belongs to a family of genes that produce the scaffolding in neurons, there is scope for further study into its impact.
“By learning what these scaffolding proteins do to help neurons make brain circuits, we might be able to pinpoint the underlying causes of a wide range of brain disorders in children, and develop more targeted treatments,” Dr Heng said.
Scientists believe that in the future this knowledge, combined with regenerative medicine techniques, could also aid the replacement of neurons in times of brain injury or disease.
The next phase of the research will be to develop a working model to better understand how TUBB5 can be targeted for gene therapy.

Filed under children TUBB5 brain disorders neurons genetics neuroscience science

209 notes

Real or Fake? Research Shows Brain Uses Multiple Clues for Facial Recognition
Faces fascinate. Babies love them. We look for familiar or friendly ones in a crowd. And video game developers and movie animators strive to create faces that look real rather than fake. Determining how our brains decide what makes a face “human” and not artificial is a question Dr. Benjamin Balas of North Dakota State University, Fargo, and of the Center for Visual and Cognitive Neuroscience, studies in his lab. New research by Balas and NDSU graduate Christopher Tonsager, published online in the London-based journal Perception, shows that it takes more than eyes to make a face look human.
Researchers study the brain to learn how its specialized circuits process information in seconds to distinguish whether faces are real or fake. Balas and Tonsager note that people interact with artificial faces and characters in video games, watch them in movies, and see artificial faces used more widely as social agents in other settings. “Whether or not a face looks real determines a lot of things,” said Balas, assistant professor of psychology. “Can it have emotions? Can it have plans and ideas? We wanted to know what information you use to decide if a face is real or artificial, since that first step determines a number of judgments that follow.”
Results of the study show that people combine information across many parts of the face to make decisions about how “alive” it is, and that the appearances of these regions interact with each other. Previous research suggests that eyes are especially important for facial recognition. The NDSU study found, however, that when you’re deciding if a face is real or artificial, the eyes and the skin both matter to about the same degree.
Tonsager, then an undergraduate researcher in psychology, and Balas recruited 45 study participants, who were evaluated while viewing altered facial images. Tonsager cropped images of real faces so only the face and neck showed, without any hair. A program known as FaceGen Modeller was used to transform the images into 3D computer-generated models of faces. Photos were then computer-manipulated into negative images. In two experiments, transformations to real and artificial faces were used to determine whether contrast negation affected the ability to judge a face as real or artificial, and whether the eyes make a disproportionate contribution to animacy discrimination relative to the rest of the face.
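Contrast negation itself is a simple image operation: every pixel intensity is inverted, so light regions become dark and vice versa while spatial structure is preserved. A minimal sketch of the manipulation (illustrative only, not the authors' actual stimulus-preparation code):

```python
import numpy as np

def negate_contrast(image: np.ndarray) -> np.ndarray:
    """Contrast-negate an 8-bit image: each pixel value v becomes
    255 - v, inverting light and dark without moving any features."""
    return 255 - image.astype(np.uint8)

# Toy 2x2 grayscale "image": bright skin-like patch with one dark pixel.
patch = np.array([[200, 180],
                  [ 30, 210]], dtype=np.uint8)
negated = negate_contrast(patch)
print(negated)  # [[ 55  75]
                #  [225  45]]
```

Negating twice recovers the original image, which is why the manipulation degrades surface cues like skin tone while leaving shape information intact.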
“We assumed that the eyes were the key in distinguishing real vs. computer generated, but to our surprise, the results were not significant enough for us to conclude this,” said Tonsager. “However, we did find that when the skin tone is negated, it was more difficult for our participants to determine if it was a real or artificial face. The research leads us to conclude that the entire ‘eye region’ might play a substantial role in the distinction between real or artificial.”
“Beyond telling us more about the distinction your brain makes between a face and a non-face, our results are also relevant to anybody who wants to develop life-like computer graphics,” explained Balas. “Developing artificial faces that look real is a growing industry, and we know that artificial faces that aren’t quite right can look downright creepy. Our work, both in the current paper and ongoing studies in the lab, has the potential to inform how designers create new and better artificial faces for a range of applications.”
Balas and Tonsager also presented their research findings at the Vision Sciences Society 13th Annual Meeting, May 16-21, in St. Petersburg, Florida. http://www.visionsciences.org/meeting.html

Filed under facial recognition artificial face face perception visual perception psychology neuroscience science

215 notes

(Image caption: At left, the brains of adults who had ADHD as children but no longer have it show synchronous activity between the posterior cingulate cortex (the larger red region) and the medial prefrontal cortex (smaller red region). At right, the brains of adults who continue to experience ADHD do not show this synchronous activity. Illustration: Jose-Luis Olivares/MIT, based on images courtesy of the researchers)
Inside the adult ADHD brain
About 11 percent of school-age children in the United States have been diagnosed with attention deficit hyperactivity disorder (ADHD). While many of these children eventually “outgrow” the disorder, some carry their difficulties into adulthood: About 10 million American adults are currently diagnosed with ADHD.
In the first study to compare patterns of brain activity in adults who recovered from childhood ADHD and those who did not, MIT neuroscientists have discovered key differences in a brain communication network that is active when the brain is at wakeful rest and not focused on a particular task. The findings offer evidence of a biological basis for adult ADHD and should help to validate the criteria used to diagnose the disorder, according to the researchers.
Diagnoses of adult ADHD have risen dramatically in the past several years, with symptoms similar to those of childhood ADHD: a general inability to focus, reflected in difficulty completing tasks, listening to instructions, or remembering details.
“The psychiatric guidelines for whether a person’s ADHD is persistent or remitted are based on lots of clinical studies and impressions. This new study suggests that there is a real biological boundary between those two sets of patients,” says MIT’s John Gabrieli, the Grover M. Hermann Professor of Health Sciences and Technology, professor of brain and cognitive sciences, and an author of the study, which appears in the June 10 issue of the journal Brain.
Shifting brain patterns
This study focused on 35 adults who were diagnosed with ADHD as children; 13 of them still have the disorder, while the rest have recovered. “This sample really gave us a unique opportunity to ask questions about whether or not the brain basis of ADHD is similar in the remitted-ADHD and persistent-ADHD cohorts,” says Aaron Mattfeld, a postdoc at MIT’s McGovern Institute for Brain Research and the paper’s lead author.
The researchers used a technique called resting-state functional magnetic resonance imaging (fMRI) to study what the brain is doing when a person is not engaged in any particular activity. These patterns reveal which parts of the brain communicate with each other during this type of wakeful rest.
“It’s a different way of using functional brain imaging to investigate brain networks,” says Susan Whitfield-Gabrieli, a research scientist at the McGovern Institute and the senior author of the paper. “Here we have subjects just lying in the scanner. This method reveals the intrinsic functional architecture of the human brain without invoking any specific task.”
In people without ADHD, when the mind is unfocused, there is a distinctive synchrony of activity in brain regions known as the default mode network. Previous studies have shown that in children and adults with ADHD, two major hubs of this network — the posterior cingulate cortex and the medial prefrontal cortex — no longer synchronize.
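In resting-state analyses like this one, "synchrony" between two regions is typically quantified as the Pearson correlation of their BOLD time series. A minimal sketch, using simulated signals as stand-ins for real fMRI data:

```python
import numpy as np

def functional_connectivity(ts_a: np.ndarray, ts_b: np.ndarray) -> float:
    """Pearson correlation between two regional BOLD time series;
    values near +1 indicate the hub synchrony described above."""
    return float(np.corrcoef(ts_a, ts_b)[0, 1])

rng = np.random.default_rng(0)
t = np.linspace(0, 60, 240)          # illustrative 1-minute recording
shared = np.sin(0.5 * t)             # common slow fluctuation
pcc  = shared + 0.3 * rng.standard_normal(t.size)  # posterior cingulate cortex
mpfc = shared + 0.3 * rng.standard_normal(t.size)  # medial prefrontal cortex

r = functional_connectivity(pcc, mpfc)
print(f"PCC-mPFC connectivity: r = {r:.2f}")  # strongly positive for synchronized hubs
```

Because both simulated regions share the same slow fluctuation plus independent noise, their correlation is high; in persistent ADHD, the finding is that this correlation between the real hubs is markedly reduced.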
In the new study, the MIT team showed for the first time that in adults who had been diagnosed with ADHD as children but no longer have it, this normal synchrony pattern is restored. “Their brains now look like those of people who never had ADHD,” Mattfeld says.
“This finding is quite intriguing,” says Francisco Xavier Castellanos, a professor of child and adolescent psychiatry at New York University who was not involved in the research. “If it can be confirmed, this pattern could become a target for potential modification to help patients learn to compensate for the disorder without changing their genetic makeup.”
Lingering problems
However, in another measure of brain synchrony, the researchers found much more similarity between both groups of ADHD patients.
In people without ADHD, when the default mode network is active, another network, called the task positive network, is suppressed. When the brain is performing tasks that require focus, the task positive network takes over and suppresses the default mode network. If this reciprocal relationship degrades, the ability to focus declines.
Both groups of adult ADHD patients, including those who had recovered, showed patterns of simultaneous activation of both networks. This is thought to be a sign of impairment in executive function — the management of cognitive tasks — that is separate from ADHD, but occurs in about half of ADHD patients. All of the ADHD patients in this study performed poorly on tests of executive function. “Once you have executive function problems, they seem to hang in there,” says Gabrieli, who is a member of the McGovern Institute.
The researchers now plan to investigate how ADHD medications influence the brain’s default mode network, in hopes that this might allow them to predict which drugs will work best for individual patients. Currently, about 60 percent of patients respond well to the first drug they receive.
“It’s unknown what’s different about the other 40 percent or so who don’t respond very much,” Gabrieli says. “We’re pretty excited about the possibility that some brain measurement would tell us which child or adult is most likely to benefit from a treatment.”

Filed under ADHD neuroimaging prefrontal cortex default mode network neuroscience science

185 notes

"All systems go" for a paralyzed person to kick off the World Cup
The Walk Again Project is an international collaboration of more than one hundred scientists, led by Prof. Miguel Nicolelis of Duke University and the International Institute for Neurosciences of Natal, Brazil. Prof. Gordon Cheng, head of the Institute for Cognitive Systems at the Technische Universität München (TUM), is a leading partner.
Eight Brazilian patients, men and women between 20 and 40 years of age who are paralyzed from the waist down, have been training for months to use the exoskeleton. The system works by recording electrical activity in the patient’s brain, recognizing his or her intention – such as to take a step or kick a ball – and translating that to action. It also gives the patient tactile feedback using sensitive artificial skin created by Cheng’s institute.
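The loop described here — decode an intention from brain activity, issue a motor command, return tactile feedback — can be sketched very loosely in a few lines. The features, weights, and command set below are hypothetical stand-ins, not the Walk Again Project's actual decoder:

```python
import numpy as np

COMMANDS = ["stand", "step_left", "step_right", "kick"]

def decode_intention(eeg_features: np.ndarray, weights: np.ndarray) -> str:
    """Map a vector of EEG features to the most likely intended
    command with a simple linear classifier (trained offline)."""
    scores = weights @ eeg_features
    return COMMANDS[int(np.argmax(scores))]

def tactile_feedback(ground_pressure: float) -> int:
    """Convert foot-sole pressure (0..1) into a vibration-motor
    intensity (0..255) delivered against the patient's arm."""
    return int(np.clip(ground_pressure, 0.0, 1.0) * 255)

rng = np.random.default_rng(1)
weights = rng.standard_normal((len(COMMANDS), 8))  # placeholder for a trained model
features = rng.standard_normal(8)                  # placeholder for band powers

cmd = decode_intention(features, weights)
print(cmd, tactile_feedback(0.8))
```

The key design point the sketch captures is the closed loop: classification output drives the exoskeleton while pressure sensing flows back to the patient's skin, which is what lets training incorporate the robot into the body schema.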
The feeling of touching the ground
Inspiration for this so-called CellulARSkin technology – as well as for the Walk Again Project itself – came from a 2008 collaboration. As Cheng sums up that complex and widely reported experiment, “Miguel set up a monkey walking on a treadmill in North Carolina, and then I made my humanoid robot walk with the signal in Kyoto.” It was a short step for the researchers to envision a paralyzed person walking with the help of a robotic exoskeleton that could be guided by mental activity alone.
"Our brains are very adaptive in the way that we can extend our embodiment to use tools," Cheng says, "as in driving a car or eating with chopsticks. After the Kyoto experiment, we felt certain that the brain could also liberate a paralyzed person to walk using an external body." It was clear that technical advances would be required to allow a relatively compact, lightweight exoskeleton to be assembled, and that visual feedback would not be enough. A sense of touch would be essential for the patient’s emotional comfort as well as control over the exoskeleton. Thus the challenge was to give a paralyzed person, together with the ability to walk, the feeling of touching the ground.
A versatile solution
Upon joining TUM in 2010, Cheng made it a research priority for his institute to improve on the state of the art in tactile sensing for robotic systems. The result, CellulARSkin, provides a framework for a robust and self-organizing surface sensor network. It can be implemented using standard off-the-shelf hardware and thus will benefit from future improvements in miniaturization, performance, and cost.
The basic unit is a flat, six-sided package of electronic components including a low-power-consumption microprocessor as well as sensors that detect pre-touch proximity, pressure, vibration, temperature, and even movement in three-dimensional space. Any number of these individual “cells” can be networked together in a honeycomb pattern, protected in the current prototype by a rubbery skin of molded elastomer.
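The honeycomb topology and damage recovery can be illustrated with a toy model: cells sit on axial hex coordinates, each with up to six neighbours, and the network reconfigures around failed cells. The coordinates and routing below are my own sketch, not CellulARSkin's actual protocol:

```python
from collections import deque

# The six neighbour offsets of a hexagonal cell in axial coordinates.
HEX_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def neighbours(cell):
    q, r = cell
    return [(q + dq, r + dr) for dq, dr in HEX_DIRS]

def reachable(cells, start, failed=frozenset()):
    """Breadth-first search over working neighbours -- a stand-in for
    the network rediscovering its topology after damage."""
    alive = set(cells) - set(failed)
    if start not in alive:
        return set()
    seen, queue = {start}, deque([start])
    while queue:
        cur = queue.popleft()
        for nxt in neighbours(cur):
            if nxt in alive and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# A three-cell strip; knocking out the middle cell isolates the far end.
patch = [(0, 0), (1, 0), (2, 0)]
print(reachable(patch, (0, 0)))                   # all three cells
print(reachable(patch, (0, 0), failed={(1, 0)}))  # only the start cell
```

In a real skin, each cell's local microprocessor would run this kind of discovery cooperatively rather than centrally, which is what makes the network self-organizing.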
"It’s not just the sensor that’s important," Cheng says. "The intelligence of the sensor is even more important." Cooperation among the networked cells, and between the network and a central system, allows CellulARSkin to configure itself for each specific application and to recover automatically from certain kinds of damage. These capabilities offer advantages in enabling smarter, safer interaction of machines with people, and in rapid setup of industrial robots – as is being pursued in the EU-sponsored project "Factory in a Day."

In the Walk Again Project, CellulARSkin is being used in two ways. Integrated with the exoskeleton, for example on the bottoms of the feet, the artificial skin sends signals to tiny motors that vibrate against the patient’s arms. Through training with this kind of indirect sensory feedback, a patient can learn to incorporate the robotic legs and feet into his or her own body schema. CellulARSkin is also being wrapped around parts of the patient’s own body to help the medical team monitor for any signs of distress or discomfort.

A milestone, but “just the beginning”

"I think some people see the World Cup opening as the end," Cheng says, "but it’s really just the beginning. This may be a major milestone, but we have a lot more work to do." He views the event as a public demonstration of what science can do for people. "Also, I see it as a great tribute to all the patients’ hard work and their bravery!"

"All systems go" for a paralyzed person to kick off the World Cup

The Walk Again Project is an international collaboration of more than one hundred scientists, led by Prof. Miguel Nicolelis of Duke University and the International Institute for Neurosciences of Natal, Brazil. Prof. Gordon Cheng, head of the Institute for Cognitive Systems at the Technische Universität München (TUM), is a leading partner.

Eight Brazilian patients, men and women between 20 and 40 years of age who are paralyzed from the waist down, have been training for months to use the exoskeleton. The system works by recording electrical activity in the patient’s brain, recognizing his or her intention – such as to take a step or kick a ball – and translating that to action. It also gives the patient tactile feedback using sensitive artificial skin created by Cheng’s institute.

The feeling of touching the ground

Inspiration for this so-called CellulARSkin technology – as well as for the Walk Again Project itself – came from a 2008 collaboration. As Cheng sums up that complex and widely reported experiment, “Miguel set up a monkey walking on a treadmill in North Carolina, and then I made my humanoid robot walk with the signal in Kyoto.” It was a short step for the researchers to envision a paralyzed person walking with the help of a robotic exoskeleton that could be guided by mental activity alone.

"Our brains are very adaptive in the way that we can extend our embodiment to use tools," Cheng says, "as in driving a car or eating with chopsticks. After the Kyoto experiment, we felt certain that the brain could also liberate a paralyzed person to walk using an external body." It was clear that technical advances would be required to allow a relatively compact, lightweight exoskeleton to be assembled, and that visual feedback would not be enough. A sense of touch would be essential for the patient’s emotional comfort as well as control over the exoskeleton. Thus the challenge was to give a paralyzed person, together with the ability to walk, the feeling of touching the ground.

A versatile solution

Upon joining TUM in 2010, Cheng made it a research priority for his institute to improve on the state of the art in tactile sensing for robotic systems. The result, CellulARSkin, provides a framework for a robust and self-organizing surface sensor network. It can be implemented using standard off-the-shelf hardware and thus will benefit from future improvements in miniaturization, performance, and cost.

The basic unit is a flat, six-sided package of electronic components including a low-power-consumption microprocessor as well as sensors that detect pre-touch proximity, pressure, vibration, temperature, and even movement in three-dimensional space. Any number of these individual “cells” can be networked together in a honeycomb pattern, protected in the current prototype by a rubbery skin of molded elastomer.

"It’s not just the sensor that’s important," Cheng says. "The intelligence of the sensor is even more important." Cooperation among the networked cells, and between the network and a central system, allows CellulARSkin to configure itself for each specific application and to recover automatically from certain kinds of damage. These capabilities offer advantages in enabling smarter, safer interaction of machines with people, and in rapid setup of industrial robots – as is being pursued in the EU-sponsored project "Factory in a Day."
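The article does not publish CellulARSkin's actual algorithms, but the self-organization and damage-recovery idea can be illustrated with a toy model. Everything here is an assumption for illustration: the class names, the neighbor links standing in for the honeycomb wiring, and the simple neighbor-averaging rule used to paper over a failed cell.

```python
class SkinCell:
    """Toy model of one artificial-skin cell: local sensor state plus
    links to its neighbors (up to six in a honeycomb layout)."""

    def __init__(self, cell_id):
        self.cell_id = cell_id
        self.neighbors = []   # adjacent SkinCell objects, set during self-configuration
        self.alive = True     # False once the cell stops responding
        self.pressure = 0.0   # latest local pressure reading


def read_pressure(cell):
    """Return the cell's reading, or estimate it from live neighbors if the
    cell has failed -- a crude stand-in for the network's ability to keep
    delivering a usable surface map despite local damage."""
    if cell.alive:
        return cell.pressure
    live = [n for n in cell.neighbors if n.alive]
    if not live:
        return None  # nothing recoverable for this patch of skin
    return sum(n.pressure for n in live) / len(live)
```

In this sketch a dead cell's patch is reconstructed by averaging its live neighbors, so the central system still sees a continuous pressure map; the real system's recovery behavior is certainly more sophisticated.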

In the Walk Again Project, CellulARSkin is being used in two ways. Integrated with the exoskeleton, for example on the bottoms of the feet, the artificial skin sends signals to tiny motors that vibrate against the patient’s arms. Through training with this kind of indirect sensory feedback, a patient can learn to incorporate the robotic legs and feet into his or her own body schema. CellulARSkin is also being wrapped around parts of the patient’s own body to help the medical team monitor for any signs of distress or discomfort.

A milestone, but “just the beginning”

"I think some people see the World Cup opening as the end," Cheng says, "but it’s really just the beginning. This may be a major milestone, but we have a lot more work to do." He views the event as a public demonstration of what science can do for people. "Also, I see it as a great tribute to all the patients’ hard work and their bravery!"

Filed under BMI exoskeleton robotics Walk Again Project CellulARSkin neuroscience science

160 notes

That Sounds Familiar, But Why?

When it comes to familiarity, a slew of memories, including seemingly unrelated ones, can come flooding into the brain, according to mathematical theories called global similarity models.


After conducting an fMRI study on memory and categorization, researchers including a Texas Tech University psychologist have shown for the first time that these mathematical models seem to correctly explain processing in the medial temporal lobes, a region of the brain associated with long-term memory that is disrupted by memory disorders like Alzheimer’s disease.

The findings were published in The Journal of Neuroscience.

Tyler Davis, assistant director of Texas Tech’s Neuroimaging Institute and an assistant professor of psychology, specializes in neurobiological approaches to learning and memory. He was part of a team that delved into global similarity models.

“Since at least the 1980s, scientists researching memory have believed that when a person finds someone’s face or a new experience familiar, that person is not simply retrieving a memory of only this previous experience, but memories of many other related and unrelated experiences as well,” Davis said. “Formal mathematical theories of memory called global similarity models suggest that when we judge familiarity, we match an experience, such as a face or a trip to a restaurant, to all of the memories that we have stored in our brains. Our recent work using fMRI suggests these models are correct.”

People may believe when they see someone’s familiar face or take a trip to a familiar restaurant, they only activate the most similar or recent memories for comparison. However, Davis said this is not the case. According to global similarity models, the feeling of familiarity for the taste of brisket at a particular restaurant draws on a spectrum of memories that a person has stored in his or her brain.

Eating the brisket can activate memories not only of a previous trip to that restaurant, but also of the décor, eating brisket at a similar restaurant, what that person’s home-cooked brisket tastes like and even seemingly tangential memories such as a recent trip to another city.

“In terms of global similarity theories and our new findings, the important thing is when you are judging familiarity, your brain doesn’t just retrieve the most relevant memories but many other memories as well,” Davis said. “This seems counter-intuitive to how memory feels. We often feel like we are just retrieving that previous trip to that one particular restaurant when we are asked whether we’d been there before, but there is a lot of behavioral evidence that we activate many other memories as well when we judge familiarity.”

This does not mean that every memory we have stored contributes to familiarity in the same way. The more similar a previous memory is to the current experience, the more it will contribute to judgments of familiarity.

In terms of the brisket example, Davis said, previous trips to the restaurant will influence familiarity more than dissimilar memories, such as the recent trip out of town. Even these less-related experiences, however, can have a measurable effect on judgments of familiarity.
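The core computation these models describe can be sketched in a few lines: familiarity is a sum over all stored memories, each weighted by its similarity to the current experience. The exponential-decay similarity function below is a common textbook choice for such models, not necessarily the one used in this study, and the memory vectors are purely illustrative.

```python
import math


def familiarity(probe, memories, c=1.0):
    """Summed-similarity familiarity: every stored memory contributes,
    with closer (more similar) memories contributing more.  Similarity
    falls off exponentially with distance, scaled by c."""

    def distance(a, b):
        # Euclidean distance between two feature vectors
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    return sum(math.exp(-c * distance(probe, m)) for m in memories)


# A probe near several stored memories yields high summed similarity
# (feels familiar); a probe far from everything yields low similarity.
memories = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]
near = familiarity((0.05, 0.0), memories)
far = familiarity((9.0, 9.0), memories)
```

Note that even the distant memory at (5.0, 5.0) contributes a small nonzero amount to the probe's familiarity, which is exactly the "many other memories as well" behavior Davis describes.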

In his recent research, Davis and his colleagues used fMRI to examine how memory similarity, as measured through activation patterns in the medial temporal lobes, related to behavioral measures of familiarity.

“We found that people’s memory for the items in our experiments was related to their activation patterns in the medial temporal lobes in a manner that was anticipated by mathematical global similarity models,” Davis said. “The more similar the activation pattern for an item was to all of the other activation patterns, the more strongly people remembered it. This is consistent with global similarity models, which suggest that the items that are most similar to all other items stored in memory will be most familiar.”

The findings suggest that global similarity models may have a neurobiological basis, he said. This is evidence that similarity, in terms of neural processing, may impact memory. People may find things familiar not just because they are identical to things they’ve previously experienced, but because they are similar to a number of things they’ve previously experienced.

(Source: today.ttu.edu)

Filed under neuroimaging global similarity models memory neuroscience science

80 notes

Game Technology Teaches Mice and Men to Hear Better in Noisy Environments

The ability to hear soft speech in a noisy environment is difficult for many and nearly impossible for the 48 million people in the United States living with hearing loss. Researchers from Massachusetts Eye and Ear, Harvard Medical School, and Harvard University programmed a new type of game that trained both mice and humans to enhance their ability to discriminate soft sounds in noisy backgrounds. Their findings will be published in PNAS Online Early Edition the week of June 9-13, 2014.


In the experiment, adult humans and mice with normal hearing were trained on a rudimentary ‘audiogame’ inspired by sensory foraging behavior that required them to discriminate changes in the loudness of a tone presented in a moderate level of background noise. Their findings suggest new therapeutic options for clinical populations that receive little benefit from conventional sensory rehabilitation strategies.

“Like the children’s game ‘hot and cold’, our game provided instantaneous auditory feedback that allowed our human and mouse subjects to home in on the location of a hidden target,” said senior author Daniel Polley, Ph.D., director of the Mass. Eye and Ear’s Amelia Peabody Neural Plasticity Unit of the Eaton-Peabody Laboratories and assistant professor of otology and laryngology at Harvard Medical School. “Over the course of training, both species learned adaptive search strategies that allowed them to more efficiently convert noisy, dynamic audio cues into actionable information for finding the target. To our surprise, human subjects who mastered this simple game over the course of 30 minutes of daily training for one month exhibited a generalized improvement in their ability to understand speech in noisy background conditions. Comparable improvements in the processing of speech in high levels of background noise were not observed for control subjects who heard the sounds of the game but did not actually play the game.”
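The ‘hot and cold’ mechanic can be sketched as a simple feedback loop: the cue gets stronger as the player approaches a hidden target, but is embedded in background noise, so the player must learn to extract a small signal difference from noisy cues. All specifics below (one-dimensional search, the 1/(1+distance) cue profile, Gaussian noise, step size) are illustrative assumptions, not the actual game's design.

```python
import random


def feedback_level(position, target, noise_sd=0.1, rng=random):
    """'Hot and cold' cue: stronger as the probe nears the hidden target,
    with Gaussian background noise added on top."""
    distance = abs(position - target)
    signal = 1.0 / (1.0 + distance)
    return signal + rng.gauss(0.0, noise_sd)


def play(target, start=0.0, step=0.5, trials=40, noise_sd=0.1, rng=random):
    """Greedy search strategy: sample the cue on both sides of the current
    position and move toward whichever side sounds 'hotter'."""
    pos = start
    for _ in range(trials):
        left = feedback_level(pos - step, target, noise_sd, rng)
        right = feedback_level(pos + step, target, noise_sd, rng)
        pos += step if right > left else -step
    return pos
```

Raising `noise_sd` makes the cue difference harder to detect far from the target, which mirrors the training challenge the subjects faced: converting noisy, dynamic cues into a reliable search direction.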

The researchers recorded the electrical activity of neurons in auditory regions of the mouse cerebral cortex to gain some insight into how training might have boosted the ability of the brain to separate signal from noise. They found that training substantially altered the way the brain encoded sound.

In trained mice, many neurons became highly sensitive to faint sounds that signaled the location of the target in the game. Moreover, neurons displayed increased resistance to noise suppression; they retained an ability to encode faint sounds even under conditions of elevated background noise.

“Again, changes of this ilk were not observed in control mice that watched (and listened to) their counterparts playing the game. Active participation in the training was required; passive listening was not enough,” Dr. Polley said.

These findings illustrate the utility of brain training exercises that are inspired by careful neuroscience research. “When combined with conventional assistive devices such as hearing aids or cochlear implants, ‘audiogames’ of the type we describe here may be able to provide the hearing impaired with an improved ability to reconnect to the auditory world. Of particular interest is the finding that brain training improved speech processing in noisy backgrounds – a listening environment where conventional hearing aids offer limited benefit,” concluded Dr. Jonathon Whitton, lead author on the paper. Dr. Whitton is a principal investigator at the Amelia Peabody Neural Plasticity Unit and affiliated with the Program in Speech Hearing Bioscience and Technology, Harvard–Massachusetts Institute of Technology Division of Health Sciences and Technology.

(Source: masseyeandear.org)

Filed under hearing hearing loss auditory cortex foraging noise suppression neuroscience science
