Neuroscience

Articles and news from the latest research reports.

Neuroscientists identify key role of language gene
Neuroscientists have found that a gene mutation that arose more than half a million years ago may be key to humans’ unique ability to produce and understand speech.
Researchers from MIT and several European universities have shown that the human version of a gene called Foxp2 makes it easier to transform new experiences into routine procedures. When they engineered mice to express humanized Foxp2, the mice learned to run a maze much more quickly than normal mice.
The findings suggest that Foxp2 may help humans with a key component of learning language — transforming experiences, such as hearing the word “glass” when we are shown a glass of water, into a nearly automatic association of that word with objects that look and function like glasses, says Ann Graybiel, an MIT Institute Professor, member of MIT’s McGovern Institute for Brain Research, and a senior author of the study.
“This really is an important brick in the wall saying that the form of the gene that allowed us to speak may have something to do with a special kind of learning, which takes us from having to make conscious associations in order to act to a nearly automatic-pilot way of acting based on the cues around us,” Graybiel says.
Wolfgang Enard, a professor of anthropology and human genetics at Ludwig-Maximilians University in Germany, is also a senior author of the study, which appears in the Proceedings of the National Academy of Sciences this week. The paper’s lead authors are Christiane Schreiweis, a former visiting graduate student at MIT, and Ulrich Bornschein of the Max Planck Institute for Evolutionary Anthropology in Germany.
All animal species communicate with each other, but humans have a unique ability to generate and comprehend language. Foxp2 is one of several genes that scientists believe may have contributed to the development of these linguistic skills. The gene was first identified in a group of family members who had severe difficulties in speaking and understanding speech, and who were found to carry a mutated version of the Foxp2 gene.
In 2009, Svante Pääbo, director of the Max Planck Institute for Evolutionary Anthropology, and his team engineered mice to express the human form of the Foxp2 gene, which encodes a protein that differs from the mouse version by only two amino acids. His team found that these mice had longer dendrites — the slender extensions that neurons use to communicate with each other — in the striatum, a part of the brain implicated in habit formation. They were also better at forming new synapses, or connections between neurons.
Pääbo, who is also an author of the new PNAS paper, and Enard enlisted Graybiel, an expert in the striatum, to help study the behavioral effects of replacing Foxp2. They found that the mice with humanized Foxp2 were better at learning to run a T-shaped maze, in which the mice must decide whether to turn left or right at a T-shaped junction, based on the texture of the maze floor, to earn a food reward.
The first phase of this type of learning requires using declarative memory, or memory for events and places. Over time, these memory cues become embedded as habits and are encoded through procedural memory — the type of memory necessary for routine tasks, such as driving to work every day or hitting a tennis forehand after thousands of practice strokes.
Using another type of maze called a cross-maze, Schreiweis and her MIT colleagues were able to test the mice’s ability in each type of memory alone, as well as the interaction of the two types. They found that the mice with humanized Foxp2 performed the same as normal mice when just one type of memory was needed, but their performance was superior when the learning task required them to convert declarative memories into habitual routines. The key finding was therefore that the humanized Foxp2 gene makes it easier to turn mindful actions into behavioral routines.
The protein produced by Foxp2 is a transcription factor, meaning that it turns other genes on and off. In this study, the researchers found that Foxp2 appears to turn on genes involved in the regulation of synaptic connections between neurons. They also found enhanced dopamine activity in a part of the striatum that is involved in forming procedures. In addition, the neurons of some striatal regions could be turned off for longer periods in response to prolonged activation — a phenomenon known as long-term depression, which is necessary for learning new tasks and forming memories.
Together, these changes help to “tune” the brain differently to adapt it to speech and language acquisition, the researchers believe. They are now further investigating how Foxp2 may interact with other genes to produce its effects on learning and language.
This study “provides new ways to think about the evolution of Foxp2 function in the brain,” says Genevieve Konopka, an assistant professor of neuroscience at the University of Texas Southwestern Medical Center who was not involved in the research. “It suggests that human Foxp2 facilitates learning that has been conducive for the emergence of speech and language in humans. The observed differences in dopamine levels and long-term depression in a region-specific manner are also striking and begin to provide mechanistic details of how the molecular evolution of one gene might lead to alterations in behavior.”

Filed under Foxp2 gene mutation language language acquisition speech learning neuroscience science

Device to help people with Parkinson’s disease communicate better now available
SpeechVive Inc. announced Wednesday (Sept. 10) the commercial launch of the SpeechVive device intended to help people with a soft voice due to Parkinson’s disease speak more loudly and communicate more effectively.
The device is now available to try as a demo through the National Parkinson’s Disease Foundation’s Centers of Excellence prior to purchasing. People who suffer from a soft voice due to Parkinson’s disease can make an appointment at any of these centers: the Muhammad Ali Parkinson Center at Barrow Neurological Institute in Phoenix; the University of Florida, Gainesville, Florida; University of North Carolina, Chapel Hill, North Carolina; Struthers Parkinson’s Center, Minneapolis, Minnesota; and Baylor College of Medicine, Houston, Texas.
"We are providing demo units and training at no cost to as many of the National Parkinson’s Centers of Excellence as are interested in offering SpeechVive in conjunction with or as an alternative to speech therapy," said Steve Mogensen, president and CEO of SpeechVive. "We also are offering the SpeechVive units and training to professionals at Veterans Administration Medical Centers across the country. The first VAMC to offer SpeechVive is in Cincinnati, Ohio."
The SpeechVive device also is available to try at the M.D. Steer Speech and Hearing Clinic at Purdue University in West Lafayette, Indiana.
The technology was developed over the past decade by Jessica Huber, associate professor in Purdue’s Department of Speech, Language and Hearing Sciences and licensed through the Purdue Office of Technology Commercialization. The focus of Huber’s research is the development and testing of behavioral treatments to improve communication and quality of life in older adults and people with degenerative motor diseases.
SpeechVive reduces the speech impairments associated with Parkinson’s disease, which cause people with the disease to speak in a hushed, whispery voice and to produce mumbled speech. Parkinson’s disease commonly impairs the ability to communicate effectively.
"The clinical data we have collected over the past four years demonstrates that SpeechVive is effective in 90 percent of the people using the device," Huber said. "I am proud of the improvements in communication and quality of life demonstrated in our clinical studies. I look forward to seeing the device on the market so that more people with Parkinson’s disease will have access to it."
More than 1.5 million people in the United States are diagnosed with Parkinson’s disease, making it one of the most common degenerative neurological diseases. About 89 percent of those with the disease have voice-related changes affecting how loudly they speak, and at least 45 percent have speech-related changes affecting how clearly they speak.

Filed under parkinson's disease speech speechvive communication neuroscience science

Months before their first words, babies’ brains rehearse speech mechanics
Infants can tell the difference between sounds of all languages until about 8 months of age when their brains start to focus only on the sounds they hear around them. It’s been unclear how this transition occurs, but social interactions and caregivers’ use of exaggerated “parentese” style of speech seem to help.
University of Washington research in 7- and 11-month-old infants shows that speech sounds stimulate areas of the brain that coordinate and plan motor movements for speech.
The study, published July 14 in the Proceedings of the National Academy of Sciences, suggests that baby brains start laying down the groundwork of how to form words long before they actually begin to speak, and this may affect the developmental transition.
“Most babies babble by 7 months, but don’t utter their first words until after their first birthdays,” said lead author Patricia Kuhl, who is the co-director of the UW’s Institute for Learning and Brain Sciences. “Finding activation in motor areas of the brain when infants are simply listening is significant, because it means the baby brain is engaged in trying to talk back right from the start and suggests that 7-month-olds’ brains are already trying to figure out how to make the right movements that will produce words.”
Kuhl and her research team believe this practice at motor planning contributes to the transition when infants become more sensitive to their native language.
The results emphasize the importance of talking to kids during social interactions even if they aren’t talking back yet.
“Hearing us talk exercises the action areas of infants’ brains, going beyond what we thought happens when we talk to them,” Kuhl said. “Infants’ brains are preparing them to act on the world by practicing how to speak before they actually say a word.”
In the experiment, infants sat in a brain scanner that measures brain activation through a noninvasive technique called magnetoencephalography. Nicknamed MEG, the brain scanner resembles an egg-shaped vintage hair dryer and is completely safe for infants. The Institute for Learning and Brain Sciences was the first in the world to use such a tool to study babies while they engaged in a task.
The 57 babies, aged either 7 months or 11 to 12 months, each listened to a series of native and foreign language syllables such as “da” and “ta” as researchers recorded brain responses. They listened to sounds in English and in Spanish.
The researchers observed brain activity in an auditory area of the brain called the superior temporal gyrus, as well as in Broca’s area and the cerebellum, cortical regions responsible for planning the motor movements required for producing speech.
This pattern of brain activation occurred for sounds in the 7-month-olds’ native language (English) as well as in a non-native language (Spanish), showing that at this early age infants are responding to all speech sounds, whether or not they have heard the sounds before.
In the older infants, brain activation was different. By 11-12 months, infants’ brains increase motor activation to the non-native speech sounds relative to native speech, which the researchers interpret as showing that it takes more effort for the baby brain to predict which movements create non-native speech. This reflects an effect of experience between 7 and 11 months, and suggests that activation in motor brain areas is contributing to the transition in early speech perception.
The study has social implications, suggesting that the slow and exaggerated parentese speech – “Hiiiii! How are youuuuu?” – may actually prompt infants to try to synthesize utterances themselves and imitate what they heard, uttering something like “Ahhh bah bah baaah.”
“Parentese is very exaggerated, and when infants hear it, their brains may find it easier to model the motor movements necessary to speak,” Kuhl said.

Filed under infants speech speech perception language development brain activity psychology neuroscience science

Infants Benefit from Implants with More Frequency Sounds
A new study from a UT Dallas researcher demonstrates the importance of considering developmental differences when creating programs for cochlear implants in infants.
Dr. Andrea Warner-Czyz, assistant professor in the School of Behavioral and Brain Sciences, recently published the research in the Journal of the Acoustical Society of America.
“This is the first study to show that infants process degraded speech that simulates a cochlear implant differently than older children and adults, which begs for new signal processing strategies to optimize the sound delivered to the cochlear implant for these young infants,” Warner-Czyz said.
Cochlear implants, which are surgically placed in the inner ear, provide the ability to hear for some people with severe to profound hearing loss. Because of technological and biological limitations, people with cochlear implants hear differently than those with normal hearing.
Think of a piano, which typically has 88 keys with each representing a note. The technology in a cochlear implant can’t play every key, but instead breaks them into groups, or channels. For example, a cochlear implant with 22 channels would put four notes into each group. If any keys within a group are played, all four notes are activated. Although the general frequency can be heard, the fine detail of the individual notes is lost.
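The grouping arithmetic in this analogy can be sketched in a few lines of Python. The key-to-channel mapping below is purely illustrative of the piano analogy, not an actual implant’s frequency-allocation scheme.

```python
# Illustrative sketch of the piano analogy: 88 keys grouped into
# 22 channels of 4 adjacent notes each.
NUM_KEYS = 88
NUM_CHANNELS = 22
KEYS_PER_CHANNEL = NUM_KEYS // NUM_CHANNELS  # 4 notes per channel

def channel_for_key(key):
    """Map a piano key (0-87) to its channel (0-21)."""
    return key // KEYS_PER_CHANNEL

# Any key within a group activates the same channel, so the fine
# detail distinguishing the individual notes is lost.
assert channel_for_key(0) == channel_for_key(3)   # same group, same channel
assert channel_for_key(0) != channel_for_key(4)   # next group, next channel
```

With more channels, each group covers fewer notes, so less fine frequency detail is lost; that trade-off is what the 16- versus 32-channel comparison below probes.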
Two of the major components necessary for understanding speech are the rhythm and the frequencies of the sound. Timing remains fairly accurate in cochlear implants, but some frequencies disappear as they are grouped.
Adding more than eight or nine channels does not necessarily improve speech perception in adults. This study is one of the first to examine how this signal degradation affects speech perception in infants.
Infants pay greater attention to new sounds, so researchers compared how long a group of 6-month-olds focused on a speech sound they were familiarized with — “tea” — to a new speech sound, “ta.”
The infants spent more time paying attention to “ta,” demonstrating they could hear the difference between the two. Researchers repeated the experiment with speech sounds that were altered to sound as if they had been processed by a 16- or 32-channel cochlear implant.
The infants responded to the sounds that imitated a 32-channel implant the same as when they heard the normal sounds. But the infants did not show a difference with the sounds that imitated a 16-channel implant.
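The looking-time logic of this comparison can be sketched as a simple novelty-preference computation. All of the numbers, condition names, and the 0.5-second threshold below are invented for illustration; they are not the study’s data or criteria.

```python
from statistics import mean

# Illustrative looking times in seconds (not the study's data).
# Longer looking at the novel "ta" than the familiar "tea"
# indicates the infant discriminated the two sounds.
conditions = {
    "unprocessed": {"familiar": [4.1, 3.8, 4.4], "novel": [6.2, 5.9, 6.5]},
    "32-channel":  {"familiar": [4.0, 4.3, 3.9], "novel": [6.0, 5.7, 6.1]},
    "16-channel":  {"familiar": [4.2, 4.1, 4.0], "novel": [4.3, 4.0, 4.2]},
}

def novelty_preference(times):
    """Difference in mean looking time, novel minus familiar."""
    return mean(times["novel"]) - mean(times["familiar"])

THRESHOLD = 0.5  # illustrative discrimination criterion, in seconds
for name, times in conditions.items():
    discriminated = novelty_preference(times) > THRESHOLD
    print(f"{name}: preference {novelty_preference(times):.2f}, "
          f"discriminated={discriminated}")
```

In this toy version, the 16-channel condition yields essentially no novelty preference, mirroring the paper’s finding that infants did not show a difference with the 16-channel simulation.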
“These results suggest that 6-month-old infants need less distortion and more frequency information than older children and adults to discriminate speech,” Warner-Czyz said. “Infants are not just little versions of children or adults. They do not have the experience with listening or language to fill in the gaps, so they need more complete speech information to maximize their communication outcomes.”
Clinicians need to consider these developmental differences when working with very young cochlear implant recipients, Warner-Czyz said.

Filed under implants cochlear implants speech speech perception hearing neuroscience science

In recognizing speech sounds, the brain does not work the way a computer does

How does the brain decide whether or not something is correct? When it comes to the processing of spoken language – particularly whether or not certain sound combinations are allowed in a language – the common theory has been that the brain applies a set of rules to determine whether combinations are permissible. Now the work of a Massachusetts General Hospital (MGH) investigator and his team supports a different explanation – that the brain decides whether or not a combination is allowable based on words that are already known. The findings may lead to better understanding of how brain processes are disrupted in stroke patients with aphasia and also address theories about the overall operation of the brain. 

"Our findings have implications for the idea that the brain acts as a computer, which would mean that it uses rules – the equivalent of software commands – to manipulate information. Instead it looks like at least some of the processes that cognitive psychologists and linguists have historically attributed to the application of rules may instead emerge from the association of speech sounds with words we already know," says David Gow, PhD, of the MGH Department of Neurology.

"Recognizing words is tricky – we have different accents and different, individual vocal tracts; so the way individuals pronounce particular words always sounds a little different," he explains. "The fact that listeners almost always get those words right is really bizarre, and figuring out why that happens is an engineering problem. To address that, we borrowed a lot of ideas from other fields and people to create powerful new tools to investigate, not which parts of the brain are activated when we interpret spoken sounds, but how those areas interact." 

Human beings speak more than 6,000 distinct languages, and each language allows some ways to combine speech sounds into sequences but prohibits others. Although individuals are not usually conscious of these restrictions, native speakers have a strong sense of whether or not a combination is acceptable. 

“Most English speakers could accept ‘doke’ as a reasonable English word, but not ‘lgef,’” Gow explains. “When we hear a word that does not sound reasonable, we often mishear or repeat it in a way that makes it sound more acceptable. For example, the English language does not permit words that begin with the sounds ‘sr-,’ but that combination is allowed in several languages, including Russian. As a result, most English speakers pronounce the Sanskrit word ‘sri’ – as in the name of the island nation Sri Lanka – as ‘shri,’ a combination of sounds found in English words like shriek and shred.”

Gow’s method of investigating how the human brain perceives and distinguishes among elements of spoken language combines electroencephalography (EEG), which records electrical brain activity; magnetoencephalography (MEG), which measures the subtle magnetic fields produced by brain activity; and magnetic resonance imaging (MRI), which reveals brain structure. Data gathered with those technologies are then analyzed using Granger causality, a method developed to determine cause-and-effect relationships among economic events, along with a Kalman filter, a procedure used to navigate missiles and spacecraft by predicting where something will be in the future. The results are “movies” of brain activity showing not only where and when activity occurs but also how signals move across the brain on a millisecond-by-millisecond level – information no other research team has produced.
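The core idea of Granger causality is simple to state: signal X “Granger-causes” signal Y if X’s past improves the prediction of Y beyond what Y’s own past already provides. A toy bivariate version using ordinary least squares – a sketch of the concept only, not the study’s far more elaborate Kalman-filter-based pipeline:

```python
import numpy as np

def granger_influence(x, y, order=5):
    """Log-ratio of residual variance when predicting y from its own past
    vs. its own past plus the past of x. Larger values mean x's history
    helps predict y (toy illustration, not the study's actual estimator)."""
    n = len(y)
    target = y[order:]
    # Lagged design matrices: y's own past, then y's past plus x's past.
    own = np.column_stack([y[order - k: n - k] for k in range(1, order + 1)])
    full = np.column_stack([own] + [x[order - k: n - k] for k in range(1, order + 1)])

    def rss(design):
        design = np.column_stack([np.ones(len(target)), design])
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        resid = target - design @ beta
        return resid @ resid

    return np.log(rss(own) / rss(full))

# Toy data: y is mostly a delayed copy of x, so the influence x -> y
# should come out much larger than y -> x.
rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
y = 0.8 * np.roll(x, 2) + 0.2 * rng.standard_normal(2000)
```

Applied to brain recordings, the same comparison is run between time series from pairs of brain regions, which is what lets the analysis say not just which areas are active but which areas drive which.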

In a paper published earlier this year in the online journal PLOS ONE, Gow and his co-author Conrad Nied, now a PhD candidate at the University of Washington, described their investigation of how the neural processes involved in the interpretation of sound combinations differ depending on whether or not a combination would be permitted in the English language. Their goal was to determine which of three potential mechanisms is actually involved in the way humans “repair” impermissible sound combinations – the application of rules about sound combinations, the frequency with which particular combinations have been encountered, or whether the sound combinations occur in known words. 

The study enrolled 10 adult speakers of American English who listened to a series of recordings of spoken nonsense syllables that began with sounds ranging from “s” to “shl” – a combination not found at the beginning of English words – and indicated with a button push whether they heard an initial “s” or “sh.” EEG and MEG readings were taken during the task, and the results were projected onto MR images taken separately. Analysis focused on 22 regions of interest where brain activation increased during the task, with particular attention to those regions’ interactions with an area previously shown to play a role in identifying speech sounds.

While the results revealed complex patterns of interaction between the measured regions, the areas that had the greatest effect on regions that identify speech sounds were regions involved in the representation of words, not those responsible for rules. “We found that it’s the areas of the brain involved in representing the sound of words, not sounds in isolation or abstract rules, that send back the important information. And the interesting thing is that the words you know give you the rules to follow. You want to put sounds together in a way that’s easy for you to hear and to figure out what the other person is saying,” explains Gow, who is a clinical instructor in Neurology at Harvard Medical School and a professor of Psychology at Salem State University. 

Filed under language speech neuroimaging brain activity linguistics psychology neuroscience science

226 notes

People Rely on What They Hear to Know What They’re Saying

You know what you’re going to say before you say it, right? Not necessarily, research suggests. A study from researchers at Lund University in Sweden shows that auditory feedback plays an important role in helping us determine what we’re saying as we speak. The study is published in Psychological Science, a journal of the Association for Psychological Science.

“Our results indicate that speakers listen to their own voices to help specify the meaning of what they are saying,” says researcher Andreas Lind of Lund University, lead author of the study.

Theories about how we produce speech often assume that we start with a clear, preverbal idea of what to say that goes through different levels of encoding to finally become an utterance.

But the findings from this study support an alternative model in which speech is more than just a dutiful translation of this preverbal message:

“These findings suggest that the meaning of an utterance is not entirely internal to the speaker, but that it is also determined by the feedback we receive from our utterances, and from the inferences we draw from the wider conversational context,” Lind explains.

For the study, Lind and colleagues recruited Swedish participants to complete a classic Stroop test, which provided a controlled linguistic setting. During the Stroop test, participants were presented with various color words (e.g., “red” or “green”) one at a time on a screen and were tasked with naming the color of the font that each word was printed in, rather than the color that the word itself signified.
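The Stroop design is easy to reproduce: each trial pairs a color word with a font color, and the correct response is always the font color, never the word itself. A minimal trial generator, with illustrative English stimuli (the study used Swedish words):

```python
import random

COLORS = ["red", "green", "blue", "yellow"]

def make_stroop_trials(n, seed=0):
    """Build n Stroop trials. The word names one color, the font shows
    another (or the same); participants must name the FONT color."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n):
        word = rng.choice(COLORS)
        font = rng.choice(COLORS)
        trials.append({
            "word": word,               # the printed color word
            "font": font,               # the ink color to be named
            "congruent": word == font,  # easy trial when word and ink match
            "correct_response": font,
        })
    return trials
```

Because each trial has a single unambiguous correct answer, the design gives the experimenters a closed set of expected utterances – which is precisely what makes real-time word substitution feasible.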

The participants wore headphones that provided real-time auditory feedback as they took the test — unbeknownst to them, the researchers had rigged the feedback using a voice-triggered playback system. This system allowed the researchers to substitute specific phonologically similar but semantically distinct words (“grey”, “green”) in real time, a technique they call “Real-time Speech Exchange” or RSE.

Data from the 78 participants indicated that when the timing of the insertions was right, only about one third of the exchanges were detected.

On many of the non-detected trials, when asked to report what they had said, participants reported the word they had heard through feedback, rather than the word they had actually said. Because accuracy on the task was actually very high, the manipulated feedback effectively led participants to believe that they had made an error and said the wrong word.

Overall, Lind and colleagues found that participants accepted the manipulated feedback as having been self-produced on about 85% of the non-detected trials.

Together, these findings suggest that our understanding of our own utterances, and our sense of agency for those utterances, depend to some degree on inferences we make after we’ve made them.

Most surprising, perhaps, is the fact that while participants received several indications about what they actually said — from their tongue and jaw, from sound conducted through the bone, and from their memory of the correct alternative on the screen — they still treated the manipulated words as though they were self-produced.

This suggests, says Lind, that the effect may be even more pronounced in everyday conversation, which is less constrained and more ambiguous than the context offered by the Stroop test.

“In future studies, we want to apply RSE to situations that are more social and spontaneous — investigating, for example, how exchanged words might influence the way an interview or conversation develops,” says Lind.

“While this is technically challenging to execute, it could potentially tell us a great deal about how meaning and communicative intentions are formed in natural discourse,” he concludes.

Filed under speech speech perception monitoring cognitive processing psychology neuroscience science

161 notes

Speech means using both sides of our brain

We use both sides of our brain for speech, according to researchers at New York University and NYU Langone Medical Center – a finding that alters previous conceptions about neurological activity. The results, which appear in the journal Nature, also offer insights into addressing speech impairments caused by stroke or injury and lay the groundwork for better rehabilitation methods.

“Our findings upend what has been universally accepted in the scientific community—that we use only one side of our brains for speech,” says Bijan Pesaran, an associate professor in NYU’s Center for Neural Science and the study’s senior author. “In addition, now that we have a firmer understanding of how speech is generated, our work toward finding remedies for speech afflictions is much better informed.”

Many in the scientific community have posited that both speech and language are lateralized—that is, we use only one side of our brains for speech, which involves listening and speaking, and language, which involves constructing and understanding sentences. However, the conclusions pertaining to speech generally stem from studies that rely on indirect measurements of brain activity, raising questions about characterizing speech as lateralized.

To address this matter, the researchers directly examined the connection between speech and the neurological process.

Specifically, the study relied on data collected at NYU ECoG, a center where brain activity is recorded directly from patients implanted with specialized electrodes placed directly inside and on the surface of the brain while the patients are performing sensory and cognitive tasks. Here, the researchers examined brain functions of patients suffering from epilepsy by using methods that coincided with their medical treatment.

“Recordings directly from the human brain are a rare opportunity,” says Thomas Thesen, director of the NYU ECoG Center and co-author of the study.

“As such, they offer unparalleled spatial and temporal resolution over other imaging technologies to help us achieve a better understanding of complex and uniquely human brain functions, such as language,” adds Thesen, an assistant professor at NYU Langone.

In their examination, the researchers tested the parts of the brain that were used during speech. Here, the study’s subjects were asked to repeat two “non-words”—“kig” and “pob.” Using non-words as a prompt to gauge neurological activity, the researchers were able to isolate speech from language.

An analysis of brain activity as patients engaged in speech tasks showed that both sides of the brain were used – that is, speech is, in fact, bilateral.

“Now that we have greater insights into the connection between the brain and speech, we can begin to develop new ways to aid those trying to regain the ability to speak after a stroke or injuries resulting in brain damage,” observes Pesaran. “With this greater understanding of the speech process, we can retool rehabilitation methods in ways that isolate speech recovery and that don’t involve language.”

(Source: nyu.edu)

Filed under speech language brain activity neuroimaging neuroscience science

288 notes

Babbling babies – responding to one-on-one ‘baby talk’ – master more words

Common advice to new parents is that the more words babies hear the faster their vocabulary grows. Now new findings show that what spurs early language development isn’t so much the quantity of words as the style of speech and social context in which speech occurs.

Researchers at the University of Washington and University of Connecticut examined thousands of 30-second snippets of verbal exchanges between parents and babies. They measured parents’ use of a regular speaking voice versus an exaggerated, animated baby talk style, and whether speech occurred one-on-one between parent and child or in group settings.

“What our analysis shows is that the prevalence of baby talk in one-on-one conversations with children is linked to better language development, both concurrent and future,” said Patricia Kuhl, co-author and co-director of UW’s Institute for Learning & Brain Sciences.

The more parents exaggerated vowels – for example, “How are youuuuu?” – and raised the pitch of their voices, the more the 1-year-olds babbled, and babbling is a forerunner of word production. Baby talk was most effective when a parent spoke with a child one-on-one, without other adults or children around.

“The fact that the infant’s babbling itself plays a role in future language development shows how important the interchange between parent and child is,” Kuhl said.

The findings will be published in an upcoming issue of the journal Developmental Science.

Twenty-six babies about 1 year of age wore vests containing audio recorders that collected sounds from the children’s auditory environment for eight hours a day for four days. The researchers used LENA (“language environment analysis”) software to examine 4,075 30-second intervals of recorded speech. Within those segments, the researchers identified who was talking in each segment, how many people were there, whether baby talk – also known as “parentese” – or regular voice was used, and other variables.
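The coding scheme boils down to a per-segment tally: every 30-second interval gets a register label (parentese vs. regular voice) and a social-context label, and a child's exposure is the share of segments in each cell. A toy sketch with hypothetical labels (not the LENA software's actual output format):

```python
from collections import Counter

# Hypothetical coded segments: (register, context) per 30-second interval.
segments = [
    ("parentese", "one-on-one"),
    ("parentese", "group"),
    ("regular", "one-on-one"),
    ("parentese", "one-on-one"),
    ("regular", "group"),
]

counts = Counter(segments)
total = len(segments)

# Exposure measure of interest: the share of segments that are
# baby talk delivered one-on-one.
share = counts[("parentese", "one-on-one")] / total
print(f"one-on-one parentese: {share:.0%} of segments")  # → 40%
```

Per-child shares computed this way are what get related to the vocabulary scores collected a year later.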

When the babies were 2 years old, parents filled out a questionnaire measuring how many words their children knew. Infants who had heard more baby talk knew more words. In the study, 2-year-olds in families who used the most baby talk in one-on-one settings knew 433 words, on average, compared with the 169 words recognized by 2-year-olds in families who used the least baby talk in one-on-one situations.

The relationship between baby talk and language development held across socioeconomic status, even though only 26 families took part in the study.

“Some parents produce baby talk naturally and they don’t realize they’re benefiting their children,” said first author Nairán Ramírez-Esparza, an assistant psychology professor at the University of Connecticut. “Some families are more quiet, not talking all the time. But it helps to make an effort to talk more.”

Previous studies have focused on the amount of language babies hear, without considering the social context. The new study shows that quality, not quantity, is what matters.

“What this study is adding is that how you talk to children matters. Parentese is much better at developing language than regular speech, and even better if it occurs in a one-on-one interaction,” Ramírez-Esparza said.

Parents can use baby talk when going about everyday activities, saying things like, “Where are your shoooes?,” “Let’s change your diiiiaper,” and “Oh, this tastes goooood!,” emphasizing important words and speaking slowly using a happy tone of voice.

“It’s not just talk, talk, talk at the child,” said Kuhl. “It’s more important to work toward interaction and engagement around language. You want to engage the infant and get the baby to babble back. The more you get that serve and volley going, the more language advances.”

Filed under language development speech learning baby talk psychology neuroscience science

137 notes

Speech recovery after stroke

In right-handed people it sits in the left hemisphere; in left-handed people it (usually) sits in the right: the location of speech production has been known for quite some time. But it is not that simple, says psychologist Gesa Hartwigsen, Professor at Kiel University. In her current publication in the journal Proceedings of the National Academy of Sciences (PNAS), she investigates which areas of the brain are really in charge of speech, and how they interact. Her findings are intended to help patients who have speech production problems, or aphasia, following a stroke.

Comprehending & Speaking

Gesa Hartwigsen and her team started by analysing speech production. They had healthy right-handed participants listen to words and then repeat them. “These were pseudo words such as ‘beudo’. In German, they don’t have any associated meaning. Therefore, when hearing and repeating these words, no areas of the brain with a connection to the meaning of what had been heard were activated,” said Hartwigsen.

The psychologist applied a combination of non-invasive methods – fMRI (functional magnetic resonance imaging) and TMS (transcranial magnetic stimulation) – to deduce what happens in the brain during the test. “We thus proved that the left hemisphere, as expected, was activated during speech production, while the right hemisphere did not actively contribute to language function,” explains Hartwigsen. This is the normal pattern in a healthy brain. From these and other results, scientists had until now concluded that the right hemisphere does not contribute to speech production in the healthy system and is therefore suppressed.

Interfering & Measuring

In a second test, the Kiel University scientists simulated a dysfunction of the brain comparable to a stroke. A magnetic coil delivers a pulse that briefly disrupts the function of the area responsible for producing speech (Broca’s area) in the left hemisphere. This completely harmless method influences the volunteers’ speech production for about 30 to 45 minutes. “During this period, the ability to listen and repeat was tested again. While we observed suppressed activity in the left hemisphere during repetition, with some participants taking longer to repeat the pseudo words, we also found unexpected activity in the right hemisphere,” reports Hartwigsen.

The right hemisphere showed increased activity during pseudoword repetition. The more the activity in the right Broca’s Area increased, the faster the volunteers were able to solve their speech tasks. The right hemisphere also increased its facilitatory influence on the left hemisphere, a finding that was not observed prior to the TMS-induced lesion. “This reaction lends further support to the notion that the right hemisphere area reacts to the dysfunction of the left hemisphere and tries to compensate for the lesion.” Does the right hemisphere, then, have a supporting influence and play an active role in speech production? Until now, the common opinion was that it does not.

Result & Outlook

The findings of Gesa Hartwigsen and her team show an interaction of both hemispheres during speech repetition. When the left hemisphere is suppressed, for example by a stroke, the right hemisphere can actively facilitate speech production. “By stimulating the right hemisphere, it could be possible to support speech recovery,” speculates the scientist. Timing would be very important here. “Right after a stroke, we could support the right hemisphere. But once the remaining areas of the left hemisphere are ready to do their work again, it might be more helpful to suppress the right hemisphere. During this phase, we could stimulate the left hemisphere instead. The correct timing can therefore be crucial for the recovery of speech after a stroke.”

In collaboration with the Department of Neurology at Kiel University, a stroke specialist from Leipzig, and doctoral students in Medicine and Psychology, Gesa Hartwigsen has started a follow-up study to the recent publication. “We would like to find out more about the collaboration of the hemispheres and the right timing in helping stroke patients to recover,” says Hartwigsen. Her field of research is fairly new within cognitive neuroscience. Nevertheless, she is confident that it will offer practical help in the form of concrete therapies within the next ten to fifteen years.

Filed under stroke speech speech production aphasia broca's area psychology neuroscience science

90 notes

Genetic Defect Keeps Verbal Cues From Hitting the Mark

A genetic defect that profoundly affects speech in humans also disrupts the ability of songbirds to sing effective courtship tunes. This defect in a gene called FoxP2 renders the brain circuitry insensitive to feel-good chemicals that serve as a reward for speaking the correct syllable or hitting the right note, a recent study shows. 


The research, which was conducted in adult zebra finches, gives insight into how this genetic mutation impairs a network of nerve cells to cause the stuttering and stammering typical of people with FoxP2 mutations. It appears Nov. 21 in an early online edition of the journal Neuron.

"Our results integrate a lot of different observations that have accrued on the FoxP2 mutation and cast a different perspective on what this mutation is doing," said Richard Mooney, Ph.D., the George Barth Geller professor of neurobiology at Duke University School of Medicine and a member of the Duke Institute for Brain Sciences. "FoxP2 mutations do not simply result in a cognitive or learning deficit, but also produce an ongoing motor deficit. Individuals with these mutations can still learn and can still improve; it is just harder for them to reliably hit the right mark." 

About 15 years ago, researchers discovered a British family with many members suffering from severe speech and language deficits. Geneticists eventually pinned down the culprit — a gene called forkhead box transcription factor or FoxP2 — that was mutated in all the affected individuals. The discovery led many to believe FoxP2 was a “language gene” that granted humans the ability to speak. But further studies showed that the gene wasn’t unique to humans, and in fact was conserved among all vertebrates, including songbirds. 

Though the gene is present in every cell, it is “active,” or turned on, mostly in brain cells, particularly ones residing in a region deep within the brain known as the basal ganglia. This region is dysfunctional in Tourette syndrome, known for its vocal tics and outbursts, and is also shrunk in individuals with FoxP2 mutations. 

To explore the complex circuitry involved in these deficits, Mooney and his former graduate student Malavika Murugan, Ph.D., decided to replicate the human mutation in this region of the brain in songbirds. Zebra finches start learning how to sing 30 days after they hatch, listening to a male tutor and then practicing thousands of times a day until, 60 days later, they are able to make a very good copy of the tutor’s song. As good as that copy is at day 90, the male finch’s song gets even more precise when he “directs” it to a female as part of courtship.

To investigate the role of FoxP2 in the generation of this directed song, Murugan introduced specifically targeted sequences of RNA to suppress FoxP2 activity in the basal ganglia of male zebra finches. The birds were placed in a glass cage with a female visible on the other side. Murugan then recorded sonograms of the males’ directed songs to capture subtle vocal variations that are indistinguishable to the human ear.

Murugan found that though the genetically manipulated males had already learned how to sing, their ability to hit the right note repeatedly in the presence of a female — a behavior critical to attracting a mate — was subpar. This indicates that in songbirds, FoxP2 has an ongoing role in vocal control separate from a role in learning, a distinction that has not been possible to make in humans with FOXP2 mutations. 

Having deduced the behavior associated with this genetic mutation, the researchers then identified underlying neural deficits by recording brain activity in birds with normal and altered FoxP2 genes. In one set of experiments, Murugan sent an electrical signal into the input side of the basal ganglia pathway and then used an electrode on the output side to measure how quickly the signal traveled from one side to the other. Surprisingly, the signal moved more quickly through the basal ganglia of FoxP2 mutant songbirds than it did in songbirds with the functional gene. 

Murugan also found that dopamine, an important brain chemical involved in brain signaling and the reinforcement of learned behaviors like singing or playing sports, could influence how fast basal ganglia signals propagated in birds with normal but not mutant forms of FoxP2.  

"This switch between undirected and directed song is actually dependent on the influx of this neurotransmitter called dopamine," said Murugan, first author of the study. "So what we think is happening is knocking down FoxP2 makes the male incapable of reducing song variability in the presence of a female. An adult male sees the female, there is an influx of dopamine, but because the system is insensitive, the dopamine has no effect and the adult male continues to sing a variable tune." In juveniles, the inability to constrain variability and to respond to dopamine could also account for poor learning.

Though the researchers are cautious not to draw too many parallels between their findings in birds and the deficits in humans, they think their study does highlight the value of songbirds in studying human behaviors and disease.

"Birds are one of the few non-human animals that learn to vocalize," said Mooney. "They produce songs for courtship that they culturally transmit from one generation to the next. Their brains might be a thousandth the size of ours, but in this one dimension, vocal learning, they are our equal."

(Source: today.duke.edu)

Filed under FoxP2 speech genetic mutation songbirds basal ganglia dopamine neuroscience science
