Neuroscience

Articles and news from the latest research reports.

Posts tagged language

86 notes

Ultra-high-field MRI reveals language centres in the brain in much more detail
A new investigation by the University Department of Neurology has demonstrated for the first time that the areas of the brain that are important for understanding language can be pinpointed much more accurately using ultra-high-field MRI (7 Tesla) than with conventional clinical MRI scanners. This helps to protect these areas more effectively during brain surgery and to avoid accidentally damaging them.
Before brain surgery, it is important to know precisely which areas of the brain are required for language in order to avoid injuring them during the procedure. Their position can shift considerably, especially in patients with tumours or brain injuries. The brain’s plasticity also means that language centres can relocate to other regions. If the areas responsible for language control and processing are injured during a brain operation, the patient can be left unable to communicate. To create a “map” of the language control centres prior to the operation, functional magnetic resonance imaging (fMRI) is currently used.
A multi-centre study from 2013 demonstrated the advantages of fMRI-assisted localisation of the motor centres in the brain. A new investigation by the working group led by Roland Beisteiner (University Department of Neurology) has now demonstrated for the first time that the areas of the brain that are important for understanding language can be pinpointed even more accurately using ultra-high-field MRI (7 Tesla) than with conventional clinical MRI scanners. The focus lies on the brain’s two most important language centres: Wernicke’s area (which controls the understanding of language) and Broca’s area (which controls the motor functions involved in speech).
The brain is scanned for activity while the patient carries out speech exercises. This allows the areas required for speech to be localised much more accurately than was previously possible. “Ultra-high-field MR offers much greater sensitivity than classic MRI scanners,” explains Roland Beisteiner, “allowing even very weak signals to be recorded in areas that would otherwise have been missed.”
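The sensitivity argument can be illustrated with a toy simulation: the same weak task-related signal survives a simple statistical test at a low noise level but can be lost at a higher one. All numbers below (signal amplitude, noise levels, threshold) are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def detectable(signal_amplitude, noise_sd, n_timepoints=200, z_threshold=3.0):
    """Return True if an on/off task signal survives a simple z-test."""
    design = np.tile([0] * 10 + [1] * 10, n_timepoints // 20)  # alternating rest/task blocks
    data = signal_amplitude * design + rng.normal(0, noise_sd, n_timepoints)
    on, off = data[design == 1], data[design == 0]
    # two-sample z statistic for the task-vs-rest contrast
    z = (on.mean() - off.mean()) / np.sqrt(on.var() / on.size + off.var() / off.size)
    return bool(z > z_threshold)

weak = 0.2  # a weak activation, in arbitrary signal units
print(detectable(weak, noise_sd=1.0))   # noisier scanner: may be missed
print(detectable(weak, noise_sd=0.1))   # more sensitive scanner: detected
```

Real fMRI analysis fits a full general linear model with a haemodynamic response function; this sketch captures only the signal-to-noise intuition behind the 7 Tesla advantage.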

Filed under neuroimaging fMRI brain activity language neuroscience science

234 notes

The pleasure of learning new words
From our very first years, we are intrinsically motivated to learn new words and their meanings. First language acquisition occurs within a permanent emotional interaction between parents and children. However, the exact mechanism behind the human drive to acquire communicative linguistic skills is yet to be established.
In a study published in the journal Current Biology, researchers from the University of Barcelona (UB), the Bellvitge Biomedical Research Institute (IDIBELL) and the Otto von Guericke University Magdeburg (Germany) have shown experimentally that word learning in human adults activates not only cortical language regions but also the ventral striatum, a core region of reward processing. The results confirm that the motivation to learn is preserved throughout the lifespan, helping adults to acquire a second language.
The researchers determined that the reward region activated is the same one that responds to a wide range of stimuli, including food, sex, drugs and gambling. “The main objective of the study was to establish to what extent language learning activates subcortical reward and motivational systems,” explains Pablo Ripollés, PhD student at UB-IDIBELL and first author of the article. “Moreover, the fact that language could be favoured by this type of circuitry is an interesting hypothesis from an evolutionary point of view,” he points out.
According to Antoni Rodríguez Fornells, UB lecturer and ICREA researcher at IDIBELL, “the language region has traditionally been located in an apparently encapsulated cortical structure which has never been related to reward circuitry, which is considered much older from an evolutionary perspective”. The study, he adds, “questions whether language comes only from cortical evolution or structured mechanisms, and suggests that emotions may influence language acquisition processes”.
Subcortical areas are closely linked to those that help to store information. Facts or pieces of information that stir an emotion are therefore easier to remember and learn.
Motivation for learning a second language

Using diffusion tensor imaging, UB-IDIBELL researchers reconstructed the white matter pathways that link brain regions in each participant. They were able to correlate the number of new words learnt by each person during the experiment with the myelin index of these pathways, a measure of structural integrity. The results showed that subjects with higher myelin concentrations in the structures that carry information to the ventral striatum (in other words, those best connected to the reward area) were able to learn more words.
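The analysis logic here, correlating a per-participant integrity index with the number of words learned, can be sketched with synthetic data. The participant values below are invented for illustration; they are not from the study.

```python
import numpy as np

# Hypothetical per-participant data: white-matter integrity of the pathway
# into the ventral striatum, and number of new words learnt in the task.
integrity = np.array([0.42, 0.55, 0.38, 0.61, 0.49, 0.58, 0.35, 0.52])
words_learned = np.array([5, 9, 4, 11, 7, 10, 3, 8])

# Pearson correlation between structural integrity and learning performance.
r = np.corrcoef(integrity, words_learned)[0, 1]
print(f"Pearson r = {r:.2f}")  # strongly positive for this toy data
```

The published analysis additionally controls for confounds and corrects for multiple comparisons across tracts; this sketch shows only the core correlation step.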
“The results provide a neural substrate for the influence that reward and motivation circuitry may have on learning words from context,” says Josep Marco Pallarès, UB-IDIBELL researcher. The activation of these circuits during word learning points to future lines of research aimed at stimulating reward regions to improve language learning in patients with language impairments.
The fact that non-linguistic subcortical mechanisms, which are much older from an evolutionary perspective, work together with cortical language regions, which appeared later, points towards new language theories that explain how reward mechanisms have influenced and supported one of our primal urges: the desire to acquire language and to communicate.
Experiment with words and gambling
The researchers carried out an experiment with thirty-six adults who took part in two magnetic resonance sessions. In the first session, functional magnetic resonance imaging was used to measure participants’ brain activity while they performed two different tasks. This technique makes it possible to detect accurately which brain regions are active while a person is performing a certain activity. In the first task, participants had to learn the meaning of new words from their context in two different sentences. For instance, subjects saw on a screen the sentences “Every Sunday the grandmother went to the jedin” and “The man was buried in the jedin”. Considering both sentences, participants could infer that the word jedin means “graveyard”. Participants then completed two runs of a standard event-related monetary gambling task.
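The learning-from-context logic of the first task can be mimicked as a toy set intersection: each sentence constrains the possible meanings, and only the meaning compatible with both contexts survives. The candidate sets below are invented for illustration.

```python
# Candidate meanings compatible with each sentence (hypothetical candidate sets).
context_1 = {"park", "graveyard", "market", "church"}  # "...the grandmother went to the jedin"
context_2 = {"graveyard", "field"}                     # "The man was buried in the jedin"

# The meaning consistent with both contexts is the intersection.
inferred = context_1 & context_2
print(inferred)  # {'graveyard'}
```

Human inference over sentence contexts is of course far richer than set intersection, but this captures why two sentences suffice where one is ambiguous.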
The experiment revealed that when subjects inferred and memorised the meaning of a new word, brain activity in the ventral striatum increased. The same ventral striatum activation was observed when participants won money in the gambling task. Learning the meaning of a new word therefore activates reward and motivational circuitry just as gambling does. Moreover, word learning produced an increase in the synchronisation of brain activity between the ventral striatum and cortical language regions.

Filed under language acquisition language striatum brain activity neuroscience science

312 notes

Improving Babies’ Language Skills Before They’re Even Old Enough to Speak
In the first months of life, when babies begin to distinguish sounds that make up language from all the other sounds in the world, they can be trained to more effectively recognize which sounds “might” be language, accelerating the development of the brain maps which are critical to language acquisition and processing, according to new Rutgers research.
The study by April Benasich and colleagues of Rutgers University-Newark is published in the October 1 issue of the Journal of Neuroscience. 
The researchers found that when 4-month-old babies learned to pay attention to increasingly complex non-language audio patterns and were rewarded for correctly shifting their eyes to a video reward when the sound changed slightly, their brain scans at 7 months old showed they were faster and more accurate at detecting other sounds important to language than babies who had not been exposed to the sound patterns. 
“Young babies are constantly scanning the environment to identify sounds that might be language,” says Benasich, who directs the Infancy Studies Laboratory at the University’s Center for Molecular and Behavioral Neuroscience. “This is one of their key jobs – as between 4 and 7 months of age they are setting up their pre-linguistic acoustic maps. We gently guided the babies’ brains to focus on the sensory inputs which are most meaningful to the formation of these maps.” 
Acoustic maps are pools of interconnected brain cells that an infant brain constructs to allow it to decode language both quickly and automatically – and well-formed maps allow faster and more accurate processing of language, a function that is critical to optimal cognitive functioning. Benasich says babies of this particular age may be ideal for this kind of training.
“If you shape something while the baby is actually building it,” she says, “it allows each infant to build the best possible auditory network for his or her particular brain. This provides a stronger foundation for any language (or languages) the infant will be learning. Compare the baby’s reactions to language cues to an adult driving a car. You don’t think about specifics like stepping on the gas or using the turn signal. You just perform them. We want the babies’ recognition of any language-specific sounds they hear to be just that automatic.”
Benasich says she was able to accelerate and optimize the construction of babies’ acoustic maps, as compared to those of infants who either passively listened or received no training, by rewarding the babies with a brief colorful video when they responded to changes in the rapidly varying sound patterns. The sound changes could take just tens of milliseconds, and became more complex as the training progressed.
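Training paradigms of this kind are typically run as an adaptive staircase: the sound change shrinks after correct detections (harder) and grows back after misses (easier). The durations and step rule below are invented for illustration, not taken from the study.

```python
def staircase(responses, start_ms=300.0, floor_ms=30.0, step=0.8):
    """Track the size of the sound change across trials: shrink it after a
    correct detection (harder) and grow it back after a miss (easier)."""
    duration = start_ms
    history = []
    for correct in responses:
        history.append(duration)
        if correct:
            duration = max(floor_ms, duration * step)  # make the change subtler
        else:
            duration = min(start_ms, duration / step)  # back off
    return history

# A run of correct detections drives the change down toward tens of milliseconds.
levels = staircase([True] * 10)
print(levels[0], round(levels[-1], 1))
```

The converging level tracks the smallest change the listener can reliably detect, which is how the training progresses from easy to tens-of-milliseconds differences.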
Looking for lasting improvement in language skills
“While playing this fun game we can convey to the baby, ‘Pay attention to this. This is important. Now pay attention to this. This is important,’” says Benasich. “This process helps the baby to focus tightly on sounds in the environment that ‘may’ have critical information about the language they are learning. Previous research has shown that accurate processing of these tens-of-milliseconds differences in infancy is highly predictive of the child’s language skills at 3, 4 and 5 years.”
The experiment has the potential to provide lasting benefits. The EEG (electroencephalogram) scans showed the babies’ brains processed sound patterns with increasing efficiency at 7 months of age after six weekly training sessions. The research team will follow these infants through 18 months of age to see whether they retain and build upon these abilities with no further training. That outcome would suggest to Benasich that once the child’s earliest acoustic maps are formed in the most optimal way, the benefits will endure.  
Benasich says this training has the potential to advance the development of typically developing babies as well as children at higher risk for developmental language difficulties. For parents who think this might turn their babies into geniuses, the answer is – not necessarily.  Benasich compares the process of enhancing acoustic maps to some people’s wishes to be taller. “There’s a genetic range to how tall you become – perhaps you have the capacity to be 5’6” to 5’9”,” she explains. “If you get the right amounts and types of food, the right environment, the right exercise, you might get to 5’9” but you wouldn’t be 6 feet. The same principle applies here.”
Benasich says it’s very likely that one day parents at home will be able to use an interactive toy-like device – now under development – to mirror what she accomplished in the baby lab and maximize their babies’ potential. For the 8 to 15 percent of infants at highest risk for poor acoustic processing and subsequent delayed language, this baby-friendly behavioral intervention could have far-reaching implications and may offer the promise of improving or perhaps preventing language difficulties.

Filed under language language development EEG cognitive function sound processing neuroscience science

115 notes

Presence or absence of early language delay alters anatomy of the brain in autism
A new study led by researchers from the University of Cambridge has found that a common characteristic of autism – language delay in early childhood – leaves a ‘signature’ in the brain. The results are published today (23 September) in the journal Cerebral Cortex.
The researchers studied 80 adult men with autism: 38 who had delayed language onset and 42 who did not. They found that language delay was associated with differences in brain volume in a number of key regions, including the temporal lobe, insula and ventral basal ganglia, which were all smaller in those with language delay, and in brainstem structures, which were larger in those with delayed language onset.
Additionally, they found that current language function is associated with a specific pattern of grey and white matter volume changes in some key brain regions, particularly temporal, frontal and cerebellar structures.
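A regional volume comparison between the two groups reduces to a two-sample test per region. A minimal sketch with synthetic volumes follows; the group sizes match the study, but the volume numbers and the use of Welch's t statistic here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic regional volumes (arbitrary units) for the two groups;
# group sizes match the study, the volume values are invented.
delay = rng.normal(100, 8, size=38)     # language-delay group
no_delay = rng.normal(108, 8, size=42)  # no-delay group

def welch_t(a, b):
    """Welch's t statistic for an unequal-variance two-sample comparison."""
    return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)

t = welch_t(delay, no_delay)
print(f"t = {t:.2f}")  # negative: the delay group's volumes are smaller
```

The published voxel-based morphometry analysis runs such comparisons across the whole brain with correction for multiple comparisons; this shows only the per-region test.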
The Cambridge researchers, in collaboration with King’s College London and the University of Oxford, studied participants who were part of the MRC Autism Imaging Multicentre Study (AIMS).
Delayed language onset – defined as when a child’s first meaningful words occur after 24 months of age, or their first phrase occurs after 33 months of age – is seen in a subgroup of children with autism, and is one of the clearest features triggering an assessment for developmental delay in children, including an assessment of autism.
“Although people with autism share many features, they also have a number of key differences,” said Dr Meng-Chuan Lai of the Cambridge Autism Research Centre, and the paper’s lead author. “Language development and ability is one major source of variation within autism. This new study will help us understand the substantial variety within the umbrella category of ‘autism spectrum’. We need to move beyond investigating average differences in individuals with and without autism, and move towards identifying key dimensions of individual differences within the spectrum.”
He added: “This study shows how the brain in men with autism varies based on their early language development and their current language functioning. This suggests there are potentially long-lasting effects of delayed language onset on the brain in autism.”
Last year, the American Psychiatric Association removed Asperger Syndrome (Asperger’s Disorder) as a separate diagnosis from its diagnostic manual (DSM-5), and instead subsumed it within ‘autism spectrum disorder.’ The change was one of many controversial decisions in DSM-5, the main manual for diagnosing psychiatric conditions.
“This new study shows that a key feature of Asperger Syndrome, the absence of language delay, leaves a long lasting neurobiological signature in the brain,” said Professor Simon Baron-Cohen, senior author of the study. “Although we support the view that autism lies on a spectrum, subgroups based on developmental characteristics, such as Asperger Syndrome, warrant further study.”
“It is important to note that we found both differences and shared features in individuals with autism who had or had not experienced language delay,” said Dr Lai. “When asking ‘Is autism a single spectrum or are there discrete subgroups?’, the answer may be both.”

Filed under autism language language development brain volume individual differences neuroscience science

190 notes

Neuroscientists identify key role of language gene
Neuroscientists have found that a gene mutation that arose more than half a million years ago may be key to humans’ unique ability to produce and understand speech.
Researchers from MIT and several European universities have shown that the human version of a gene called Foxp2 makes it easier to transform new experiences into routine procedures. When they engineered mice to express humanized Foxp2, the mice learned to run a maze much more quickly than normal mice.
The findings suggest that Foxp2 may help humans with a key component of learning language — transforming experiences, such as hearing the word “glass” when we are shown a glass of water, into a nearly automatic association of that word with objects that look and function like glasses, says Ann Graybiel, an MIT Institute Professor, member of MIT’s McGovern Institute for Brain Research, and a senior author of the study.
“This really is an important brick in the wall saying that the form of the gene that allowed us to speak may have something to do with a special kind of learning, which takes us from having to make conscious associations in order to act to a nearly automatic-pilot way of acting based on the cues around us,” Graybiel says.
Wolfgang Enard, a professor of anthropology and human genetics at Ludwig-Maximilians University in Germany, is also a senior author of the study, which appears in the Proceedings of the National Academy of Sciences this week. The paper’s lead authors are Christiane Schreiweis, a former visiting graduate student at MIT, and Ulrich Bornschein of the Max Planck Institute for Evolutionary Anthropology in Germany.
All animal species communicate with each other, but humans have a unique ability to generate and comprehend language. Foxp2 is one of several genes that scientists believe may have contributed to the development of these linguistic skills. The gene was first identified in a group of family members who had severe difficulties in speaking and understanding speech, and who were found to carry a mutated version of the Foxp2 gene.
In 2009, Svante Pääbo, director of the Max Planck Institute for Evolutionary Anthropology, and his team engineered mice to express the human form of the Foxp2 gene, which encodes a protein that differs from the mouse version by only two amino acids. His team found that these mice had longer dendrites — the slender extensions that neurons use to communicate with each other — in the striatum, a part of the brain implicated in habit formation. They were also better at forming new synapses, or connections between neurons.
Pääbo, who is also an author of the new PNAS paper, and Enard enlisted Graybiel, an expert in the striatum, to help study the behavioral effects of replacing Foxp2. They found that the mice with humanized Foxp2 were better at learning to run a T-shaped maze, in which the mice must decide whether to turn left or right at a T-shaped junction, based on the texture of the maze floor, to earn a food reward.
The first phase of this type of learning requires using declarative memory, or memory for events and places. Over time, these memory cues become embedded as habits and are encoded through procedural memory — the type of memory necessary for routine tasks, such as driving to work every day or hitting a tennis forehand after thousands of practice strokes.
Using another type of maze called a cross-maze, Schreiweis and her MIT colleagues were able to test the mice’s ability in each type of memory alone, as well as the interaction of the two types. They found that the mice with humanized Foxp2 performed the same as normal mice when just one type of memory was needed, but their performance was superior when the learning task required them to convert declarative memories into habitual routines. The key finding was therefore that the humanized Foxp2 gene makes it easier to turn mindful actions into behavioral routines.
The protein produced by Foxp2 is a transcription factor, meaning that it turns other genes on and off. In this study, the researchers found that Foxp2 appears to turn on genes involved in the regulation of synaptic connections between neurons. They also found enhanced dopamine activity in a part of the striatum that is involved in forming procedures. In addition, the neurons of some striatal regions could be turned off for longer periods in response to prolonged activation — a phenomenon known as long-term depression, which is necessary for learning new tasks and forming memories.
Together, these changes help to “tune” the brain differently to adapt it to speech and language acquisition, the researchers believe. They are now further investigating how Foxp2 may interact with other genes to produce its effects on learning and language.
This study “provides new ways to think about the evolution of Foxp2 function in the brain,” says Genevieve Konopka, an assistant professor of neuroscience at the University of Texas Southwestern Medical Center who was not involved in the research. “It suggests that human Foxp2 facilitates learning that has been conducive for the emergence of speech and language in humans. The observed differences in dopamine levels and long-term depression in a region-specific manner are also striking and begin to provide mechanistic details of how the molecular evolution of one gene might lead to alterations in behavior.”

Filed under Foxp2 gene mutation language language acquisition speech learning neuroscience science

64 notes

Study First to Use Brain Scans to Forecast Early Reading Difficulties

UC San Francisco researchers have used brain scans to predict how young children learn to read, giving clinicians a possible tool to spot children with dyslexia and other reading difficulties before they experience reading challenges.

In the United States, children usually learn to read for the first time in kindergarten and become proficient readers by third grade, according to the authors. In the study, researchers examined brain scans of 38 kindergarteners as they were learning to read formally at school and tracked their white matter development until third grade. The brain’s white matter is essential for perceiving, thinking and learning.

The researchers found that the developmental course of the children’s white matter volume predicted the kindergarteners’ abilities to read.

“We show that white matter development during a critical period in a child’s life, when they start school and learn to read for the very first time, predicts how well the child ends up reading,” said Fumiko Hoeft, MD, PhD, senior author and an associate professor of child and adolescent psychiatry at UCSF, and member of the UCSF Dyslexia Center.

The research is published online in Psychological Science.

Doctors commonly use behavioral measures of reading readiness to assess ability. Other common early risk factors include cognitive ability (e.g., IQ), early linguistic skills, measures of the environment such as socio-economic status, and whether a family member has reading problems or dyslexia.

“What was intriguing in this study was that brain development in regions important to reading predicted above and beyond all of these measures,” said Hoeft.

The researchers removed the effects of these commonly used assessments in their statistical analyses in order to test how directly white matter development predicted future reading ability. They found that left-hemisphere white matter in the temporo-parietal region just behind and above the left ear, an area thought to be important for language, reading and speech, was highly predictive of reading acquisition beyond the effects of genetic predisposition, cognitive abilities, and environment at the outset of kindergarten. Adding the brain scans improved prediction of reading difficulties by 60 percent compared with traditional assessments alone.
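The logic of this kind of analysis, regressing out baseline measures and then asking whether the brain measure still predicts the leftover variance, can be sketched with synthetic data. Everything below (the weights, noise levels, and variable names) is invented for illustration and is not the study's actual model:

```python
# Toy "incremental prediction" sketch: after removing the effect of a behavioral
# baseline from a reading score, does a brain measure still predict what is left?
# All data are synthetic; the weights and noise levels are invented.
import random
random.seed(2)

n = 38  # same sample size as the study, purely for flavor
behavior = [random.gauss(0, 1) for _ in range(n)]
brain = [random.gauss(0, 1) for _ in range(n)]
reading = [0.4 * b + 0.6 * w + random.gauss(0, 0.5)
           for b, w in zip(behavior, brain)]

def fit_line(x, y):
    """Univariate least squares: slope a and intercept c for y ~ a*x + c."""
    m = len(x)
    xbar, ybar = sum(x) / m, sum(y) / m
    a = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
    return a, ybar - a * xbar

def corr(x, y):
    """Pearson correlation between two equal-length lists."""
    m = len(x)
    xbar, ybar = sum(x) / m, sum(y) / m
    num = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    den = (sum((xi - xbar) ** 2 for xi in x)
           * sum((yi - ybar) ** 2 for yi in y)) ** 0.5
    return num / den

# Remove the behavioral baseline from reading, keeping only the residuals.
a, c = fit_line(behavior, reading)
leftover = [yi - (a * xi + c) for xi, yi in zip(behavior, reading)]

# The brain measure still correlates with the residual reading variance.
print(round(corr(brain, leftover), 2))
```

By construction the residuals are uncorrelated with the baseline, so any remaining correlation with the brain measure is prediction "above and beyond" the behavioral assessments, which is the sense in which the study's brain data added value.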

“Early identification and interventions are extremely important in children with dyslexia as well as most neurodevelopmental disorders,” said Hoeft. “Accumulation of research evidence such as ours may one day help us identify kids who might be at risk for dyslexia, rather than waiting for children to become poor readers and experience failure.”

According to the National Institute of Child Health and Human Development, as many as 15 percent of Americans have major trouble reading.

“Examining developmental changes in the brain over a critical period of reading appears to be a unique sensitive measure of variation and may add insight to our understanding of reading development in ways that brain data from one time point, and behavioral and environmental measures, cannot,” said Chelsea Myers, BS, lead author and lab manager in UCSF’s Laboratory for Educational NeuroScience. “The hope is that understanding each child’s neurocognitive profiles will help educators provide targeted and personalized education and intervention, particularly in those with special needs.”

(Source: ucsf.edu)

Filed under reading difficulties dyslexia white matter brain development language psychology neuroscience science

152 notes

People understand hyperbole through intent of communication

People tend to understand nonliteral language – metaphor, hyperbole and exaggerated statements – when they realize the purpose of the communication, according to new Stanford research.

Noah Goodman, an assistant professor of psychology at Stanford, believes that figurative language – the nuanced ways that people use language to communicate meanings different than the literal meaning of their words – is one of the deepest mysteries of human communication.

"Human communication," he said, "is rife with nonliteral language that includes metaphor, irony and hyperbole. When we say ‘Juliet is the sun’ or ‘That watch cost a million dollars,’ listeners read through the direct meanings – which are often false if taken literally – to understand subtle connotations."

'Sharp' vs. 'round' numbers

To understand this communication dynamic, Goodman, director of the Computation and Cognition Lab at Stanford, and his colleagues used computational modeling. Stanford graduate student Justine Kao was the first author on the paper, which included co-authors Jean Wu, a former graduate student at Stanford, and Leon Bergen of the Massachusetts Institute of Technology.

In their lab, they develop computational models that use pragmatic reasoning to interpret metaphorical utterances. Their research for this particular project involved four online experiments with 340 subjects.

Participants in the experiments read different scenarios involving hyperbole. For example, a person bought a watch and was asked by a friend whether it was expensive. That person responded with different figures, ranging from low- to high-cost figures – such as $50, $51, $10,000 or $10,001. Given this, the participants rated the probability of the purchaser thinking it was an expensive watch or not.

People tended to interpret “sharp numbers” – such as a watch costing $51 – more precisely than “round numbers,” as in a watch costing $50. 

The findings suggest that even creative and figurative language may follow predictable and rational principles.

Kao said, “This research advances our understanding of communication by providing evidence that reasoning about a speaker’s goals is critical for understanding nonliteral language. We were able to capture nuanced and nonliteral interpretations of number words using a computational model.”
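The flavor of such a model can be sketched as a toy Bayesian listener. The candidate prices, the prior, and the round-number imprecision rule below are all invented for illustration; this is a simplification, not the study's actual model:

```python
# Toy Bayesian listener for price utterances: round numbers are heard as
# imprecise, sharp numbers as exact. Prices and priors are invented.
prices = [50, 51, 500, 10000]                       # candidate true prices
prior = {50: 0.4, 51: 0.3, 500: 0.25, 10000: 0.05}  # listener's prior beliefs

def literal_meaning(utterance, price):
    """Round numbers match any price within 10%; sharp numbers match exactly."""
    if utterance % 10 == 0:                  # "round", e.g. 50 or 10000
        return abs(price - utterance) <= utterance * 0.1
    return price == utterance                # "sharp", e.g. 51

def listener(utterance):
    """Posterior over true prices given the utterance (Bayes' rule)."""
    scores = {p: prior[p] * literal_meaning(utterance, p) for p in prices}
    total = sum(scores.values())
    return {p: s / total for p, s in scores.items()}

print(listener(51))  # sharp: all posterior mass lands on exactly $51
print(listener(50))  # round: mass is shared between $50 and the nearby $51
```

Even this stripped-down listener reproduces the qualitative finding: "$51" is interpreted precisely, while "$50" leaves room for nearby values.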

Common ground

The research showed that if listeners are trying to understand the topic and goal of communication as well as the underlying subtext – that which is not expressed explicitly – they’re better able to truly understand the utterance. A sense of common knowledge about what is being described or expressed is also important. Speakers and listeners assume that individuals are rational agents who use common ground and reference points to best maximize information.

As Kao put it, “There is still a long way to go before computers can understand Shakespeare, but it is a start.”

Goodman offered this example: Imagine someone describing a new restaurant, and she says, “It took 30 minutes to get a table.” People are most likely to interpret this to mean she waited about 30 minutes. But if she says, “It took a million years to get a table,” people will probably interpret this to mean that the wait was shorter than a million years, but that the person thinks it was much too long.

"One of the most fascinating facts about communication is that people do not always mean what they say – a crucial part of the listener’s job is to understand an utterance even when its literal meaning is false," the researchers wrote.

Goodman said the computational model he and his colleagues use to understand nonliteral utterances integrates empirically measured background knowledge, communication principles and reasoning about communication goals.

What is next in line research-wise?

Goodman and the others said they believe that the same ideas and techniques can extend to metaphor, irony and many other uses of language. For example, they have a promising initial exploration of “is a” metaphors such as “your lawyer is a shark,” Goodman said.

"Beyond these cases of figurative speech, the overall mathematical framework is beginning to give a precise theory of natural language understanding that takes into account context, intention and many subtle shades of meaning," he said, adding, "There is a lot more work to do."

(Source: news.stanford.edu)

Filed under communication language nonliteral language hyperbole pragmatics psychology neuroscience science

291 notes

Brain imaging study examines second-language learning skills

With enough practice, some learners of a second language can process their new language as well as native speakers, research at the University of Kansas shows.

Using brain imaging, a trio of KU researchers was able to examine to the millisecond how the brain processes a second language. They then compared their findings with their previous results for native speakers and saw both followed similar patterns.

The research by Robert Fiorentino and Alison Gabriele, both associate professors in the linguistics department, and José Alemán Bañón, a former KU graduate student who is now a postdoctoral researcher at the University of Reading in the United Kingdom, was published this month in the journal Second Language Research.

For years, linguists have debated whether second-language learners would ever resemble native speakers in their ability to process language properties that differ between the first and second language, such as gender agreement, which is a property of Spanish but not English. In Spanish, all nouns are categorized as masculine or feminine, and various elements in the sentence, such as adjectives, need to carry the gender feature of the noun as well.

Some researchers argued that even those who spoke a second language with a high level of accuracy were using a qualitatively different mechanism than native speakers.

“We realized that these different theories, proposing that second-language learners use either the same mechanism or a different mechanism, could actually be teased apart by using brain-imaging techniques,” Gabriele said.

The team studied 26 highly proficient Spanish speakers who hadn’t learned to speak Spanish until after age 11 and grew up with English as the majority language.

They were compared with 24 native speakers, who were raised in Spanish-speaking countries and stayed in their home country until age 17.

To measure language processing as it happens, the team used a method known as electroencephalography (EEG), which uses an array of electrodes placed on the scalp to detect patterns of brain activity with high accuracy in timing.

Once hooked up to the EEG, the test subjects were asked to read sentences, some of which had grammatical errors in either number agreement or gender agreement.

The researchers then compared the results of the second-language learners to native speakers. They found that the highly proficient second-language speakers showed the same patterns of brain activity as native speakers when processing grammatical violations in sentences.
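The core logic of such an EEG comparison, averaging many time-locked trials so that a condition difference emerges from the noise, can be sketched with synthetic data. The signal shape, window, and amplitudes below are invented and do not reflect the study's recordings:

```python
# Toy ERP-style analysis: averaging many simulated EEG epochs time-locked to a
# critical word suppresses noise and exposes a condition difference (here, a
# made-up late positivity to grammatical violations). All data are synthetic.
import random
random.seed(0)

def make_epoch(has_violation, n_samples=100):
    """One simulated single-trial epoch: Gaussian noise, plus a late bump
    (samples 60-79) when the sentence contains a grammatical violation."""
    epoch = [random.gauss(0, 5) for _ in range(n_samples)]
    if has_violation:
        for t in range(60, 80):
            epoch[t] += 4.0
    return epoch

def erp(epochs):
    """Sample-by-sample average across trials."""
    n = len(epochs)
    return [sum(e[t] for e in epochs) / n for t in range(len(epochs[0]))]

grammatical = erp([make_epoch(False) for _ in range(200)])
violation = erp([make_epoch(True) for _ in range(200)])

# Mean amplitude difference in the late window: positive for violations.
late_diff = (sum(violation[60:80]) - sum(grammatical[60:80])) / 20
print(round(late_diff, 2))  # close to the injected 4.0 effect
```

Comparing the timing and scalp distribution of averaged responses like these across groups is what lets researchers argue that learners and native speakers engage qualitatively similar mechanisms.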

“We show that the learners’ brain activity looks qualitatively similar to that of native speakers, suggesting that they are using the same mechanisms,” Fiorentino said.

The study highlights the brain’s plasticity and its ability to acquire a new complex system even in adulthood.

“A lot of researchers have argued that there is some sort of language learning mechanism that might atrophy over the life span, particularly before puberty. And, we certainly have a lot of evidence that it is difficult to process your second language at nativelike levels and you have to go through quite a bit of effort to find people who can,” Gabriele said. “But I think what this paper shows is that it is possible.”

Gabriele and Fiorentino are working on a second phase of the research, studying how the brain processes a second language at the initial stages of exposure. Their preliminary results suggest that properties that are shared between the first and second language show patterns of brain activity that are very similar in learners and native speakers. This suggests that learners build on the representation for language that is already in place when learning a second language.

(Source: news.ku.edu)

Filed under language language acquisition brain imaging EEG brain activity neuroscience science

1,993 notes

Try, try again? Study says no
When it comes to learning languages, adults and children have different strengths. Adults excel at absorbing the vocabulary needed to navigate a grocery store or order food in a restaurant, but children have an uncanny ability to pick up on subtle nuances of language that often elude adults. Within months of living in a foreign country, a young child may speak a second language like a native speaker.
Brain structure plays an important role in this “sensitive period” for learning language, which is believed to end around adolescence. The young brain is equipped with neural circuits that can analyze sounds and build a coherent set of rules for constructing words and sentences out of those sounds. Once these language structures are established, it’s difficult to build another one for a new language.
In a new study, a team of neuroscientists and psychologists led by Amy Finn, a postdoc at MIT’s McGovern Institute for Brain Research, has found evidence for another factor that contributes to adults’ language difficulties: When learning certain elements of language, adults’ more highly developed cognitive skills actually get in the way. The researchers discovered that the harder adults tried to learn an artificial language, the worse they were at deciphering the language’s morphology — the structure and deployment of linguistic units such as root words, suffixes, and prefixes.
“We found that effort helps you in most situations, for things like figuring out what the units of language that you need to know are, and basic ordering of elements. But when trying to learn morphology, at least in this artificial language we created, it’s actually worse when you try,” Finn says.
Finn and colleagues from the University of California at Santa Barbara, Stanford University, and the University of British Columbia describe their findings in the July 21 issue of PLoS One. Carla Hudson Kam, an associate professor of linguistics at British Columbia, is the paper’s senior author.
Too much brainpower
Linguists have known for decades that children are skilled at absorbing certain tricky elements of language, such as irregular past participles (examples of which, in English, include “gone” and “been”) or complicated verb tenses like the subjunctive.
“Children will ultimately perform better than adults in terms of their command of the grammar and the structural components of language — some of the more idiosyncratic, difficult-to-articulate aspects of language that even most native speakers don’t have conscious awareness of,” Finn says.
In 1990, linguist Elissa Newport hypothesized that adults have trouble learning those nuances because they try to analyze too much information at once. Adults have a much more highly developed prefrontal cortex than children, and they tend to throw all of that brainpower at learning a second language. This high-powered processing may actually interfere with certain elements of learning language.
“It’s an idea that’s been around for a long time, but there hasn’t been any data that experimentally show that it’s true,” Finn says.
Finn and her colleagues designed an experiment to test whether exerting more effort would help or hinder success. First, they created nine nonsense words, each with two syllables. Each word fell into one of three categories (A, B, and C), defined by the order of consonant and vowel sounds.
Study subjects listened to the artificial language for about 10 minutes. One group of subjects was told not to overanalyze what they heard, but not to tune it out either. To help them not overthink the language, they were given the option of completing a puzzle or coloring while they listened. The other group was told to try to identify the words they were hearing.
Each group heard the same recording, which was a series of three-word sequences — first a word from category A, then one from category B, then category C — with no pauses between words. Previous studies have shown that adults, babies, and even monkeys can parse this kind of information into word units, a task known as word segmentation.
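The standard account of how listeners pull words out of such a pause-free stream is transitional probability: syllable-to-syllable predictability is high inside a word and drops at word boundaries. A toy version, using made-up syllables rather than the study's actual stimuli:

```python
# Toy statistical word segmentation: transitional probability P(next | current)
# is high within a word and drops at word boundaries. The nine two-syllable
# "words" below are invented and are not the study's stimuli.
import random
from collections import Counter
random.seed(1)

words = {"A": ["bapu", "tiku", "mofa"],
         "B": ["gola", "depi", "runo"],
         "C": ["sive", "kabu", "lote"]}

# Build a pause-free stream of A-B-C triples, split into syllables.
stream = []
for _ in range(500):
    for category in "ABC":
        word = random.choice(words[category])
        stream += [word[:2], word[2:]]

pair_counts = Counter(zip(stream, stream[1:]))
syllable_counts = Counter(stream[:-1])

def tp(s1, s2):
    """Estimated transitional probability P(s2 | s1) from the stream."""
    return pair_counts[(s1, s2)] / syllable_counts[s1]

print(tp("ba", "pu"))            # 1.0: within a word, the next syllable is certain
print(round(tp("pu", "go"), 2))  # about 0.33: at a boundary, any B word may follow
```

Cutting the stream wherever the transitional probability dips recovers the word units, which is the kind of segmentation adults, infants, and even monkeys have been shown to perform.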
Subjects from both groups were successful at word segmentation, although the group that tried harder performed a little better. Both groups also performed well in a task called word ordering, which required subjects to choose between a correct word sequence (ABC) and an incorrect sequence (such as ACB) of words they had previously heard.
The final test measured skill in identifying the language’s morphology. The researchers played a three-word sequence that included a word the subjects had not heard before, but which fit into one of the three categories. When asked to judge whether this new word was in the correct location, the subjects who had been asked to pay closer attention to the original word stream performed much worse than those who had listened more passively.
“This research is exciting because it provides evidence indicating that effortful learning leads to different results depending upon the kind of information learners are trying to master,” says Michael Ramscar, a professor of linguistics at the University of Tübingen who was not part of the research team. “The results indicate that learning to identify relatively simple parts of language, such as words, is facilitated by effortful learning, whereas learning more complex aspects of language, such as grammatical features, is impeded by effortful learning.”
Turning off effort
The findings support a theory of language acquisition that suggests that some parts of language are learned through procedural memory, while others are learned through declarative memory. Under this theory, declarative memory, which stores knowledge and facts, would be more useful for learning vocabulary and certain rules of grammar. Procedural memory, which guides tasks we perform without conscious awareness of how we learned them, would be more useful for learning subtle rules related to language morphology.
“It’s likely to be the procedural memory system that’s really important for learning these difficult morphological aspects of language. In fact, when you use the declarative memory system, it doesn’t help you, it harms you,” Finn says.
Still unresolved is the question of whether adults can overcome this language-learning obstacle. Finn says she does not have a good answer yet but she is now testing the effects of “turning off” the adult prefrontal cortex using a technique called transcranial magnetic stimulation. Other interventions she plans to study include distracting the prefrontal cortex by forcing it to perform other tasks while language is heard, and treating subjects with drugs that impair activity in that brain region.

Try, try again? Study says no

When it comes to learning languages, adults and children have different strengths. Adults excel at absorbing the vocabulary needed to navigate a grocery store or order food in a restaurant, but children have an uncanny ability to pick up on subtle nuances of language that often elude adults. Within months of living in a foreign country, a young child may speak a second language like a native speaker.

Brain structure plays an important role in this “sensitive period” for learning language, which is believed to end around adolescence. The young brain is equipped with neural circuits that can analyze sounds and build a coherent set of rules for constructing words and sentences out of those sounds. Once these language structures are established, it’s difficult to build another one for a new language.

In a new study, a team of neuroscientists and psychologists led by Amy Finn, a postdoc at MIT’s McGovern Institute for Brain Research, has found evidence for another factor that contributes to adults’ language difficulties: When learning certain elements of language, adults’ more highly developed cognitive skills actually get in the way. The researchers discovered that the harder adults tried to learn an artificial language, the worse they were at deciphering the language’s morphology — the structure and deployment of linguistic units such as root words, suffixes, and prefixes.

“We found that effort helps you in most situations, for things like figuring out what the units of language that you need to know are, and basic ordering of elements. But when trying to learn morphology, at least in this artificial language we created, it’s actually worse when you try,” Finn says.

Finn and colleagues from the University of California at Santa Barbara, Stanford University, and the University of British Columbia describe their findings in the July 21 issue of PLoS One. Carla Hudson Kam, an associate professor of linguistics at British Columbia, is the paper’s senior author.

Too much brainpower

Linguists have known for decades that children are skilled at absorbing certain tricky elements of language, such as irregular past participles (examples of which, in English, include “gone” and “been”) or complicated verb tenses like the subjunctive.

“Children will ultimately perform better than adults in terms of their command of the grammar and the structural components of language — some of the more idiosyncratic, difficult-to-articulate aspects of language that even most native speakers don’t have conscious awareness of,” Finn says.

In 1990, linguist Elissa Newport hypothesized that adults have trouble learning those nuances because they try to analyze too much information at once. Adults have a much more highly developed prefrontal cortex than children, and they tend to throw all of that brainpower at learning a second language. This high-powered processing may actually interfere with certain elements of learning language.

“It’s an idea that’s been around for a long time, but there hasn’t been any data that experimentally show that it’s true,” Finn says.

Finn and her colleagues designed an experiment to test whether exerting more effort would help or hinder success. First, they created nine nonsense words, each with two syllables. Each word fell into one of three categories (A, B, and C), defined by the order of consonant and vowel sounds.

Study subjects listened to the artificial language for about 10 minutes. One group of subjects was told not to overanalyze what they heard, but not to tune it out either. To help them not overthink the language, they were given the option of completing a puzzle or coloring while they listened. The other group was told to try to identify the words they were hearing.

Each group heard the same recording, which was a series of three-word sequences — first a word from category A, then one from category B, then category C — with no pauses between words. Previous studies have shown that adults, babies, and even monkeys can parse this kind of information into word units, a task known as word segmentation.
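The stimulus design described above can be sketched in a few lines of code. Note that the word forms below are invented placeholders; the article does not list the study's actual nine nonsense words, only that each was two syllables and belonged to one of three categories:

```python
import random

# Hypothetical stand-ins for the study's nine two-syllable nonsense words,
# three per category (A, B, C). The real stimuli are not given in the article.
CATEGORIES = {
    "A": ["bupo", "dila", "gemi"],
    "B": ["kasu", "tibo", "nuve"],
    "C": ["rofa", "lemu", "sadi"],
}

def make_stream(n_sequences, rng=random):
    """Build a continuous stream of n three-word A-B-C sequences,
    concatenated with no pauses (spaces) between words."""
    words = []
    for _ in range(n_sequences):
        for category in ("A", "B", "C"):
            words.append(rng.choice(CATEGORIES[category]))
    return "".join(words)

stream = make_stream(5)
print(stream)
```

Listeners performing word segmentation must recover the word boundaries from this unbroken stream using only the statistics of the syllables, since nothing in the signal marks where one word ends and the next begins.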

Subjects from both groups were successful at word segmentation, although the group that tried harder performed a little better. Both groups also performed well in a task called word ordering, which required subjects to choose between a correct word sequence (ABC) and an incorrect sequence (such as ACB) of words they had previously heard.

The final test measured skill in identifying the language’s morphology. The researchers played a three-word sequence that included a word the subjects had not heard before, but which fit into one of the three categories. When asked to judge whether this new word was in the correct location, the subjects who had been asked to pay closer attention to the original word stream performed much worse than those who had listened more passively.

“This research is exciting because it provides evidence indicating that effortful learning leads to different results depending upon the kind of information learners are trying to master,” says Michael Ramscar, a professor of linguistics at the University of Tübingen who was not part of the research team. “The results indicate that learning to identify relatively simple parts of language, such as words, is facilitated by effortful learning, whereas learning more complex aspects of language, such as grammatical features, is impeded by effortful learning.”

Turning off effort

The findings support a theory of language acquisition that suggests that some parts of language are learned through procedural memory, while others are learned through declarative memory. Under this theory, declarative memory, which stores knowledge and facts, would be more useful for learning vocabulary and certain rules of grammar. Procedural memory, which guides tasks we perform without conscious awareness of how we learned them, would be more useful for learning subtle rules related to language morphology.

“It’s likely to be the procedural memory system that’s really important for learning these difficult morphological aspects of language. In fact, when you use the declarative memory system, it doesn’t help you, it harms you,” Finn says.

Still unresolved is the question of whether adults can overcome this language-learning obstacle. Finn says she does not have a good answer yet but she is now testing the effects of “turning off” the adult prefrontal cortex using a technique called transcranial magnetic stimulation. Other interventions she plans to study include distracting the prefrontal cortex by forcing it to perform other tasks while language is heard, and treating subjects with drugs that impair activity in that brain region.

Filed under language learning procedural memory prefrontal cortex linguistics psychology neuroscience science

131 notes

Twin study suggests language delay due more to nature than nurture

A study of 473 sets of twins followed since birth found that 47 percent of identical twins showed language delay at 24 months, compared with 31 percent of nonidentical twins. Overall, twins had twice the rate of late language emergence of single-born children. None of the children had disabilities affecting language acquisition.


The results of the study were published in the June 2014 Journal of Speech, Language, and Hearing Research.

University of Kansas Distinguished Professor Mabel Rice, lead author, said that all of the language traits analyzed in the study—vocabulary, combining words and grammar—were significantly heritable, with genes accounting for about 43 percent of the twins’ overall deficit.

The “twinning effect” — a lower level of language performance for twins than single-born children — was expected to be comparable for both kinds of twins, but was greater for identical twins, said Rice, strengthening the case for the heritability of language development.

“This finding disputes hypotheses that attribute delays in early language acquisition of twins to mothers whose attention is reduced due to the demands of caring for two toddlers,” Rice said. “This should reassure busy parents who worry about giving sufficient individual attention to each child.”

However, said Rice, prematurity and birth complications, more common in identical twins, could also affect their higher rates of language delay. A study of pregnancy and birth risks for late talking in twins is currently under way by the study authors.

Further, the study will run at least until 2017, following the twins through the preschool and school years up to adolescence to answer the question of whether late-talking twins catch up to their peers.

“Twin studies provide unique opportunities to study inherited and environmental contributions to language acquisition,” Rice said. “The outcomes inform our understanding of how these influences contribute to language acquisition in single-born children as well.”

Late language emergence means that a child’s language is below age and gender expectations in the number of words spoken and in combining two or more words into sentences. In this study, 71 percent of 2-year-old twins were not combining words, compared with 17 percent of single-born children.

While previous behavioral genetics studies of toddlers have largely focused on vocabulary, the researchers introduced an innovative measure of early grammatical ability based on correct use of the past tense and of the verbs “to be” and “to do”. The measure was inspired by the Rice/Wexler Test of Early Grammar Impairment, developed in 2001 by Rice and Kenneth Wexler, Massachusetts Institute of Technology professor. It was the first test to detect the subtle but common language disorder, specific language impairment.

Rice’s collaborators in the international longitudinal project that began in 2002 are Professors Cate Taylor and Stephen Zubrick from the Telethon Kids Institute in Perth, Western Australia, and Professor Shelley Smith at the University of Nebraska Medical Center.

The study population is located in the vicinity of Perth, Western Australia, because it is demographically similar to Kansas City and other parts of the U.S. Midwest. In Australia, however, health records are readily available, and the Western Australia Twin Registry is a unique resource for researchers since it records all multiple births, Rice said.

The research group has followed the development of 1,000 sets of Western Australian twins from their first words. In 2012, the group was granted $2.8 million by the National Institute on Deafness and Other Communication Disorders for a fourth five-year cycle that will enable researchers to continue to monitor the twins as they develop through adolescence. In addition to formal language tests, researchers have collected genetic and environmental data as well as assessments with the twins’ siblings.

(Source: news.ku.edu)

Filed under language language acquisition heritability twinning effect neuroscience science
