Neuroscience

Articles and news from the latest research reports.

Posts tagged language development

312 notes

Improving Babies’ Language Skills Before They’re Even Old Enough to Speak

In the first months of life, when babies begin to distinguish the sounds that make up language from all the other sounds in the world, they can be trained to recognize more effectively which sounds might be language, accelerating the development of the brain maps critical to language acquisition and processing, according to new Rutgers research.

The study by April Benasich and colleagues of Rutgers University-Newark is published in the October 1 issue of the Journal of Neuroscience.

The researchers found that when 4-month-old babies learned to pay attention to increasingly complex non-language audio patterns, shifting their eyes toward a video reward whenever a sound changed slightly, their EEG recordings at 7 months showed they were faster and more accurate at detecting other sounds important to language than babies who had not been exposed to the sound patterns.

“Young babies are constantly scanning the environment to identify sounds that might be language,” says Benasich, who directs the Infancy Studies Laboratory at the University’s Center for Molecular and Behavioral Neuroscience. “This is one of their key jobs – as between 4 and 7 months of age they are setting up their pre-linguistic acoustic maps. We gently guided the babies’ brains to focus on the sensory inputs which are most meaningful to the formation of these maps.” 

Acoustic maps are pools of interconnected brain cells that an infant brain constructs to allow it to decode language both quickly and automatically – and well-formed maps allow faster and more accurate processing of language, a function that is critical to optimal cognitive functioning. Benasich says babies of this particular age may be ideal for this kind of training.

“If you shape something while the baby is actually building it,” she says, “it allows each infant to build the best possible auditory network for his or her particular brain. This provides a stronger foundation for any language (or languages) the infant will be learning. Compare the baby’s reactions to language cues to an adult driving a car. You don’t think about specifics like stepping on the gas or using the turn signal. You just perform them. We want the babies’ recognition of any language-specific sounds they hear to be just that automatic.”

Benasich says she was able to accelerate and optimize the construction of babies’ acoustic maps, as compared to those of infants who either passively listened or received no training, by rewarding the babies with a brief colorful video when they responded to changes in the rapidly varying sound patterns. The sound changes could take just tens of milliseconds, and became more complex as the training progressed.
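
As a purely schematic illustration (not the lab’s actual software or protocol), the sketch below captures the logic described above: rapidly varying sound patterns, occasional subtle deviants, a brief video reward for a correctly timed gaze shift, and difficulty that ramps up with success. Every name, probability and threshold here is a hypothetical stand-in.

    import random

    def run_training_session(n_trials=40, detect_gaze_shift=None):
        """Schematic operant-conditioning loop; all details are invented."""
        if detect_gaze_shift is None:
            # Stand-in for an eye tracker watching the video screen
            detect_gaze_shift = lambda: random.random() < 0.5
        complexity = 1   # sound patterns grow more complex as training progresses
        hits = 0
        for _ in range(n_trials):
            is_deviant = random.random() < 0.25   # occasional slight sound change
            # present_pattern(complexity, deviant=is_deviant)  <- audio playback stub
            if is_deviant and detect_gaze_shift():
                # play_video_reward()  <- the brief colorful video described above
                hits += 1
                if hits % 5 == 0:
                    complexity += 1   # harden the patterns after repeated success
        return hits, complexity

    print(run_training_session())

A real implementation would synchronize millisecond-scale audio timing with an eye tracker; this stub only captures the reward contingency the researchers describe.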

Looking for lasting improvement in language skills

“While playing this fun game we can convey to the baby, ‘Pay attention to this. This is important. Now pay attention to this. This is important,’” says Benasich. “This process helps the baby to focus tightly on sounds in the environment that ‘may’ have critical information about the language they are learning. Previous research has shown that accurate processing of these tens-of-milliseconds differences in infancy is highly predictive of the child’s language skills at 3, 4 and 5 years.”

The experiment has the potential to provide lasting benefits. The EEG (electroencephalogram) scans showed the babies’ brains processed sound patterns with increasing efficiency at 7 months of age after six weekly training sessions. The research team will follow these infants through 18 months of age to see whether they retain and build upon these abilities with no further training. That outcome would suggest to Benasich that once the child’s earliest acoustic maps are formed in the most optimal way, the benefits will endure.  

Benasich says this training has the potential to advance the development of typically developing babies as well as children at higher risk for developmental language difficulties. For parents who think this might turn their babies into geniuses, the answer is – not necessarily. Benasich compares the process of enhancing acoustic maps to some people’s wishes to be taller. “There’s a genetic range to how tall you become – perhaps you have the capacity to be 5’6” to 5’9”,” she explains. “If you get the right amounts and types of food, the right environment, the right exercise, you might get to 5’9” but you wouldn’t be 6 feet. The same principle applies here.”

Benasich says it’s very likely that one day parents at home will be able to use an interactive toy-like device – now under development – to mirror what she accomplished in the baby lab and maximize their babies’ potential. For the 8 to 15 percent of infants at highest risk for poor acoustic processing and subsequent delayed language, this baby-friendly behavioral intervention could have far-reaching implications and may offer the promise of improving or perhaps preventing language difficulties.

Filed under language language development EEG cognitive function sound processing neuroscience science

173 notes

Infant Cooing, Babbling Linked to Hearing Ability

Infants’ vocalizations throughout the first year follow a set of predictable steps from crying and cooing to forming syllables and first words. However, previous research had not addressed how the amount of vocalizations may differ between hearing and deaf infants. Now, University of Missouri research shows that infant vocalizations are primarily motivated by infants’ ability to hear their own babbling. Additionally, infants with profound hearing loss who received cochlear implants to help correct their hearing soon reached the vocalization levels of their hearing peers, putting them on track for language development.

“Hearing is a critical aspect of infants’ motivation to make early sounds,” said Mary Fagan, an assistant professor in the Department of Communication Science and Disorders in the MU School of Health Professions. “This study shows babies are interested in speech-like sounds and that they increase their babbling when they can hear.”

Fagan studied the vocalizations of 27 hearing infants and 16 infants with profound hearing loss who were candidates for cochlear implants, which are small electronic devices embedded into the bone behind the ear that replace some functions of the damaged inner ear. She found that infants with profound hearing loss vocalized significantly less than hearing infants. However, when the infants with profound hearing loss received cochlear implants, the infants’ vocalizations increased to the same levels as their hearing peers within four months of receiving the implants.

“After the infants received their cochlear implants, the significant difference in overall vocalization quantity was no longer evident,” Fagan said. “These findings support the importance of early hearing screenings and early cochlear implantation.”

Fagan found that non-speech-like sounds, such as crying, laughing and raspberry sounds, were not affected by infants’ hearing ability. She says this finding highlights that babies are more interested in speech-like sounds, since they increase their production of sounds such as babbling when they can hear.

“Babies learn so much through sound in the first year of their lives,” Fagan said. “We know learning from others is important to infants’ development, but hearing allows infants to explore their own vocalizations and learn through their own capacity to produce sounds.”

In future research, Fagan hopes to study whether infants explore the sounds of objects such as musical toys to the same degree they explore vocalization.

Fagan’s research, “Frequency of vocalization before and after cochlear implantation: Dynamic effect of auditory feedback on infant behavior,” was published in the Journal of Experimental Child Psychology.

Filed under hearing cochlear implant vocalizations language development psychology neuroscience science

115 notes

Presence or absence of early language delay alters anatomy of the brain in autism

A new study led by researchers from the University of Cambridge has found that a common characteristic of autism – language delay in early childhood – leaves a ‘signature’ in the brain. The results are published today (23 September) in the journal Cerebral Cortex.

The researchers studied 80 adult men with autism: 38 who had delayed language onset and 42 who did not. They found that language delay was associated with differences in brain volume in a number of key regions: the temporal lobe, insula and ventral basal ganglia were all smaller in those with language delay, while brainstem structures were larger in those with delayed language onset.

Additionally, they found that current language function is associated with a specific pattern of grey and white matter volume changes in some key brain regions, particularly temporal, frontal and cerebellar structures.

The Cambridge researchers, in collaboration with King’s College London and the University of Oxford, studied participants who were part of the MRC Autism Imaging Multicentre Study (AIMS).

Delayed language onset – defined as when a child’s first meaningful words occur after 24 months of age, or their first phrase occurs after 33 months of age – is seen in a subgroup of children with autism, and is one of the clearest features triggering an assessment for developmental delay in children, including an assessment of autism.
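
Because the criterion above is fully specified by two cut-offs, it reduces to a one-line rule; the sketch below simply restates the definition in code, with an invented function name and input format.

    def delayed_language_onset(first_words_months, first_phrase_months):
        """Definition used above: first meaningful words after 24 months,
        or first phrase after 33 months."""
        return first_words_months > 24 or first_phrase_months > 33

    # e.g. first words at 26 months, first phrase at 30 months -> delayed onset
    print(delayed_language_onset(26, 30))  # True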

“Although people with autism share many features, they also have a number of key differences,” said Dr Meng-Chuan Lai of the Cambridge Autism Research Centre, and the paper’s lead author. “Language development and ability is one major source of variation within autism. This new study will help us understand the substantial variety within the umbrella category of ‘autism spectrum’. We need to move beyond investigating average differences in individuals with and without autism, and move towards identifying key dimensions of individual differences within the spectrum.”

He added: “This study shows how the brain in men with autism varies based on their early language development and their current language functioning. This suggests there are potentially long-lasting effects of delayed language onset on the brain in autism.”

Last year, the American Psychiatric Association removed Asperger Syndrome (Asperger’s Disorder) as a separate diagnosis from its diagnostic manual (DSM-5), and instead subsumed it within ‘autism spectrum disorder.’ The change was one of many controversial decisions in DSM-5, the main manual for diagnosing psychiatric conditions.

“This new study shows that a key feature of Asperger Syndrome, the absence of language delay, leaves a long lasting neurobiological signature in the brain,” said Professor Simon Baron-Cohen, senior author of the study. “Although we support the view that autism lies on a spectrum, subgroups based on developmental characteristics, such as Asperger Syndrome, warrant further study.”

“It is important to note that we found both differences and shared features in individuals with autism who had or had not experienced language delay,” said Dr Lai. “When asking: ‘Is autism a single spectrum or are there discrete subgroups?’ – the answer may be both.”

Filed under autism language language development brain volume individual differences neuroscience science

542 notes

Months before their first words, babies’ brains rehearse speech mechanics

Infants can tell the difference between sounds of all languages until about 8 months of age when their brains start to focus only on the sounds they hear around them. It’s been unclear how this transition occurs, but social interactions and caregivers’ use of exaggerated “parentese” style of speech seem to help.

University of Washington research in 7- and 11-month-old infants shows that speech sounds stimulate areas of the brain that coordinate and plan motor movements for speech.

The study, published July 14 in the Proceedings of the National Academy of Sciences, suggests that baby brains start laying down the groundwork of how to form words long before they actually begin to speak, and this may affect the developmental transition.

“Most babies babble by 7 months, but don’t utter their first words until after their first birthdays,” said lead author Patricia Kuhl, who is the co-director of the UW’s Institute for Learning and Brain Sciences. “Finding activation in motor areas of the brain when infants are simply listening is significant, because it means the baby brain is engaged in trying to talk back right from the start and suggests that 7-month-olds’ brains are already trying to figure out how to make the right movements that will produce words.”

Kuhl and her research team believe this practice at motor planning contributes to the transition when infants become more sensitive to their native language.

The results emphasize the importance of talking to kids during social interactions even if they aren’t talking back yet.

“Hearing us talk exercises the action areas of infants’ brains, going beyond what we thought happens when we talk to them,” Kuhl said. “Infants’ brains are preparing them to act on the world by practicing how to speak before they actually say a word.”

In the experiment, infants sat in a brain scanner that measures brain activation through a noninvasive technique called magnetoencephalography. Nicknamed MEG, the brain scanner resembles an egg-shaped vintage hair dryer and is completely safe for infants. The Institute for Learning and Brain Sciences was the first in the world to use such a tool to study babies while they engaged in a task.

The 57 babies, who were either 7 months or 11 to 12 months old, each listened to a series of native- and foreign-language syllables, such as “da” and “ta”, while researchers recorded their brain responses. They listened to sounds in English and in Spanish.

The researchers observed brain activity in an auditory area of the brain called the superior temporal gyrus, as well as in Broca’s area and the cerebellum, regions responsible for planning the motor movements required for producing speech.

This pattern of brain activation occurred for sounds in the 7-month-olds’ native language (English) as well as in a non-native language (Spanish), showing that at this early age infants are responding to all speech sounds, whether or not they have heard the sounds before.

In the older infants, brain activation was different. By 11 to 12 months, infants’ brains showed increased motor activation for non-native speech sounds relative to native speech, which the researchers interpret as showing that it takes more effort for the baby brain to predict which movements create non-native speech. This reflects an effect of experience between 7 and 11 months, and suggests that activation in motor brain areas contributes to the transition in early speech perception.

The study has social implications, suggesting that the slow and exaggerated parentese speech – “Hiiiii! How are youuuuu?” – may actually prompt infants to try to synthesize utterances themselves and imitate what they heard, uttering something like “Ahhh bah bah baaah.”

“Parentese is very exaggerated, and when infants hear it, their brains may find it easier to model the motor movements necessary to speak,” Kuhl said.

Filed under infants speech speech perception language development brain activity psychology neuroscience science

157 notes

Gender and genes play an important role in delayed language development

Boys are at greater risk for delayed language development than girls, according to a new study using data from the Norwegian Mother and Child Cohort Study. The researchers also found that reading and writing difficulties in the family gave an increased risk.

“We show for the first time that reading and writing difficulties in the family can be the main reason why a child has a speech delay that first begins between three to five years of age,” says Eivind Ystrøm, senior researcher at the Norwegian Institute of Public Health.

Ystrøm was supervisor of Imac Maria Zambrana, a former PhD student at the Norwegian Institute of Public Health who conducted the research in this study as part of her doctoral research.

The researchers used data from questionnaires completed by the mothers who are participating in the Norwegian Mother and Child Cohort Study (MoBa). The study included more than 10,000 children from week 17 of pregnancy up to five years of age.

“MoBa is a large study with a normal cross-section of the population. It gives us a unique opportunity to examine changes over time, the scope and any risk factors for delayed language development,” says Ystrøm.

Mostly boys

The researchers classified the language difficulties at three and five years of age into three groups: persistent delayed language development (present at both ages), transient delayed language development (present only at three years) and delayed language development first identified at five years of age.
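
Since group membership depends only on whether a delay is present at each of the two ages, the grouping can be written as a tiny lookup. The sketch below restates the three definitions above; the names are invented for illustration.

    def classify_trajectory(delayed_at_3, delayed_at_5):
        """Map delay status at ages 3 and 5 onto the study's three groups."""
        if delayed_at_3 and delayed_at_5:
            return "persistent"   # present at both ages
        if delayed_at_3:
            return "transient"    # present only at age 3
        if delayed_at_5:
            return "late onset"   # first identified at age 5
        return "typical"          # no delay at either age

    print(classify_trajectory(True, False))   # transient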

Boys are in the majority in the groups with persistent and transient language difficulties. Ystrøm explains that boys are biologically at greater risk for developmental disorders in utero than girls. British scientists have measured the male sex hormone (testosterone) in amniotic fluid and found that its levels were related to the development of both autism and language disorders. Ystrøm points out that boys are generally a little later in language development than girls, but that most catch up during the first year. This may explain why boys are over-represented both among children with persistent language impairment and among those with transient language difficulties that disappear before school age.

The researchers found that gender was irrelevant for the third group who have language difficulties that begin sometime between three and five years of age.

Hereditary factors

We have good knowledge about normal language development in children. Many genes are important for language development and research suggests that different genes are involved in different types of language difficulty.

“Reading and writing difficulties in the family are the predominant risk factors for late-onset language difficulties. We see no language problems when the child is between 18 months and three years old. They are latent,” says Ystrøm.

The researchers believe that both specific genes and factors in the child’s external environment can lead to delays in language development at three to five years of age.

What can we do?

Ystrøm believes that children with delayed language development must be identified as early as possible. Parents, health care workers and child care staff should be aware of children’s language development and encourage an enabling language environment, in some cases with specially adapted measures. In particular, they must be aware of children who have persistent difficulties, or who have had normal language development up to three years and then unexpectedly begin to have difficulties.

“Professionals and caregivers must be vigilant. It is difficult to detect language difficulties as language becomes more complex in older children. They must be trained so that they are confident in how to spot language difficulties and how to encourage a child’s language. We need more research into the needs of children with different trajectories,” says Ystrøm.

Parents who are concerned about their child’s language development should consult their doctor. They should also raise the issue at the regular check-ups at the health clinic when the child is between two and four years old.

“The checks must take place at the appropriate time. It is important that they are not delayed or not implemented at all,” says Ystrøm.

A few years ago, a survey by the Health and Welfare Department in Oslo showed that few of the health centres in Oslo provided all 14 of the consultations for each child, from birth to school age, that are stipulated by the Norwegian Directorate of Health.

Further research

In addition to researchers at the Norwegian Institute of Public Health, researchers at the University of Oslo and the University of Melbourne in Australia participated in this study. The work is funded by the Extra Foundation for Health and Rehabilitation.

“We hope to continue this research and specifically look at the relationship between gender and language. We need more research into the needs of children with various types of language delay,” says Eivind Ystrøm.

Reference

Zambrana, I.M., Pons, F., Eadie, P. and Ystrom, E. (2013). Trajectories of language delay from age 3 to 5: persistence, recovery and late onset. International Journal of Language & Communication Disorders.

(Source: fhi.no)

Filed under language development language difficulties individual differences genetics neuroscience science

288 notes

Babbling babies – responding to one-on-one ‘baby talk’ – master more words

Common advice to new parents is that the more words babies hear the faster their vocabulary grows. Now new findings show that what spurs early language development isn’t so much the quantity of words as the style of speech and social context in which speech occurs.

Researchers at the University of Washington and University of Connecticut examined thousands of 30-second snippets of verbal exchanges between parents and babies. They measured parents’ use of a regular speaking voice versus an exaggerated, animated baby talk style, and whether speech occurred one-on-one between parent and child or in group settings.

“What our analysis shows is that the prevalence of baby talk in one-on-one conversations with children is linked to better language development, both concurrent and future,” said Patricia Kuhl, co-author and co-director of UW’s Institute for Learning & Brain Sciences.

The more parents exaggerated vowels – for example “How are youuuuu?” – and raised the pitch of their voices, the more the 1-year-olds babbled, and babbling is a forerunner of word production. Baby talk was most effective when a parent spoke with a child individually, without other adults or children around.

“The fact that the infant’s babbling itself plays a role in future language development shows how important the interchange between parent and child is,” Kuhl said.

The findings will be published in an upcoming issue of the journal Developmental Science.

Twenty-six babies about 1 year of age wore vests containing audio recorders that collected sounds from the children’s auditory environment for eight hours a day for four days. The researchers used LENA (“language environment analysis”) software to examine 4,075 30-second intervals of recorded speech. Within those segments, the researchers identified who was talking in each segment, how many people were there, whether baby talk – also known as “parentese” – or regular voice was used, and other variables.
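
To make the coding scheme concrete, here is a minimal sketch of how per-segment annotations like these might be tabulated into each family’s one-on-one parentese exposure. The segment format, field names and values are invented for illustration; the study’s actual LENA-based pipeline is not described here.

    from collections import Counter

    # Hypothetical annotations for 30-second segments: (family, style, one_on_one)
    segments = [
        ("fam01", "parentese", True),
        ("fam01", "regular",   False),
        ("fam02", "parentese", False),
        ("fam01", "parentese", True),
        ("fam02", "regular",   True),
    ]

    parentese_1on1 = Counter()
    totals = Counter()
    for family, style, one_on_one in segments:
        totals[family] += 1
        if style == "parentese" and one_on_one:
            parentese_1on1[family] += 1

    # Fraction of segments that were one-on-one parentese, per family
    for family in sorted(totals):
        print(family, parentese_1on1[family] / totals[family])

A per-family measure along these lines could then be related to the parent-reported vocabulary counts at age 2 described below.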

When the babies were 2 years old, parents filled out a questionnaire measuring how many words their children knew. Infants who had heard more baby talk knew more words. In the study, 2-year-olds in families who spoke the most baby talk in a one-on-one social context knew 433 words, on average, compared with the 169 words recognized by 2-year-olds in families who used the least baby talk in one-on-one situations.

The relationship between baby talk and language development held across socioeconomic status, and it emerged even though only 26 families took part in the study.

“Some parents produce baby talk naturally and they don’t realize they’re benefiting their children,” said first author Nairán Ramírez-Esparza, an assistant psychology professor at the University of Connecticut. “Some families are more quiet, not talking all the time. But it helps to make an effort to talk more.”

Previous studies have focused on the amount of language babies hear, without considering the social context. The new study shows that quality, not quantity, is what matters.

“What this study is adding is that how you talk to children matters. Parentese is much better at developing language than regular speech, and even better if it occurs in a one-on-one interaction,” Ramírez-Esparza said.

Parents can use baby talk when going about everyday activities, saying things like, “Where are your shoooes?,” “Let’s change your diiiiaper,” and “Oh, this tastes goooood!,” emphasizing important words and speaking slowly using a happy tone of voice.

“It’s not just talk, talk, talk at the child,” said Kuhl. “It’s more important to work toward interaction and engagement around language. You want to engage the infant and get the baby to babble back. The more you get that serve and volley going, the more language advances.”

Filed under language development speech learning baby talk psychology neuroscience science

207 notes

iPads help late-speaking children with autism develop language

The iPad you use to check email, watch episodes of Mad Men and play Words with Friends may hold the key to enabling children with autism spectrum disorders to express themselves through speech. New research indicates that children with autism who are minimally verbal can learn to speak later than previously thought, and iPads are playing an increasing role in making that happen, according to Ann Kaiser, a researcher at Vanderbilt Peabody College of Education and Human Development.

In a study funded by Autism Speaks, Kaiser found that using speech-generating devices to encourage children ages 5 to 8 to develop speaking skills resulted in the children producing considerably more spoken words than other interventions did. All of the children in the study learned new spoken words and several learned to produce short sentences as they moved through the training.

“For some parents, it was the first time they’d been able to converse with their children,” said Kaiser, Susan W. Gray Professor of Education and Human Development. “With the onset of iPads, that kind of communication may become possible for greater numbers of children with autism and their families.”

Augmentative and alternative communication devices—which employ symbols, gestures, pictures and speech output—have been used for decades by people who have difficulty speaking. Now, with the availability of apps that emulate those devices, the iPad offers a more accessible, cheaper and more user-friendly way to help minimally verbal children with autism communicate. And the iPad is far less stigmatizing for young people with autism who rely on it to communicate with fellow students, teachers and friends.

The reason speech-generating devices like the iPad are effective in promoting language development is simple. “When we say a word it sounds a little different every time, and words blend together and take on slightly different acoustic characteristics in different contexts,” Kaiser explained. “Every time the iPad says a word, it sounds exactly the same, which is important for children with autism, who generally need things to be as consistent as possible.”

As many as a third of children with autism have mastery of only a few words by the time they are school age. Previously, researchers thought that if children with autism had not begun to speak by age 5 or 6, they were unlikely to acquire spoken language. But Kaiser is encouraged by study results and believes that her iPad studies may help change that notion.

Building on findings from this research, Kaiser has begun a new five-year study supported by the National Institutes of Health’s Autism Centers of Excellence with colleagues at UCLA, the University of Rochester, and Weill Cornell Medical College. She and a team of researchers and therapists at the four sites are using iPads in two contrasting interventions (direct teaching and naturalistic teaching) to evaluate the effectiveness of the two communication interventions for children who have autism and use minimal spoken language.

In the direct-teaching approach, children are taught prerequisite skills for communication (such as matching objects, motor imitation and verbal imitation) and basic communication skills (such as requesting objects) in a massed trial format. For example, an adult partner may present five to 10 consecutive opportunities for a child to use the iPad to request preferred objects. During these opportunities, the child is prompted to use the iPad to request and may receive physical assistance if he cannot use the iPad independently.

In the naturalistic-teaching approach, the adult models the use of the iPad during play and conversation. She also teaches turn-taking, use of gestures to communicate, play with objects and social attention to partners during the play. She provides a limited number of prompts to use the iPad to make choices, to comment or make new requests.

In both approaches, children touch the symbols on the screen, listen to the device repeat the words, and sometimes say the words themselves. They are encouraged to use both words and the iPad to communicate, and the adult therapist uses both modes of communication throughout the instructional sessions.

Results from the Autism Speaks study will be available in Spring 2014; the NIH study will continue through Spring 2017; and more information can be found at Kidtalk.org.

Filed under autism ASD language language development communication psychology neuroscience science

66 notes

Gene Found To Foster Synapse Formation In The Brain

Researchers at Johns Hopkins say they have found that a gene already implicated in human speech disorders and epilepsy is also needed for vocalizations and synapse formation in mice. The finding, they say, adds to scientific understanding of how language develops, as well as the way synapses — the connections among brain cells that enable us to think — are formed. A description of their experiments appears in Science Express on Oct. 31.

A group led by Richard Huganir, Ph.D., director of the Solomon H. Snyder Department of Neuroscience and a Howard Hughes Medical Institute investigator, set out to investigate genes involved in synapse formation. Gek-Ming Sia, Ph.D., a research associate in Huganir’s laboratory, first screened hundreds of human genes for their effects on lab-grown mouse brain cells. When one gene, SRPX2, was turned up higher than normal, it caused the brain cells to erupt with new synapses, Sia found.

When Huganir’s team injected fetal mice with an SRPX2-blocking compound, the mice showed fewer synapses than normal mice even as adults, the researchers found. In addition, when SRPX2-deficient mouse pups were separated from their mothers, they did not emit high-pitched distress calls as other pups do, indicating they lacked the rodent equivalent of early language ability.

Other researchers’ analyses of the human genome have found that mutations in SRPX2 are associated with language disorders and epilepsy. When Huganir’s team introduced human SRPX2 carrying those same mutations into fetal mice, the mice likewise showed vocalization deficits as young pups.

Another research group at Institut de Neurobiologie de la Méditerranée in France had previously shown that SRPX2 interacts with FoxP2, a gene that has gained wide attention for its apparently crucial role in language ability.

Huganir’s team confirmed this, showing that FoxP2 controls how much protein the SRPX2 gene makes and may affect language in this way. “FoxP2 is famous for its role in language, but it’s actually involved in other functions as well,” Huganir comments. “SRPX2 appears to be more specialized to language ability.” Huganir suspects that the gene may also be involved in autism, since autistic patients often have language impairments, and the condition has been linked to defects in synapse formation.

This study is only the beginning of teasing out how SRPX2 acts on the brain, Sia says. “We’d like to find out what other proteins it acts on, and how exactly it regulates synapses and enables language development.”

Filed under synapses language development autism epilepsy genetics neuroscience science

210 notes

Learning dialects shapes brain areas that process spoken language
Using advanced imaging to visualize brain areas used for understanding language in native Japanese speakers, a new study from the RIKEN Brain Science Institute finds that the pitch-accent in words pronounced in standard Japanese activates different brain hemispheres depending on whether the listener speaks standard Japanese or one of the regional dialects.

In the study published in the journal Brain and Language, Drs. Yutaka Sato, Reiko Mazuka and their colleagues examined whether speakers of a non-standard dialect use the same brain areas while listening to spoken words as native speakers of the standard dialect, or the areas used by someone who acquired a second language later in life.

When we hear language our brain dissects the sounds to extract meaning. However, two people who speak the same language may have trouble understanding each other due to regional accents, such as Australian and American English. In some languages, such as Japanese, these regional differences are more pronounced than an accent and are called dialects.

Unlike different languages, which may have major differences in grammar and vocabulary, the dialects of a language usually differ at the level of sounds and pronunciation. In Japan, in addition to the standard Japanese dialect, which uses a pitch-accent to distinguish identical words with different meanings, there are other regional dialects that do not.

Similar to the way that stress placement in an English word can change its meaning, as in “pro’duce” versus “produ’ce”, identical words in standard Japanese have different meanings depending on the pitch-accent. The syllables of a word can have either a high or a low pitch, and the combination of pitch-accents for a particular word imparts it with different meanings.

The experimental task was designed to test the participants’ responses when they distinguish three types of word pairs: (1) words such as /ame’/ (candy) versus /kame/ (jar) that differ in one sound, (2) words such as /ame’/ (candy) versus /a’me/ (rain) that differ in their pitch-accent, and (3) words such as /ame/ (candy in a declarative intonation) versus /ame?/ (candy in a question intonation).

RIKEN neuroscientists used near-infrared spectroscopy (NIRS) to examine whether the two brain hemispheres are activated differently in response to pitch changes embedded in a pair of words in standard and accent-less dialect speakers. This non-invasive way to visualize brain activity is based on the fact that when a brain area is active, blood supply increases locally in that area, and this increase can be detected with an infrared laser.
It is known that pitch changes activate both hemispheres, whereas word meaning is preferentially associated with the left-hemisphere. When the participants heard the word pair that differed in pitch-accent, /ame’/ (candy) vs /a’me/ (rain), the left hemisphere was predominantly activated in standard dialect speakers, whereas in accent-less dialect speakers did not show the left-dominant activation. Thus, standard Japanese speakers use the pitch-accent to understand the word meaning. However, accent-less dialect speakers process pitch changes similar to individuals who learn a second language later in life.
The results are surprising because both groups are native Japanese speakers who are familiar with the standard dialect. “Our study reveals that an individual’s language experience at a young age can shape the way languages are processed in the brain,” comments Dr. Sato. “Sufficient exposure to a language at a young age may change the processing of a second language so that it is the same as that of the native language.”

Learning dialects shapes brain areas that process spoken language

Using advanced imaging to visualize brain areas used for understanding language in native Japanese speakers, a new study from the RIKEN Brain Science Institute finds that the pitch-accent in words pronounced in standard Japanese activates different brain hemispheres depending on whether the listener speaks standard Japanese or one of the regional dialects.

In the study, published in the journal Brain and Language, Drs. Yutaka Sato, Reiko Mazuka and their colleagues examined whether speakers of a non-standard dialect used the same brain areas while listening to spoken words as native speakers of the standard dialect, or instead the areas used by someone who acquired a second language later in life.

When we hear language, our brain dissects the sounds to extract meaning. However, two people who speak the same language may have trouble understanding each other because of regional accents, as with Australian and American English. In some languages, such as Japanese, these regional differences go beyond accent and are called dialects.

Unlike different languages that may have major differences in grammar and vocabulary, the dialects of a language usually differ at the level of sounds and pronunciation. In Japan, in addition to the standard Japanese dialect, which uses a pitch-accent to distinguish identical words with different meanings, there are other regional dialects that do not.

Similar to the way that stress placement in an English word can change its meaning, as in “pro’duce” (the noun) versus “produ’ce” (the verb), identical words in standard Japanese have different meanings depending on their pitch-accent. Each syllable of a word can carry either a high or a low pitch, and the pattern of pitches across the word determines its meaning.

The experimental task was designed to test the participants’ responses as they distinguished three types of word pairs: (1) words such as /ame’/ (candy) versus /kame/ (jar) that differ in one sound, (2) words such as /ame’/ (candy) versus /a’me/ (rain) that differ in their pitch-accent, and (3) words such as /ame/ (candy, in a declarative intonation) versus /ame?/ (candy, in a question intonation).
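
For concreteness, here are the three contrasts restated as a small Python structure, following the article’s accent notation – the condition labels are mine, not the study’s.

```python
# The three word-pair types from the task, restated as data.
# The apostrophe marks the pitch-accent position, per the article's notation.
word_pairs = [
    {"type": "one sound",    "pair": ("/ame'/ (candy)", "/kame/ (jar)")},
    {"type": "pitch-accent", "pair": ("/ame'/ (candy)", "/a'me/ (rain)")},
    {"type": "intonation",   "pair": ("/ame/ (candy, statement)",
                                      "/ame?/ (candy, question)")},
]

for wp in word_pairs:
    print(f"{wp['type']:>12}: {wp['pair'][0]}  vs  {wp['pair'][1]}")
```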

RIKEN neuroscientists used Near Infrared Spectroscopy (NIRS) to examine whether the two brain hemispheres are activated differently in response to pitch changes embedded in a pair of words in standard and accent-less dialect speakers. This non-invasive way to visualize brain activity exploits the fact that when a brain area is active, the local blood supply increases, and this increase can be detected with near-infrared light.

It is known that pitch changes activate both hemispheres, whereas word meaning is preferentially associated with the left hemisphere. When the participants heard the word pair that differed in pitch-accent, /ame’/ (candy) vs /a’me/ (rain), the left hemisphere was predominantly activated in standard dialect speakers, whereas accent-less dialect speakers did not show this left-dominant activation. Thus, standard Japanese speakers use the pitch-accent to understand word meaning, while accent-less dialect speakers process pitch changes much as individuals who learn a second language later in life do.
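
Hemispheric dominance in imaging studies of this kind is commonly summarized with a laterality index, LI = (L − R) / (L + R), computed from response amplitudes over the two hemispheres: values near +1 indicate left dominance, values near 0 indicate bilateral processing. The sketch below applies that generic formula to invented numbers; it is not the paper’s actual analysis.

```python
# Laterality index over hemodynamic response amplitudes:
#   LI = (L - R) / (L + R)
# LI near +1 -> left-dominant; LI near 0 -> roughly bilateral.
# The amplitudes below are invented for illustration.

def laterality_index(left: float, right: float) -> float:
    return (left - right) / (left + right)

# Hypothetical responses to the pitch-accent pair /ame'/ vs /a'me/:
li_standard   = laterality_index(left=0.82, right=0.31)  # left-dominant
li_accentless = laterality_index(left=0.55, right=0.52)  # roughly bilateral

print(f"standard dialect:    LI = {li_standard:+.2f}")
print(f"accent-less dialect: LI = {li_accentless:+.2f}")
```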

The results are surprising because both groups are native Japanese speakers who are familiar with the standard dialect. “Our study reveals that an individual’s language experience at a young age can shape the way languages are processed in the brain,” comments Dr. Sato. “Sufficient exposure to a language at a young age may change the processing of a second language so that it is the same as that of the native language.”

Filed under language language development learning brain mapping neuroscience science

197 notes

From the mouths of babes – The truth about toddler talk

The sound of small children chattering has always been considered cute – but not particularly sophisticated. However, research by a Newcastle University expert has shown their speech is far more advanced than previously understood.


Dr Cristina Dye, a lecturer in child language development, found that two- to three-year-olds are using grammar far sooner than expected.

She studied fifty French-speaking youngsters aged between 23 and 37 months, capturing tens of thousands of their utterances.

Dr Dye, who carried out the research while at Cornell University in the United States, found that the children were using ‘little words’ that form the skeleton of sentences – such as ‘a’, ‘an’, ‘can’ and ‘is’ – far sooner than previously thought.

Dr Dye and her team used advanced recording technology, including highly sensitive microphones placed close to the children, to capture the precise sounds the children voiced. They spent years painstakingly analysing every minute sound made by the toddlers and the context in which it was produced.

They found a clear, yet previously undetected, pattern of sounds and puffs of air, which consistently replaced grammatical words in many of the children’s utterances.
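
The team’s acoustic analysis itself isn’t included in the release. As a rough sketch of how one might flag such quiet stretches in a recording, here is a short-time energy threshold in Python with NumPy – the frame size, threshold and test signal are all invented, and a real analysis would involve careful calibration and hand-checking of every utterance.

```python
import numpy as np

def low_energy_segments(signal, rate, frame_ms=25.0, threshold=0.01):
    """Return (start_s, end_s) spans whose RMS energy falls below threshold.

    A crude stand-in for spotting soft breaths or pauses in child speech.
    """
    frame = int(rate * frame_ms / 1000)
    n = len(signal) // frame
    rms = np.sqrt(np.mean(signal[:n * frame].reshape(n, frame) ** 2, axis=1))
    spans, start = [], None
    for i, quiet in enumerate(rms < threshold):
        if quiet and start is None:
            start = i
        elif not quiet and start is not None:
            spans.append((start * frame / rate, i * frame / rate))
            start = None
    if start is not None:
        spans.append((start * frame / rate, n * frame / rate))
    return spans

# Toy example: one second of noise with a quiet gap in the middle.
rate = 16000
sig = np.random.default_rng(0).normal(0, 0.1, rate)
sig[6000:8000] *= 0.01                  # simulate a soft breath / pause
print(low_energy_segments(sig, rate))   # ~ [(0.375, 0.5)]
```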

Dr Dye said: “Many of the toddlers we studied made a small sound, a soft breath, or a pause, at exactly the place that a grammatical word would normally be uttered.” 

“The fact that this sound was always produced in the correct place in the sentence leads us to believe that young children know about grammatical words. They are far more sophisticated in their grammatical competence than we ever understood.

“Despite the fact the toddlers we studied were acquiring French, our findings are expected to extend to other languages. I believe we should give toddlers more credit – they’re much more amazing than we realised.”

For decades the prevailing view among developmental specialists has been that children’s early word combinations are devoid of grammatical words. On this view, children then undergo a ‘tadpole to frog’ transformation in which, through some unknown mechanism, they suddenly start producing grammar in their speech. Dye’s results challenge that view.

Dr Dye said: “The research sheds light on a really important part of a child’s development. Language is one of the things that makes us human and understanding how we acquire it shows just how amazing children are.

“There are also implications for understanding language delay in children. When children don’t learn to speak normally, serious issues can follow: children with language delay are more likely to suffer from mental illness or be unemployed later in life. If we can establish what is ‘normal’ as early as possible, then we can intervene sooner to help those children.”

The research was originally published in the Journal of Linguistics.

(Source: ncl.ac.uk)

Filed under language development speech toddlers grammar auxiliaries semantics neuroscience psychology science
