Neuroscience

Articles and news from the latest research reports.

Posts tagged language acquisition

234 notes

The pleasure of learning new words
From our very first years, we are intrinsically motivated to learn new words and their meanings. First language acquisition unfolds within an ongoing emotional interaction between parents and children. However, the exact mechanism behind the human drive to acquire communicative linguistic skills has yet to be established.
In a study published in the journal Current Biology, researchers from the University of Barcelona (UB), the Bellvitge Biomedical Research Institute (IDIBELL) and the Otto von Guericke University Magdeburg (Germany) have shown experimentally that word learning in human adults activates not only cortical language regions but also the ventral striatum, a core region of reward processing. The results confirm that the motivation to learn is preserved throughout the lifespan, helping adults to acquire a second language.
Researchers determined that the reward region activated is the same one that responds to a wide range of stimuli, including food, sex, drugs and gambling. “The main objective of the study was to find out to what extent language learning activates subcortical reward and motivational systems”, explains Pablo Ripollés, PhD student at UB-IDIBELL and first author of the article. “Moreover, the idea that language could be favoured by this type of circuitry is an interesting hypothesis from an evolutionary point of view”, the researcher points out.
According to Antoni Rodríguez Fornells, UB lecturer and ICREA researcher at IDIBELL, “language has traditionally been located in an apparently encapsulated cortical structure that was never related to reward circuits, which are considered much older from an evolutionary perspective”. “The study”, he adds, “questions whether language comes only from cortical evolution or from structured mechanisms, and suggests that emotions may influence language acquisition processes”.
Subcortical areas are closely related to those that help to store information. As a result, facts or pieces of information that evoke an emotion are easier to remember and learn.
Motivation for learning a second language

By using diffusion tensor imaging, UB-IDIBELL researchers reconstructed the white matter pathways that link brain regions in each participant. They were then able to correlate the number of new words learnt by each person during the experiment with a white-matter myelin index, a measure of structural integrity. The results showed that subjects with higher myelin concentrations in the structures that carry information to the ventral striatum —in other words, those best connected to the reward area— learned more words.
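The analysis described above boils down to a correlation across subjects between a white-matter index and a behavioural score. A minimal sketch in Python; the variable names and toy numbers are illustrative, not the study's data:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mean_x, mean_y = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sx = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sy = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative toy data: per-subject myelin index of the tract feeding the
# ventral striatum, and the number of new words learned in the task.
myelin_index = [0.42, 0.55, 0.38, 0.61, 0.47, 0.58]
words_learned = [9, 14, 7, 16, 11, 13]

r = pearson_r(myelin_index, words_learned)
print(f"r = {r:.2f}")  # positive r: better-connected subjects learned more words
```

A positive r across subjects is the pattern the study reports; the real analysis would of course also test the correlation for statistical significance.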
“Results provide a neural substrate of the influence that reward and motivation circuitries may have in learning words from context”, affirms Josep Marco Pallarès, UB-IDIBELL researcher. The activation of these circuitries during word learning suggests future research lines aimed at stimulating reward regions to improve language learning in patients with linguistic problems. 
The fact that non-linguistic subcortical mechanisms, which are much older from an evolutionary perspective, work together with cortical language regions —which appeared later— points toward new theories of language that explain how reward mechanisms have influenced and supported one of our primal urges: the desire to acquire language and to communicate.
Experiment with words and gambling
Researchers carried out an experiment with thirty-six adults who participated in two magnetic resonance sessions. In the first session, functional magnetic resonance imaging was used to measure participants’ brain activity while they performed two different tasks. This technique makes it possible to detect accurately which brain regions are active while a person performs a certain activity. In the first task, participants had to learn the meanings of new words from context across two different sentences. For instance, subjects saw on a screen the sentences “Every Sunday the grandmother went to the jedin” and “The man was buried in the jedin”. Considering both sentences, participants could learn that the word jedin means “graveyard”. Then, participants completed two runs of a standard event-related monetary gambling task.
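The learning-from-context task can be sketched as a simple trial structure: each novel word appears in two sentence frames, and only congruent frames allow a single consistent meaning to be inferred. A toy sketch, in which the second trial's word and sentences are invented for illustration and are not the study's materials:

```python
# Each trial pairs a novel word with two sentence frames. In the congruent
# condition both frames point to the same hidden meaning; in the incongruent
# (control) condition they conflict, so no meaning can be learned.
trials = [
    {"word": "jedin",
     "sentences": ["Every Sunday the grandmother went to the jedin",
                   "The man was buried in the jedin"],
     "congruent": True,   # both contexts imply "graveyard"
     "meaning": "graveyard"},
    {"word": "brolla",   # hypothetical control item, not from the study
     "sentences": ["She poured water into the brolla",
                   "He drove the brolla down the highway"],
     "congruent": False,  # contexts conflict: nothing to learn
     "meaning": None},
]

def run_trial(trial, response):
    """A word counts as learned only if the contexts were congruent
    and the participant's inferred meaning matches."""
    return trial["congruent"] and response == trial["meaning"]

learned = [t["word"] for t in trials if run_trial(t, "graveyard")]
print(learned)  # only the congruent 'jedin' trial can be learned
```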
The experiment revealed that when subjects inferred and memorized the meaning of a new word, brain activity in the ventral striatum increased. Indeed, the same ventral striatum activation was observed when subjects won money in the gambling task. Learning the meaning of a new word therefore activates the same reward and motivational circuitry as gambling. Moreover, word learning produced an increase in the synchronization of brain activity between the ventral striatum and cortical language regions.


Filed under language acquisition language striatum brain activity neuroscience science

190 notes

Neuroscientists identify key role of language gene
Neuroscientists have found that a gene mutation that arose more than half a million years ago may be key to humans’ unique ability to produce and understand speech.
Researchers from MIT and several European universities have shown that the human version of a gene called Foxp2 makes it easier to transform new experiences into routine procedures. When they engineered mice to express humanized Foxp2, the mice learned to run a maze much more quickly than normal mice.
The findings suggest that Foxp2 may help humans with a key component of learning language — transforming experiences, such as hearing the word “glass” when we are shown a glass of water, into a nearly automatic association of that word with objects that look and function like glasses, says Ann Graybiel, an MIT Institute Professor, member of MIT’s McGovern Institute for Brain Research, and a senior author of the study.
“This really is an important brick in the wall saying that the form of the gene that allowed us to speak may have something to do with a special kind of learning, which takes us from having to make conscious associations in order to act to a nearly automatic-pilot way of acting based on the cues around us,” Graybiel says.
Wolfgang Enard, a professor of anthropology and human genetics at Ludwig-Maximilians University in Germany, is also a senior author of the study, which appears in the Proceedings of the National Academy of Sciences this week. The paper’s lead authors are Christiane Schreiweis, a former visiting graduate student at MIT, and Ulrich Bornschein of the Max Planck Institute for Evolutionary Anthropology in Germany.
All animal species communicate with each other, but humans have a unique ability to generate and comprehend language. Foxp2 is one of several genes that scientists believe may have contributed to the development of these linguistic skills. The gene was first identified in a group of family members who had severe difficulties in speaking and understanding speech, and who were found to carry a mutated version of the Foxp2 gene.
In 2009, Svante Pääbo, director of the Max Planck Institute for Evolutionary Anthropology, and his team engineered mice to express the human form of the Foxp2 gene, which encodes a protein that differs from the mouse version by only two amino acids. His team found that these mice had longer dendrites — the slender extensions that neurons use to communicate with each other — in the striatum, a part of the brain implicated in habit formation. They were also better at forming new synapses, or connections between neurons.
Pääbo, who is also an author of the new PNAS paper, and Enard enlisted Graybiel, an expert in the striatum, to help study the behavioral effects of replacing Foxp2. They found that the mice with humanized Foxp2 were better at learning to run a T-shaped maze, in which the mice must decide whether to turn left or right at a T-shaped junction, based on the texture of the maze floor, to earn a food reward.
The first phase of this type of learning requires using declarative memory, or memory for events and places. Over time, these memory cues become embedded as habits and are encoded through procedural memory — the type of memory necessary for routine tasks, such as driving to work every day or hitting a tennis forehand after thousands of practice strokes.
Using another type of maze called a cross-maze, Schreiweis and her MIT colleagues were able to test the mice’s ability in each type of memory alone, as well as the interaction of the two types. They found that the mice with humanized Foxp2 performed the same as normal mice when just one type of memory was needed, but their performance was superior when the learning task required them to convert declarative memories into habitual routines. The key finding was therefore that the humanized Foxp2 gene makes it easier to turn mindful actions into behavioral routines.
The protein produced by Foxp2 is a transcription factor, meaning that it turns other genes on and off. In this study, the researchers found that Foxp2 appears to turn on genes involved in the regulation of synaptic connections between neurons. They also found enhanced dopamine activity in a part of the striatum that is involved in forming procedures. In addition, the neurons of some striatal regions could be turned off for longer periods in response to prolonged activation — a phenomenon known as long-term depression, which is necessary for learning new tasks and forming memories.
Together, these changes help to “tune” the brain differently to adapt it to speech and language acquisition, the researchers believe. They are now further investigating how Foxp2 may interact with other genes to produce its effects on learning and language.
This study “provides new ways to think about the evolution of Foxp2 function in the brain,” says Genevieve Konopka, an assistant professor of neuroscience at the University of Texas Southwestern Medical Center who was not involved in the research. “It suggests that human Foxp2 facilitates learning that has been conducive for the emergence of speech and language in humans. The observed differences in dopamine levels and long-term depression in a region-specific manner are also striking and begin to provide mechanistic details of how the molecular evolution of one gene might lead to alterations in behavior.”


Filed under Foxp2 gene mutation language language acquisition speech learning neuroscience science

156 notes

Hand gestures improve learning in both signers and speakers

Spontaneous gesture can help children learn, whether they use a spoken language or sign language, according to a new report.


Previous research by Susan Goldin-Meadow, the Beardsley Ruml Distinguished Service Professor in the Department of Psychology, has found that gesture helps children develop their language, learning and cognitive skills. As one of the nation’s leading authorities on language learning and gesture, she has also studied how using gesture helps older children improve their mathematical skills.

Goldin-Meadow’s new study examines how gesturing contributes to language learning in hearing and in deaf children. She concludes that gesture is a flexible way of communicating, one that can work with language to communicate or, if necessary, can itself become language. The article is published online by Philosophical Transactions of the Royal Society B and will appear in the Sept. 19 print issue of the journal, which is a theme issue on “Language as a Multimodal Phenomenon.”

“Children who can hear use gesture along with speech to communicate as they acquire spoken language,” Goldin-Meadow said. “Those gesture-plus-word combinations precede and predict the acquisition of word combinations that convey the same notions. The findings make it clear that children have an understanding of these notions before they are able to express them in speech.”

In addition to children who learned spoken languages, Goldin-Meadow studied children who learned sign language from their parents. She found that they too use gestures as they use American Sign Language. These gestures predict learning, just like the gestures that accompany speech.

Finally, Goldin-Meadow looked at deaf children whose hearing losses prevented them from learning spoken language, and whose hearing parents had not presented them with conventional sign language. These children use homemade gesture systems, called homesign, to communicate. Homesign shares properties with natural languages but is not a full-blown language, perhaps because the children lack “a community of communication partners,” Goldin-Meadow writes. Nevertheless, homesign can be the “first step toward an established sign language.” In Nicaragua, individual gesture systems blossomed into a more complex, shared system when homesigners were brought together for the first time.

These findings provide insight into gesture’s contribution to learning. Gesture plays a role in learning for signers even though it is in the same modality as sign. As a result, gesture cannot aid learners simply by providing a second modality. Rather, gesture adds imagery to the categorical distinctions that form the core of both spoken and sign languages.

Goldin-Meadow concludes that gesture can be the basis for a self-made language, assuming linguistic forms and functions when other vehicles are not available. But when a conventional spoken or sign language is present, gesture works along with language, helping to promote learning.

(Source: news.uchicago.edu)

Filed under gestures language acquisition learning communication homesign neuroscience science

291 notes

Brain imaging study examines second-language learning skills

With enough practice, some learners of a second language can process their new language as well as native speakers, research at the University of Kansas shows.


Using brain imaging, a trio of KU researchers was able to examine, to the millisecond, how the brain processes a second language. They then compared their findings with their previous results for native speakers and saw that both followed similar patterns.

The research by Robert Fiorentino and Alison Gabriele, both associate professors in the linguistics department, and José Alemán Bañón, a former KU graduate student who is now a postdoctoral researcher at the University of Reading in the United Kingdom, was published this month in the journal Second Language Research.

For years, linguists have debated whether second-language learners would ever resemble native speakers in their ability to process language properties that differ between the first and second language, such as gender agreement, which is a property of Spanish but not English. In Spanish, all nouns are categorized as masculine or feminine, and various elements in the sentence, such as adjectives, need to carry the gender feature of the noun as well.
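The agreement property tested here can be illustrated with a toy lexicon in which every noun and adjective form carries a gender tag, and a phrase is grammatical only when the tags match. The words below are textbook examples, not the study's stimuli:

```python
# Toy Spanish lexicon: forms tagged with grammatical gender.
NOUNS = {"casa": "f", "libro": "m"}     # house (fem.), book (masc.)
ADJS = {"blanca": "f", "blanco": "m"}   # white (fem. / masc. forms)

def agrees(noun, adj):
    """A noun-adjective pair is grammatical only if the genders match."""
    return NOUNS[noun] == ADJS[adj]

print(agrees("casa", "blanca"))  # True: 'casa blanca' is grammatical
print(agrees("casa", "blanco"))  # False: a gender-agreement violation
```

Sentences with mismatched tags of this kind are exactly the "grammatical errors in gender agreement" that participants read in the experiment.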

Some researchers argued that even those who spoke a second language with a high level of accuracy were using a qualitatively different mechanism than native speakers.

“We realized that these different theories, proposing that second-language learners use either the same mechanism or a different one, could actually be teased apart by using brain-imaging techniques,” Gabriele said.

The team studied 26 high-level Spanish speakers who hadn’t learned to speak Spanish until after age 11 and grew up with English as the majority language. The speakers used Spanish on a daily basis and had spent an average of a year and a half in a Spanish-speaking country.

They were compared with 24 native speakers, who were raised in Spanish-speaking countries and stayed in their home country until age 17.

To measure language processing as it happens, the team used a method known as electroencephalography (EEG), which uses an array of electrodes placed on the scalp to detect patterns of brain activity with high accuracy in timing.

Once hooked up to the EEG, the test subjects were asked to read sentences, some of which had grammatical errors in either number agreement or gender agreement.

The researchers then compared the results of the second-language learners to native speakers. They found that the highly proficient second-language speakers showed the same patterns of brain activity as native speakers when processing grammatical violations in sentences.
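The standard ERP logic behind such a comparison is to average many EEG epochs time-locked to the critical word within each condition, then inspect the violation-minus-grammatical difference wave. A minimal averaging sketch with simulated single-trial data (not the study's recordings; the injected late positivity loosely mimics a P600-like effect):

```python
import random

random.seed(0)
N_TRIALS, N_SAMPLES = 40, 100  # trials per condition, samples per epoch

def simulate_epoch(violation):
    """One noisy epoch; a violation adds a late positive deflection
    (a P600-like response) over samples 60-90."""
    epoch = [random.gauss(0.0, 1.0) for _ in range(N_SAMPLES)]
    if violation:
        for i in range(60, 90):
            epoch[i] += 2.0
    return epoch

def average_erp(epochs):
    """Pointwise average across trials: the event-related potential."""
    return [sum(col) / len(col) for col in zip(*epochs)]

grammatical = average_erp([simulate_epoch(False) for _ in range(N_TRIALS)])
violation = average_erp([simulate_epoch(True) for _ in range(N_TRIALS)])
difference = [v - g for v, g in zip(violation, grammatical)]

# Mean amplitude of the difference wave in the late window.
late_window = sum(difference[60:90]) / 30
print(f"late positivity: {late_window:.2f}")  # near the injected 2.0 effect
```

Finding the same difference-wave components, with similar timing, in learners and native speakers is what "qualitatively similar brain activity" means here.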

“We show that the learners’ brain activity looks qualitatively similar to that of native speakers, suggesting that they are using the same mechanisms,” Fiorentino said.

The study highlights the brain’s plasticity and its ability to acquire a new complex system even in adulthood.

“A lot of researchers have argued that there is some sort of language learning mechanism that might atrophy over the life span, particularly before puberty. And, we certainly have a lot of evidence that it is difficult to process your second language at nativelike levels and you have to go through quite a bit of effort to find people who can,” Gabriele said. “But I think what this paper shows is that it is possible.”

Gabriele and Fiorentino are working on a second phase of the research, studying how the brain processes a second language at the initial stages of exposure. Their preliminary results suggest that properties that are shared between the first and second language show patterns of brain activity that are very similar in learners and native speakers. This suggests that learners build on the representation for language that is already in place when learning a second language.

(Source: news.ku.edu)

Filed under language language acquisition brain imaging EEG brain activity neuroscience science

131 notes

Twin study suggests language delay due more to nature than nurture

A study of 473 sets of twins followed since birth found that 47 percent of 24-month-old identical twins had language delay, compared with 31 percent of nonidentical twins. Overall, twins had twice the rate of late language emergence of single-born children. None of the children had disabilities affecting language acquisition.


The results of the study were published in the June 2014 Journal of Speech, Language, and Hearing Research.

University of Kansas Distinguished Professor Mabel Rice, lead author, said that all of the language traits analyzed in the study—vocabulary, combining words and grammar—were significantly heritable with genes accounting for about 43 percent of the overall twins’ deficit.

The “twinning effect” — a lower level of language performance for twins than single-born children — was expected to be comparable for both kinds of twins, but was greater for identical twins, said Rice, strengthening the case for the heritability of language development.
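The reasoning here, that greater identical-twin similarity implies heritability, is often quantified with Falconer's formula, h² = 2(r_MZ − r_DZ). A back-of-the-envelope sketch; the twin correlations below are hypothetical values chosen for illustration (the study's ~43 percent figure comes from proper biometric modeling, not this formula):

```python
def falconer_h2(r_mz, r_dz):
    """Falconer's estimate of heritability from monozygotic (identical)
    and dizygotic (fraternal) within-pair twin correlations."""
    return 2 * (r_mz - r_dz)

# Hypothetical within-pair correlations for a language trait (not from
# the study): identical twins resemble each other more than fraternal twins.
r_mz, r_dz = 0.70, 0.48
h2 = falconer_h2(r_mz, r_dz)
print(f"h^2 = {h2:.2f}")  # 0.44: close to the ~43 percent reported above
```

The intuition: identical twins share all their genes and fraternal twins about half, so doubling the difference in their correlations isolates the genetic share of the variance.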

“This finding disputes hypotheses that attribute delays in early language acquisition of twins to mothers whose attention is reduced due to the demands of caring for two toddlers,” Rice said. “This should reassure busy parents who worry about giving sufficient individual attention to each child.”

However, said Rice, prematurity and birth complications, more common in identical twins, could also affect their higher rates of language delay. A study of pregnancy and birth risks for late talking in twins is currently under way by the study authors.

Further, the study will continue at least until 2017 to continue to follow the twins through preschool and school years up to adolescence to answer the question of whether late-talking twins do catch up to their peers.

“Twin studies provide unique opportunities to study inherited and environmental contributions to language acquisition,” Rice said. “The outcomes inform our understanding of how these influences contribute to language acquisition in single-born children as well.”

Late language emergence means that a child’s language is below age and gender expectations in the number of words spoken and in combining two or more words into sentences. In this study, 71 percent of 2-year-old twins were not combining words compared with 17 percent of single-born children.

While previous behavioral genetics studies of toddlers have largely focused on vocabulary, the researchers introduced an innovative measure of early grammatical ability based on the correct use of the past tense and the “to be” and “to do” verbs. The measure was inspired by the Rice/Wexler Test of Early Grammar Impairment, developed in 2001 by Rice and Kenneth Wexler, Massachusetts Institute of Technology professor. It was the first test to detect the subtle but common language disorder, specific language impairment.

Rice’s collaborators in the international longitudinal project that began in 2002 are Professors Cate Taylor and Stephen Zubrick from the Telethon Kids Institute in Perth, Western Australia, and Professor Shelley Smith at the University of Nebraska Medical Center.

The study population is located in the vicinity of Perth, Western Australia, because it is demographically similar to Kansas City and several other U.S. Midwestern areas. But in Australia, health records are available, and the Western Australia Twin Registry is a unique resource for researchers, since it records all multiple births, Rice said.

The research group has followed the development of 1,000 sets of Western Australian twins from their first words. In 2012, the group was granted $2.8 million by the National Institute on Deafness and Other Communication Disorders for a fourth five-year cycle that will enable researchers to continue to monitor the twins as they develop through adolescence. In addition to formal language tests, researchers have collected genetic and environmental data as well as assessments with the twins’ siblings.

(Source: news.ku.edu)

Filed under language language acquisition heritability twinning effect neuroscience science

328 notes

Recent study sheds new light on second language learning in adulthood

A recent study shows that assimilation of L2 vowels to L1 phonemes governs language learning in adulthood; researchers urge development of novel methods of second language teaching.

The behavioral and neural evidence was gathered by researchers at Aalto University in Finland and at the University of Salento in Italy. The study was the first to identify the neural mechanisms underlying the learning of second-language (L2) sounds in adulthood. Overall, this and earlier studies support the hypothesis that students in a foreign-language classroom should particularly benefit from learning environments where they receive a focused amount of high-quality input from native L2 teachers, use the L2 pervasively to achieve functional and communicative goals, and receive intensive training (including the use of multimedia systems) in the perception and production of L2 sounds, in order to reactivate the neuroplasticity of the auditory cortex.

Learning the sounds of a second language (L2) in adulthood means assimilating them to the phonemes of the native language (L1).
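One common way to picture this kind of assimilation (a simplified sketch, not the study's own model) is as nearest-category matching in acoustic space: an L2 vowel is heard as whichever L1 vowel category lies closest to it. The formant values below are rough textbook averages in Hz, chosen for illustration only.

```python
import math

# Approximate (F1, F2) formant centers for the five basic Italian vowel
# qualities, in Hz. These are illustrative textbook values, not study data.
L1_ITALIAN = {
    "i": (300, 2300), "e": (400, 2100), "a": (750, 1300),
    "o": (450, 900), "u": (320, 800),
}

def assimilate(f1, f2):
    """Return the L1 vowel whose formant center is closest to the input."""
    return min(L1_ITALIAN, key=lambda v: math.dist((f1, f2), L1_ITALIAN[v]))

# An English vowel like the one in "bit" (~400, 2000 Hz) has no Italian
# counterpart, so it is mapped onto an existing Italian category.
print(assimilate(400, 2000))  # → "e"
```

On this picture, no new perceptual category forms; the unfamiliar sound is simply folded into the nearest native one, which is what the EEG results below suggest happened even after years of classroom study.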

In the study, two samples of Italian students, attending first-year and fifth-year classes of an English Language curriculum, were invited to the behavioral and electroencephalography (EEG) lab. Dr. Brattico, senior author of the study from Aalto University, explains: “The discrimination skills were measured by crossing two methodologies: on one hand, perception tests in which the students listened to pairs of English sounds that I synthesized and had to judge how similar or different they were, and on the other hand, EEG recordings with a 64-electrode cap, while the students were presented with the same pairs of sounds and watched a silenced movie.”

The EEG recordings were used to extract the auditory event-related potential, namely the succession of neural events necessary for the processing and representation of sound, originating in the auditory cortex.
“When we hear linguistic sounds that are part of our native tongue, in a few milliseconds the brain is able to decipher the acoustic signal, extract the peculiar characteristics of each sound and produce a mental representation of it: thus we are able to discern one sound from another and assemble first the syllables, then the words and so on”, adds the first author, Professor Grimaldi, University of Salento.
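The core of ERP extraction is averaging: a small response time-locked to each sound is invisible in any single trial, but emerges when many trials are averaged because the zero-mean noise cancels. A minimal sketch with synthetic data (not the study's recordings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated EEG: 200 trials x 600 samples (600 ms at 1 kHz sampling).
# Each trial holds a small stereotyped response buried in much larger noise.
n_trials, n_samples = 200, 600
t = np.arange(n_samples) / 1000.0  # time in seconds

# A 2-microvolt Gaussian bump peaking ~150 ms after sound onset...
response = 2.0 * np.exp(-((t - 0.15) ** 2) / (2 * 0.02 ** 2))
# ...drowned in noise with a standard deviation of 5 microvolts per trial.
trials = response + rng.normal(0.0, 5.0, size=(n_trials, n_samples))

# The event-related potential is the average across trials: noise shrinks
# by 1/sqrt(n_trials) while the time-locked response survives intact.
erp = trials.mean(axis=0)

peak_ms = t[np.argmax(erp)] * 1000
print(round(peak_ms))  # peak latency recovered near 150 ms
```

With 200 trials the residual noise in the average is about 5/√200 ≈ 0.35 microvolts, small enough for the 2-microvolt response to stand out clearly.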

“We compared the neural responses of the auditory cortex of the two groups of university students with one another and with a control group with a low level of education (third year of junior secondary school)”, explains Grimaldi. “We started with this hypothesis: if during their academic studies the students had developed new perceptual abilities, we would have found different neural responses for the three groups”. The results did not confirm the hypothesis, but instead showed that, neurally, the L2 sounds were assimilated to L1 phonemes in all the groups.

Grimaldi adds: “Let us consider, for example, what happens when we watch a movie or listen to a song in a language that we do not know: we are able to perceive acoustic differences, but we cannot ‘extract’ the words from the acoustic stream and access their meaning. This is what happened for our groups of students”. Previous behavioral studies that observed L2 learners with different native languages in an educational context (German, Finnish, Japanese, Turkish and other English-learning students) never produced results favorable for the teachers. “This study specifies, confirms and extends such results, proving by means of neurophysiological data that the quantity and quality of the stimuli received by university students are not enough to form long-term traces of L2 sounds in the auditory cortex”, confirms Brattico.

The results were published online in Frontiers in Human Neuroscience.

(Source: web.aalto.fi)

Filed under auditory cortex language acquisition second language learning vowel perception neuroscience science

240 notes

Study provides new insight into how toddlers learn verbs
Parents can help toddlers’ language skills by showing them a variety of examples of different actions, according to new research from the University of Liverpool.
Previous research has shown that verbs pose particular difficulties to toddlers as they refer to actions rather than objects, and actions are often different each time a child sees them.
To find out more about this area of child language, University psychologists asked a group of toddlers to watch one of two short videos.
They then examined whether watching a cartoon star repeat the same action, compared to a character performing three different actions, affected the children’s understanding of verbs.
Developmental psychologist, Dr Katherine Twomey, said: “Knowledge of how children start to learn language is important to our understanding of how they progress throughout preschool and school years.
“This is the first study to indicate that showing toddlers similar but, importantly, not identical actions actually helped them understand what a verb refers to, instead of confusing them as you might expect.”
Dr Jessica Horst from the University of Sussex who collaborated on the research added: “It is a crucial first step in understanding how what children see affects how they learn verbs and action categories, and provides the groundwork for future studies to examine in more detail exactly what kinds of variability affect how children learn words.”

Filed under language language acquisition child development verb learning psychology neuroscience science

664 notes

Language Structure… You’re Born with It
Humans are unique in their ability to acquire language. But how? A new study published in the Proceedings of the National Academy of Sciences shows that we are in fact born with basic, fundamental knowledge of language, thus shedding light on the age-old linguistic “nature vs. nurture” debate.
THE STUDY
While languages differ from each other in many ways, certain aspects appear to be shared across languages. These aspects might stem from linguistic principles that are active in all human brains. A natural question then arises: are infants born with knowledge of how human words might sound? Are infants biased to consider certain sound sequences as more word-like than others? “The results of this new study suggest that the sound patterns of human languages are the product of an inborn biological instinct, very much like birdsong,” said Prof. Iris Berent of Northeastern University in Boston, who co-authored the study with a research team from the International School for Advanced Studies in Italy, headed by Dr. Jacques Mehler. The study’s first author is Dr. David Gómez.
BLA, ShBA, LBA
Consider, for instance, the sound-combinations that occur at the beginning of words. While many languages have words that begin with bl (e.g., blando in Italian, blink in English, and blusa in Spanish), few languages have words that begin with lb. Russian is such a language (e.g., lbu, a word related to lob, “forehead”), but even in Russian such words are extremely rare and outnumbered by words starting with bl. Linguists have suggested that such patterns occur because human brains are biased to favor syllables such as bla over lba. In line with this possibility, past experimental research from Dr. Berent’s lab has shown that adult speakers display such preferences, even if their native language has no words resembling either bla or lba. But where does this knowledge stem from? Is it due to some universal linguistic principle, or to adults’ lifelong experience with listening and producing their native language?
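The bias described above is usually attributed to sonority sequencing: consonant clusters at the start of a syllable prefer to rise in sonority toward the vowel (bl-), not fall (lb-). The following sketch uses a simplified textbook sonority scale and consonant classification, not anything taken from the study itself.

```python
# Simplified sonority scale: higher numbers are more vowel-like.
SONORITY = {"stop": 1, "fricative": 2, "nasal": 3, "liquid": 4, "glide": 5}

# Rough classification of a few consonants for illustration.
CLASS = {
    "b": "stop", "p": "stop", "d": "stop", "t": "stop",
    "f": "fricative", "s": "fricative",
    "m": "nasal", "n": "nasal",
    "l": "liquid", "r": "liquid",
    "w": "glide", "j": "glide",
}

def rises_in_sonority(onset: str) -> bool:
    """True if each consonant in the onset is more sonorous than the last."""
    levels = [SONORITY[CLASS[c]] for c in onset]
    return all(a < b for a, b in zip(levels, levels[1:]))

print(rises_in_sonority("bl"))  # True  -> well-formed, like "blif"
print(rises_in_sonority("lb"))  # False -> ill-formed, like "lbif"
```

On this account, bla-type syllables are preferred in language after language because their onsets climb smoothly toward the vowel, while lba-type onsets fall and then rise.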
THE EXPERIMENT
These questions motivated our team to look carefully at how young babies perceive different types of words. We used near-infrared spectroscopy, a silent and non-invasive technique that tells us how the oxygenation of the brain cortex (those very first centimeters of gray matter just below the scalp) changes in time, to look at the brain reactions of Italian newborn babies when listening to good and bad word candidates as described above (e.g., blif, lbif).
Working with Italian newborn infants and their families, we observed that newborns react differently to good and bad word candidates, similar to what adults do. Young infants have not learned any words yet; they do not even babble yet, and still they share with us a sense of how words should sound. This finding shows that we are born with basic, foundational knowledge about the sound patterns of human languages.
It is hard to imagine how differently languages would sound if humans did not share this type of knowledge. We are fortunate that we do, and so our babies can come into the world with the certainty that they will readily recognize the sound patterns of words, no matter the language they will grow up with.

Filed under language language acquisition speech perception phonology linguistics neuroscience science

160 notes

Some innate preferences shape the sound of words from birth
Languages are learned, it’s true, but are there also innate bases in the structure of language that precede experience? Linguists have noticed that, despite the huge variability of human languages, there are some preferences in the sound of words that can be found across languages. So they wonder whether this reflects the existence of a universal, innate biological basis of language. A SISSA study provides evidence to support this hypothesis, demonstrating that certain preferences in the sound of words are already active in newborn infants.
Take the sound “bl”: how many words starting with that sound can you think of? Blouse, blue, bland… Now try with “lb”: how many can you find? None in English and Italian, and even in other languages such words either don’t exist or are extremely rare. Human languages offer several examples of this kind, and this indicates that in forming words we tend to prefer certain sound combinations to others, irrespective of which language we speak. The fact that this occurs across languages has prompted linguists to hypothesize the existence of biological bases of language (inborn and universal) which precede language learning in humans. Finding evidence to support this hypothesis is, however, far from easy, and the debate between the proponents of this view and those who believe that language is merely the result of learning is still open. But proof supporting the “universalist” hypothesis has now been provided by a new study conducted by a research team of the International School for Advanced Studies (SISSA) in Trieste and just published in the journal PNAS.
David Gomez, a SISSA research scientist working under the supervision of Jacques Mehler and first author of the paper, and his coworkers decided to observe the brain activity of newborns. “In fact, if it is possible to demonstrate that these preferences are already present within days from birth, when the newborn baby is still unable to speak and presumably has very limited language knowledge, then we can infer that there is an inborn bias that prefers certain words to others”, comments Gomez.
“To monitor the newborns’ brain activity we used a non-invasive technique, i.e., functional near-infrared spectroscopy”, explains Marina Nespor, a SISSA neuroscientist who participated in the study. During the experiments the newborns would listen to words starting with normally “preferred” sounds (like “bl”) and others with uncommon sounds (“lb”). “What we found was that the newborns’ brains reacted in a significantly different manner to the two types of sound”, continues Nespor.
“The brain regions that are activated while the newborns are listening react differently in the two cases”, comments Gomez, “and reflect the preferences observed across languages, as well as the behavioural responses recorded in similar experiments carried out in adults”. “It’s difficult to imagine what languages would sound like if humans didn’t share a common knowledge base”, concludes Gomez. “We are lucky that this common base exists. This way, our children are born with an ability to distinguish words from “non-words” ever since birth, regardless of which language they will then go on to learn”.

Filed under language language acquisition speech perception brain activity psychology neuroscience science

147 notes

Brain research provides insight into language learning

Anyone who has tried to learn a second language knows how difficult it is to absorb new words and use them to accurately express ideas in a completely new cultural format. Now, research into some of the fundamental ways the brain accepts information and tags it could lead to new, more effective ways for people to learn a second language.

Tests have shown that the human brain uses the same neuron system to see an action and to understand an action described in language. Researchers at Arizona State University have been testing the boundaries of this hypothesis, which focuses on the operation of the mirror neuron system (MNS). The ASU group has found that the MNS can be modified by language use, and that the modification can slightly change visual perception.  

The work focuses on how the brain receives and classifies information that a person sees (an action, like one person giving another a pencil), and tests how the brain receives the information from a description of an action (simulation), like “Cameron gives Annagrace a pencil.”

“We tested the idea that the mirror neuron system, which is part of the motor system, is used in the simulation process,” said Arthur Glenberg, an ASU professor of psychology. “The MNS is active both when a person takes an action (e.g., giving a pencil), and when that action is observed (witnessing the pencil being given). Supposedly, the MNS allows us to infer the intentions of other people so that when Jane sees Cameron act, her MNS resonates, and then Jane understands why she would give Annagrace the pencil and infers that that is the reason why Cameron gives Annagrace the pencil.”

Glenberg, Noah Zarr, formerly an ASU psychology major and now a graduate student at Indiana University, and Ryan Ferguson, a graduate student in ASU’s Cognitive Science training area in the Department of Psychology, recently published their findings in the paper “Language comprehension warps the mirror neuron system,” in Frontiers in Human Neuroscience. This research began with Zarr’s honors thesis.

“The MNS has been associated with many social behaviors, such as action, understanding and empathy, as well as language understanding,” Glenberg explained. “Previous work has demonstrated that adapting the MNS can affect language comprehension. But no one had yet shown that the process of language comprehension can itself change the MNS.

“The question becomes, when Jane reads, ‘Cameron gives Annagrace the pencil,’ is she using her MNS just like when she sees Cameron give the pencil?” Glenberg asks. “To test this idea, we used the fact that the MNS is used in both action and perception of action, and the idea that repeated use of a neural system leads to adaptation of that system.   

“So, in the tests, participants read a bunch of transfer sentences,” Glenberg explained. “We then show them a bunch of videos of transfer. We have shown that after reading the sentences, people are impaired (a little bit) in perceiving the transfer in the videos, which means the reading modifies the same MNS used in action understanding.”

While the work explores the boundaries of a theory on comprehension, there are applications in which it could be employed, Glenberg said. 

“If language comprehension is a simulation process that uses neural systems of action, then perhaps we can better teach kids how to understand what they read by getting them to literally simulate the actions,” he explained.

Glenberg added that part of his ongoing research into the MNS, the system that allows us to decipher what we see and understand the intent of language, is to test the idea of simulation and how it can help Latino English language learners read better in English.

(Source: asunews.asu.edu)

Filed under mirror neuron system language acquisition language learning plasticity neuroscience science
