Neuroscience

Articles and news from the latest research reports.

Posts tagged language processing

Researchers map brain areas vital to understanding language
When reading text or listening to someone speak, we construct rich mental models that allow us to draw conclusions about other people, objects, actions, events, mental states and contexts. This ability to understand written or spoken language, called “discourse comprehension,” is a hallmark of the human mind and central to everyday social life. In a new study, researchers uncovered the brain mechanisms that underlie discourse comprehension.
The study appears in Brain: A Journal of Neurology.
With his team, study leader Aron Barbey, a professor of neuroscience, of psychology, and of speech and hearing science at the University of Illinois, previously had mapped general intelligence, emotional intelligence and a host of other high-level cognitive functions. Barbey is the director of the Decision Neuroscience Laboratory at the Beckman Institute for Advanced Science and Technology at Illinois.
To investigate the brain regions that underlie discourse comprehension, the researchers studied a group of 145 American male Vietnam War veterans who sustained penetrating head injuries during combat. Barbey said these shrapnel-induced injuries typically produced focal brain damage, unlike injuries caused by stroke or other neurological disorders that affect multiple regions. These focal injuries allowed the researchers to pinpoint the structures that are critically important to discourse comprehension.
“Neuropsychological patients with focal brain lesions provide a valuable opportunity to study how different brain structures contribute to discourse comprehension,” Barbey said.
A technique called voxel-based lesion-symptom mapping allowed the team to pool data from the veterans’ CT scans to create a collective, three-dimensional map of the cerebral cortex. They divided this composite brain into units called voxels (the three-dimensional counterparts of two-dimensional pixels). This allowed them to compare the discourse comprehension abilities of patients with damage to a particular voxel or cluster of voxels with those of patients without injuries to those brain regions.
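The voxel-wise comparison at the heart of this technique can be sketched in a few lines. The following is a toy illustration only, not the study's actual pipeline (which registers CT scans to a common atlas and applies statistical corrections for multiple comparisons); the function name and the six-patient data are invented for the example.

```python
from statistics import mean

def lesion_symptom_score(voxel_damage, scores):
    """For one voxel, compare comprehension scores of patients
    with damage there against patients spared there.

    voxel_damage: list of bools (True = lesion includes this voxel)
    scores: parallel list of discourse-comprehension scores
    """
    damaged = [s for d, s in zip(voxel_damage, scores) if d]
    spared = [s for d, s in zip(voxel_damage, scores) if not d]
    if not damaged or not spared:
        return None  # voxel is uninformative in this sample
    # A large spared-minus-damaged gap flags the voxel as
    # potentially critical for the behavior being tested.
    return mean(spared) - mean(damaged)

# Hypothetical data: six patients, one voxel
damage = [True, True, False, False, False, True]
scores = [55, 60, 85, 90, 80, 50]
print(lesion_symptom_score(damage, scores))  # 30.0
```

Running this comparison independently for every voxel, then mapping where the gap is statistically reliable, yields the kind of lesion-symptom map the study describes.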
The researchers identified a network of brain areas in the frontal and parietal cortex that are essential to discourse comprehension.
“Rather than engaging brain regions that are classically involved in language processing, our results indicate that discourse comprehension depends on an executive control network that helps integrate incoming language with prior knowledge and experience,” Barbey said. Executive control, also known as executive function, refers to the ability to plan, organize and regulate one’s behavior.
“The findings help us understand the neural foundations of discourse comprehension, and suggest that core elements of discourse processing emerge from a network of brain regions that support language processing and executive functions. The findings offer new insights into basic questions about the nature of discourse comprehension,” Barbey said, “and could offer new targets for clinical interventions to help patients with cognitive-communication disorders.”
“Discourse comprehension is a hallmark of human social behavior,” Barbey said. “By studying the mechanisms that underlie these abilities, we’re able to advance our understanding of the remarkable cognitive and neural architecture from which language comprehension emerges.”

Filed under discourse comprehension cerebral cortex language language processing neuroimaging neuroscience science

Size matters: brain processes ‘big’ words faster than ‘small’ words
Bigger may not always be better, but when it comes to brain processing speed, it appears that size does matter.
A new study has revealed that words which refer to big things are processed more quickly by the brain than words for small things.
Researchers at the University of Glasgow had previously found that big concrete words – ‘ocean’, ‘dinosaur’, ‘cathedral’ – were read more quickly than small ones such as ‘apple’, ‘parasite’ and ‘cigarette’.
Now they have discovered that abstract words which are thought of as big – ‘greed’, ‘genius’, ‘paradise’ – are also processed faster than concepts considered to be small such as ‘haste’, ‘polite’ and ‘intimate’.
Dr Sara Sereno, a Reader in the Institute of Neuroscience and Psychology, who led the study, said: “It seems that size matters, even when it’s abstract and you can’t see it.”
The study, published in the online journal PLoS ONE, also involved researchers from Kent, Manchester and Oregon. Participants were presented with a series of real words referring to objects and concepts both big and small, as well as nonsense, made-up words, totalling nearly 500 items. The different word types were matched for length and frequency of use.
The 60 participants were asked to press one of two buttons to indicate whether each item was a real word or not. This decision took just over 500 milliseconds, or around half a second, per item. Results showed that words referring to larger objects or concepts were processed around 20 milliseconds faster than words referring to smaller objects or concepts.
“This might seem like a very short period of time,” said Dr Sereno, “but it’s significant and the effect size is typical for this task.”
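The reported 20-millisecond advantage is simply a difference of condition means in this lexical decision task. A minimal sketch with hypothetical reaction times (the real study matched items for length and frequency and analyzed many more trials per condition):

```python
from statistics import mean

def rt_effect_ms(rts_big, rts_small):
    """Mean lexical-decision latency advantage (ms) for 'big' words."""
    return mean(rts_small) - mean(rts_big)

# Hypothetical reaction times (ms) for correctly identified words
rts_big = [505, 510, 495, 500]    # e.g. 'ocean', 'genius'
rts_small = [525, 530, 515, 520]  # e.g. 'apple', 'haste'
print(rt_effect_ms(rts_big, rts_small))  # 20.0
```

Whether a gap this small is meaningful is a statistical question, which is why the researchers emphasize that the effect size is typical for this kind of task.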
Lead author Dr Bo Yao said: “It turned out that our big concrete and abstract words, like ‘shark’ and ‘panic’, tended to be more emotionally arousing than our small concrete and abstract words, like ‘acorn’ and ‘tight’. Our analysis showed that these emotional links played a greater role in the identification of abstract compared to concrete words.”
“Even though abstract words don’t refer to physical objects in the real world, we found that it’s actually quite easy to think of certain concepts in terms of their size,” said co-author Prof Paddy O’Donnell. “Everyone thinks that ‘devotion’ is something big and that ‘mischief’ is something small.”
Bigger things, it seems, whether real or imagined, grab our attention more easily and our brains process them faster – even when they are represented by written words.

Filed under language learning language processing neuroscience science

Ability To Move To A Beat Linked To Brain’s Response To Speech

Study suggests musical training could possibly sharpen language processing

People who are better able to move to a beat show more consistent brain responses to speech than those with less rhythm, according to a study published in the September 18 issue of The Journal of Neuroscience. The findings suggest that musical training could possibly sharpen the brain’s response to language. 

Scientists have long known that moving to a steady beat requires synchronization between the parts of the brain responsible for hearing and movement. In the current study, Professor Nina Kraus, PhD, and colleagues at Northwestern University examined the relationship between the ability to keep a beat and the brain’s response to sound.

More than 100 teenagers from the Chicago area participated in the Kraus Lab study, where they were instructed to listen and tap their finger along to a metronome. The teens’ tapping accuracy was computed based on how closely their taps aligned in time with the “tic-toc” of the metronome. In a second test, the researchers used a technique called electroencephalography (EEG) to record brainwaves from a major brain hub for sound processing as the teens listened to the synthesized speech sound “da” repeated periodically over a 30-minute period. The researchers then calculated how similarly the nerve cells in this region responded each time the “da” sound was repeated.
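The two measures described here reduce to simple computations: tapping accuracy is the variability of the tap-to-beat asynchronies, and neural consistency reflects how little each EEG trial deviates from the average response. A rough sketch under those assumptions (the lab's actual metrics may differ; the data below are made up):

```python
from statistics import mean, stdev

def tap_accuracy(tap_times, beat_times):
    """Variability (s) of tap-minus-beat asynchronies.
    Lower values mean more accurate beat-keeping."""
    return stdev(t - b for t, b in zip(tap_times, beat_times))

def response_consistency(trials):
    """Mean absolute deviation of each EEG trial from the
    average waveform; lower = more consistent response
    to the repeated 'da' sound."""
    avg = [mean(col) for col in zip(*trials)]
    return mean(
        mean(abs(v - a) for v, a in zip(trial, avg)) for trial in trials
    )

taps = [0.52, 1.01, 1.53, 2.02]   # hypothetical tap times (s)
beats = [0.50, 1.00, 1.50, 2.00]  # metronome 'tic-toc' times (s)
print(tap_accuracy(taps, beats))

trials = [[1.0, 2.0, 1.0], [1.0, 2.0, 1.0]]  # identical trials
print(response_consistency(trials))  # 0.0
```

The study's finding is that these two numbers covary across teenagers: less variable tappers showed more consistent neural responses.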

“Across this population of adolescents, the more accurate they were at tapping along to the beat, the more consistent their brains’ response to the ‘da’ syllable was,” Kraus said. Because previous studies show a link between reading ability and beat-keeping ability as well as reading ability and the consistency of the brain’s response to sound, Kraus explained that these new findings show that hearing is a common basis for these associations. 

“Rhythm is inherently a part of music and language,” Kraus said. “It may be that musical training, with an emphasis on rhythmic skills, exercises the auditory system, leading to strong sound-to-meaning associations that are so essential in learning to read.”

John Iversen, PhD, who studies how the brain processes music at the University of California, San Diego, and was not involved with this study, noted that the findings raise the possibility that musical training may have important impacts on the brain. “This study adds another piece to the puzzle in the emerging story suggesting that musical rhythmic abilities are correlated with improved performance in non-music areas, particularly language,” he said.

Kraus’ group is now working on a multi-year study to evaluate the effects of musical training on beat synchronization, response consistency, and reading skills in a group of children engaging in musical training.

(Source: alphagalileo.org)

Filed under language processing musical training auditory system neuroscience psychology science

Look at What I’m Saying
University of Utah Engineers Show Brain Depends on Vision to Hear 
University of Utah bioengineers discovered our understanding of language may depend more heavily on vision than previously thought: under the right conditions, what you see can override what you hear. These findings suggest artificial hearing devices and speech-recognition software could benefit from a camera, not just a microphone.
“For the first time, we were able to link the auditory signal in the brain to what a person said they heard when what they actually heard was something different. We found vision is influencing the hearing part of the brain to change your perception of reality – and you can’t turn off the illusion,” says the new study’s first author, Elliot Smith, a bioengineering and neuroscience graduate student at the University of Utah. “People think there is this tight coupling between physical phenomena in the world around us and what we experience subjectively, and that is not the case.”
The brain considers both sight and sound when processing speech. However, if the two are slightly different, visual cues dominate sound. This phenomenon is named the McGurk effect for Scottish cognitive psychologist Harry McGurk, who pioneered studies on the link between hearing and vision in speech perception in the 1970s. The McGurk effect has been observed for decades. However, its origin has been elusive.
In the new study, which appears today in the journal PLOS ONE, the University of Utah team pinpointed the source of the McGurk effect by recording and analyzing brain signals in the temporal cortex, the region of the brain that typically processes sound.
Working with University of Utah bioengineer Bradley Greger and neurosurgeon Paul House, Smith recorded electrical signals from the brain surfaces of four adults (two male, two female) from Utah and Idaho with severe epilepsy, who were already undergoing surgery to treat the condition. House placed three button-sized electrodes on the left, right or both brain hemispheres of each test subject, depending on where each patient’s seizures were thought to originate.
These four test subjects were then asked to watch and listen to videos focused on a person’s mouth as they said the syllables “ba,” “va,” “ga” and “tha.” Depending on which of three different videos was being watched, the patients had one of three possible experiences as they watched the syllables being mouthed:
— The motion of the mouth matched the sound. For example, the video showed “ba” and the audio sound also was “ba,” so the patients saw and heard “ba.”
— The motion of the mouth obviously did not match the corresponding sound, like a badly dubbed movie. For example, the video showed “ga” but the audio was “tha,” so the patients perceived this disconnect and correctly heard “tha.”
— The motion of the mouth only was mismatched slightly with the corresponding sound. For example, the video showed “ba” but the audio was “va,” and patients heard “ba” even though the sound really was “va.” This demonstrates the McGurk effect – vision overriding hearing.
By measuring the electrical signals in the brain while each video was being watched, Smith and Greger could pinpoint whether auditory or visual brain signals were being used to identify the syllable in each video. When the syllable being mouthed matched the sound or didn’t match at all, brain activity increased in correlation with the sound being played. However, when the McGurk effect video was viewed, the activity pattern changed to resemble what the person saw, not what they heard. Statistical analyses confirmed the effect in all test subjects.
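One way to picture an activity pattern "resembling what the person saw" is template matching: compare each trial's neural response against a reference response for each syllable and pick the best match. This is an illustrative sketch, not the paper's statistical method; the templates and the trial vector below are made up.

```python
def closest_syllable(response, templates):
    """Return the syllable whose reference neural response this
    trial's response most resembles (centered dot-product similarity).

    templates: dict mapping syllable -> reference response vector
    """
    def similarity(x, y):
        mx, my = sum(x) / len(x), sum(y) / len(y)
        return sum((a - mx) * (b - my) for a, b in zip(x, y))
    return max(templates, key=lambda s: similarity(response, templates[s]))

# Made-up reference responses for two syllables, and one trial
templates = {"ba": [1.0, 0.0, 0.0], "va": [0.0, 1.0, 0.0]}
trial = [0.9, 0.1, 0.0]  # this trial resembles the "ba" template
print(closest_syllable(trial, templates))  # ba
```

In McGurk trials (audio “va,” video “ba”), the study's key observation is that the temporal-cortex response looked more like the “ba” template, tracking the visual input rather than the acoustic one.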
“We’ve shown neural signals in the brain that should be driven by sound are being overridden by visual cues that say, ‘Hear this!’” says Greger. “Your brain is essentially ignoring the physics of sound in the ear and following what’s happening through your vision.”
Greger was senior author of the study as an assistant professor of bioengineering at the University of Utah. He recently took a faculty position at Arizona State University.
The new findings could help researchers understand what drives language processing in humans, especially in a developing infant brain trying to connect sounds and lip movement to learn language. These findings also may help researchers sort out how language processing goes wrong when visual and auditory inputs are not integrated correctly, such as in dyslexia, Greger says.

Filed under McGurk effect auditory cortex language language processing neuroscience science

Brain scans may help diagnose dyslexia
Differences in a key language structure can be seen even before children start learning to read.
About 10 percent of the U.S. population suffers from dyslexia, a condition that makes learning to read difficult. Dyslexia is usually diagnosed around second grade, but the results of a new study from MIT could help identify those children before they even begin reading, so they can be given extra help earlier.
The study, done with researchers at Boston Children’s Hospital, found a correlation between poor pre-reading skills in kindergartners and the size of a brain structure that connects two language-processing areas.
Previous studies have shown that in adults with poor reading skills, this structure, known as the arcuate fasciculus, is smaller and less organized than in adults who read normally. However, it was unknown if these differences cause reading difficulties or result from lack of reading experience.
“We were very interested in looking at children prior to reading instruction and whether you would see these kinds of differences,” says John Gabrieli, the Grover M. Hermann Professor of Health Sciences and Technology, professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.
Gabrieli and Nadine Gaab, an assistant professor of pediatrics at Boston Children’s Hospital, are the senior authors of a paper describing the results in the Aug. 14 issue of the Journal of Neuroscience. Lead authors of the paper are MIT postdocs Zeynep Saygin and Elizabeth Norton.
The path to reading
The new study is part of a larger effort involving approximately 1,000 children at schools throughout Massachusetts and Rhode Island. At the beginning of kindergarten, children whose parents give permission to participate are assessed for pre-reading skills, such as being able to put words together from sounds.
“From that, we’re able to provide — at the beginning of kindergarten — a snapshot of how that child’s pre-reading abilities look relative to others in their classroom or other peers, which is a real benefit to the child’s parents and teachers,” Norton says.
The researchers then invite a subset of the children to come to MIT for brain imaging. The Journal of Neuroscience study included 40 children who had their brains scanned using a technique known as diffusion-weighted imaging, which is based on magnetic resonance imaging (MRI).
This type of imaging reveals the size and organization of the brain’s white matter — bundles of nerves that carry information between brain regions. The researchers focused on three white-matter tracts associated with reading skill, all located on the left side of the brain: the arcuate fasciculus, the inferior longitudinal fasciculus (ILF) and the superior longitudinal fasciculus (SLF).
When comparing the brain scans and the results of several different types of pre-reading tests, the researchers found a correlation between the size and organization of the arcuate fasciculus and performance on tests of phonological awareness — the ability to identify and manipulate the sounds of language.
Phonological awareness can be measured by testing how well children can segment sounds, identify them in isolation, and rearrange them to make new words. Strong phonological skills have previously been linked with ease of learning to read. “The first step in reading is to match the printed letters with the sounds of letters that you know exist in the world,” Norton says.
The researchers also tested the children on two other skills that have been shown to predict reading ability — rapid naming, which is the ability to name a series of familiar objects as quickly as you can, and the ability to name letters. They did not find any correlation between these skills and the size or organization of the white-matter structures scanned in this study.
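The brain-behavior relationship reported here is, at its core, a correlation between a per-child tract measure and a test score. A minimal Pearson correlation sketch with hypothetical numbers (the actual study derived tract measures such as volume and organization from diffusion imaging and used appropriate statistical controls):

```python
def pearson_r(xs, ys):
    """Pearson correlation between a white-matter measure
    (e.g. arcuate fasciculus organization) and a behavioral
    score (e.g. phonological awareness)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical: tract organization vs. phonological-awareness score
arcuate = [0.30, 0.35, 0.40, 0.45, 0.50]
phono = [82, 88, 90, 95, 99]
print(round(pearson_r(arcuate, phono), 2))  # 0.99
```

The null results for rapid naming and letter knowledge correspond to correlations near zero for those score lists against the same tract measures.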
Brian Wandell, director of Stanford University’s Center for Cognitive and Neurobiological Imaging, says the study is a valuable contribution to efforts to find biological markers that a child is likely to need extra help to learn to read.
“The work identifies a clear marker that predicts reading, and the marker is present at a very young age. Their results raise questions about the biological basis of the marker and provide scientists with excellent new targets for study,” says Wandell, who was not part of the research team.
Early intervention
The left arcuate fasciculus connects Broca’s area, which is involved in speech production, and Wernicke’s area, which is involved in understanding written and spoken language. A larger and more organized arcuate fasciculus could aid in communication between those two regions, the researchers say.
Gabrieli points out that the structural differences found in the study don’t necessarily reflect genetic differences; environmental influences could also be involved. “At the moment when the children arrive at kindergarten, which is approximately when we scan them, we don’t know what factors lead to these brain differences,” he says.
The researchers plan to follow three waves of children as they progress to second grade and evaluate whether the brain measures they have identified predict poor reading skills.
“We don’t know yet how it plays out over time, and that’s the big question: Can we, through a combination of behavioral and brain measures, get a lot more accurate at seeing who will become a dyslexic child, with the hope that that would motivate aggressive interventions that would help these children right from the start, instead of waiting for them to fail?” Gabrieli says.
For at least some dyslexic children, offering extra training in phonological skills can help them improve their reading skills later on, studies have shown.

Brain scans may help diagnose dyslexia

Differences in a key language structure can be seen even before children start learning to read.

About 10 percent of the U.S. population suffers from dyslexia, a condition that makes learning to read difficult. Dyslexia is usually diagnosed around second grade, but the results of a new study from MIT could help identify those children before they even begin reading, so they can be given extra help earlier.

The study, done with researchers at Boston Children’s Hospital, found a correlation between poor pre-reading skills in kindergartners and the size of a brain structure that connects two language-processing areas.

Previous studies have shown that in adults with poor reading skills, this structure, known as the arcuate fasciculus, is smaller and less organized than in adults who read normally. However, it was unknown if these differences cause reading difficulties or result from lack of reading experience.

“We were very interested in looking at children prior to reading instruction and whether you would see these kinds of differences,” says John Gabrieli, the Grover M. Hermann Professor of Health Sciences and Technology, professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.

Gabrieli and Nadine Gaab, an assistant professor of pediatrics at Boston Children’s Hospital, are the senior authors of a paper describing the results in the Aug. 14 issue of the Journal of Neuroscience. Lead authors of the paper are MIT postdocs Zeynep Saygin and Elizabeth Norton.

The path to reading

The new study is part of a larger effort involving approximately 1,000 children at schools throughout Massachusetts and Rhode Island. At the beginning of kindergarten, children whose parents give permission to participate are assessed for pre-reading skills, such as being able to put words together from sounds.

“From that, we’re able to provide — at the beginning of kindergarten — a snapshot of how that child’s pre-reading abilities look relative to others in their classroom or other peers, which is a real benefit to the child’s parents and teachers,” Norton says.

The researchers then invite a subset of the children to come to MIT for brain imaging. The Journal of Neuroscience study included 40 children who had their brains scanned using a technique known as diffusion-weighted imaging, which is based on magnetic resonance imaging (MRI).

This type of imaging reveals the size and organization of the brain’s white matter — bundles of nerves that carry information between brain regions. The researchers focused on three white-matter tracts associated with reading skill, all located on the left side of the brain: the arcuate fasciculus, the inferior longitudinal fasciculus (ILF) and the superior longitudinal fasciculus (SLF).

When comparing the brain scans and the results of several different types of pre-reading tests, the researchers found a correlation between the size and organization of the arcuate fasciculus and performance on tests of phonological awareness — the ability to identify and manipulate the sounds of language.
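The core analysis described here is a brain-behavior correlation. As a rough sketch with invented numbers (the study's actual data and statistics are more involved), correlating a white-matter measure such as fractional anisotropy with phonological-awareness scores might look like:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed directly from its definition."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented numbers standing in for a white-matter measure of the arcuate
# fasciculus (e.g., fractional anisotropy) and phonological-awareness scores.
fa_arcuate = [0.39, 0.40, 0.42, 0.44, 0.45, 0.48, 0.51, 0.53]
phono_scores = [70, 74, 78, 80, 85, 88, 95, 97]

r = pearson_r(fa_arcuate, phono_scores)
print(f"r = {r:.2f}")
```

A strong positive r in a sketch like this is what "a correlation between tract measures and test performance" amounts to; the published analysis additionally controls for factors such as age and overall brain volume.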

Phonological awareness can be measured by testing how well children can segment sounds, identify them in isolation, and rearrange them to make new words. Strong phonological skills have previously been linked with ease of learning to read. “The first step in reading is to match the printed letters with the sounds of letters that you know exist in the world,” Norton says.

The researchers also tested the children on two other skills that have been shown to predict reading ability — rapid naming, the ability to name a series of familiar objects as quickly as possible, and the ability to name letters. They did not find any correlation between these skills and the size or organization of the white-matter structures scanned in this study.

Brian Wandell, director of Stanford University’s Center for Cognitive and Neurobiological Imaging, says the study is a valuable contribution to the effort to find biological markers indicating that a child is likely to need extra help to learn to read.

“The work identifies a clear marker that predicts reading, and the marker is present at a very young age. Their results raise questions about the biological basis of the marker and provide scientists with excellent new targets for study,” says Wandell, who was not part of the research team.

Early intervention

The left arcuate fasciculus connects Broca’s area, which is involved in speech production, and Wernicke’s area, which is involved in understanding written and spoken language. A larger and more organized arcuate fasciculus could aid in communication between those two regions, the researchers say.

Gabrieli points out that the structural differences found in the study don’t necessarily reflect genetic differences; environmental influences could also be involved. “At the moment when the children arrive at kindergarten, which is approximately when we scan them, we don’t know what factors lead to these brain differences,” he says.

The researchers plan to follow three waves of children as they progress to second grade and evaluate whether the brain measures they have identified predict poor reading skills.

“We don’t know yet how it plays out over time, and that’s the big question: Can we, through a combination of behavioral and brain measures, get a lot more accurate at seeing who will become a dyslexic child, with the hope that that would motivate aggressive interventions that would help these children right from the start, instead of waiting for them to fail?” Gabrieli says.

For at least some dyslexic children, offering extra training in phonological skills can help them improve their reading skills later on, studies have shown.

Filed under dyslexia language processing arcuate fasciculus neuroimaging neuroscience science

790 notes

Trying to Learn a Foreign Language? Avoid Reminders of Home
Something odd happened when Shu Zhang was giving a presentation to her classmates at the Columbia Business School in New York City. Zhang, a Chinese native, spoke fluent English, yet in the middle of her talk, she glanced over at her Chinese professor and suddenly blurted out a word in Mandarin. “I meant to say a transition word like ‘however,’ but used the Chinese version instead,” she says. “It really shocked me.”
Shortly afterward, Zhang teamed up with Columbia social psychologist Michael Morris and colleagues to figure out what had happened. In a new study, they show that reminders of one’s homeland can hinder the ability to speak a new language. The findings could help explain why cultural immersion is the most effective way to learn a foreign tongue and why immigrants who settle within an ethnic enclave acculturate more slowly than those who surround themselves with friends from their new country.
Previous studies have shown that cultural icons such as landmarks and celebrities act like “magnets of meaning,” instantly activating a web of cultural associations in the mind and influencing our judgments and behavior, Morris says. In an earlier study, for example, he asked Chinese Americans to explain what was happening in a photograph of several fish, in which one fish swam slightly ahead of the others. Subjects first shown Chinese symbols, such as the Great Wall or a dragon, interpreted the fish as being chased. But individuals primed with American images of Marilyn Monroe or Superman, in contrast, tended to interpret the outlying fish as leading the others. This internally driven motivation is more typical of individualistic American values, some social psychologists say, whereas the more externally driven explanation of being pursued is more typical of Chinese culture.
To determine whether these cultural icons can also interfere with speaking a second language, Zhang, Morris, and their colleagues recruited male and female Chinese students who had lived in the United States for less than a year and had them sit opposite a computer monitor that displayed the face of either a Chinese or Caucasian male called “Michael Lee.” As microphones recorded their speech, the volunteers conversed with Lee, who spoke to them in English with an American accent about campus life.
Next, the team compared the fluency of the volunteers’ speech when they were talking to a Chinese versus a Caucasian face. Although participants reported a more positive experience chatting with the Chinese version of “Michael Lee,” they were significantly less fluent, producing 11% fewer words per minute on average, the authors report online today in the Proceedings of the National Academy of Sciences. “It’s ironic” that the more comfortable volunteers were with their conversational partner, the less fluent they became, Zhang says. “That’s something we did not expect.”
To rule out the possibility that the volunteers were speaking more fluently to the Caucasian face on purpose, thus explaining the performance gap, Zhang and colleagues asked the participants to invent a story, such as a boy swimming in the ocean, while simultaneously being exposed to Chinese and American icons rather than faces. Seeing Chinese icons such as the Great Wall also interfered with the volunteers’ English fluency, causing a 16% drop in words produced per minute. The icons also made the volunteers 85% more likely to use a literal translation of the Chinese word for an object rather than the English term, Zhang says. Rather than saying “pistachio,” for example, volunteers used the Chinese version, “happy nuts.”
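The fluency measure behind the 11% and 16% figures is a simple speech rate. A minimal sketch with invented utterances and timings (not the study's data):

```python
def words_per_minute(transcript: str, seconds: float) -> float:
    """Speech rate: number of words divided by elapsed minutes."""
    return len(transcript.split()) / (seconds / 60.0)

# Invented one-minute samples: 90 words vs. 80 words spoken in 60 seconds.
baseline = words_per_minute("word " * 90, 60.0)  # 90.0 wpm
primed = words_per_minute("word " * 80, 60.0)    # 80.0 wpm

drop = (baseline - primed) / baseline
print(f"{drop:.0%} fewer words per minute")  # → 11% fewer words per minute
```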
Understanding how these subtle cultural cues affect language fluency could help employers design better job interviews, Morris says. For example, taking a Japanese job candidate out for sushi, although a well-meaning gesture, might not be the best way to help them shine.
"It’s quite striking that these effects were so robust," says Mary Helen Immordino-Yang, a developmental psychologist at the University of Southern California in Los Angeles. They show that "we’re exquisitely attuned to cultural context," she says, and that "even subtle cues like the ethnicity of the person we’re talking to" can affect language processing. The take-home message? "If one wants to acculturate rapidly, don’t move to an ethnic enclave neighborhood where you’ll be surrounded by people like yourself," Morris says. Sometimes, a familiar face is the last thing you need to see.

Filed under cross-language interference language processing cultural cues psychology neuroscience science

165 notes

Decoding ‘noisy’ language in daily life
Suppose you hear someone say, “The man gave the ice cream the child.” Does that sentence seem plausible? Or do you assume it is missing a word? Such as: “The man gave the ice cream to the child.”
A new study by MIT researchers indicates that when we process language, we often make these kinds of mental edits. Moreover, it suggests that we seem to use specific strategies for making sense of confusing information — the “noise” interfering with the signal conveyed in language, as researchers think of it.
“Even at the sentence level of language, there is a potential loss of information over a noisy channel,” says Edward Gibson, a professor in MIT’s Department of Brain and Cognitive Sciences (BCS) and Department of Linguistics and Philosophy.
Gibson and two co-authors detail the strategies at work in a new paper, “Rational integration of noisy evidence and prior semantic expectations in sentence interpretation,” published today in the Proceedings of the National Academy of Sciences.
“As people are perceiving language in everyday life, they’re proofreading, or proof-hearing, what they’re getting,” says Leon Bergen, a PhD student in BCS and a co-author of the study. “What we’re getting is quantitative evidence about how exactly people are doing this proofreading. It’s a well-calibrated process.”
Asymmetrical strategies
The paper is based on a series of experiments the researchers conducted, using the Amazon Mechanical Turk survey system, in which subjects were presented with a series of sentences — some evidently sensible, and others less so — and asked to judge what those sentences meant.
A key finding is that people are more likely to think something is amiss in a sentence that needs only one edit than in one where two edits would be required. In the latter case, people tend to assume the sentence is not flawed at all, and take its literal meaning as the one intended.
“The more deletions and the more insertions you make, the less likely it will be you infer that they meant something else,” Gibson says. When readers have to make one such change to a sentence, as in the ice cream example above, they think the original version was correct about 50 percent of the time. But when people have to make two changes, they think the sentence is correct even more often, about 97 percent of the time.
Thus the sentence, “Onto the cat jumped a table,” which might seem to make no sense, can be made plausible with two changes — one deletion and one insertion — so that it reads, “The cat jumped onto a table.” And yet, almost all the time, people will not infer that those changes are needed, and assume the literal, surreal meaning is the one intended.
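The pattern in these two examples fits a noisy-channel account. As a toy sketch (illustrative numbers of my own, not the paper's actual model): weigh a reading's prior plausibility against a small per-edit noise probability, and the literal reading wins once two edits are required.

```python
# Toy noisy-channel comparison (illustrative only, not the paper's model).
# P(intended | heard) is proportional to P(intended) * P(heard | intended),
# where each hypothesized insertion or deletion costs a factor of NOISE.

NOISE = 0.05  # assumed per-edit probability (invented)

def posterior_weight(prior, num_edits, noise=NOISE):
    return prior * noise ** num_edits

# "The man gave the ice cream the child": the literal reading is implausible,
# and one insertion ("to") yields a plausible one, so the edited reading wins.
literal = posterior_weight(prior=0.01, num_edits=0)
one_edit = posterior_weight(prior=0.9, num_edits=1)

# "Onto the cat jumped a table": repairing it needs two edits, so even a
# plausible repaired reading is outweighed by the literal interpretation.
surreal = posterior_weight(prior=0.01, num_edits=0)
two_edits = posterior_weight(prior=0.9, num_edits=2)

print(one_edit > literal)   # True: infer the missing "to"
print(two_edits > surreal)  # False: keep the surreal reading
```

With these assumed numbers, one hypothesized edit (0.9 × 0.05 = 0.045) beats the implausible literal reading (0.01), but two edits (0.9 × 0.0025 ≈ 0.002) do not, mirroring the roughly 50 percent versus 97 percent judgments reported above.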
This finding interacts with another one from the study, that there is a systematic asymmetry between insertions and deletions on the part of listeners.
“People are much more likely to infer an alternative meaning based on a possible deletion than on a possible insertion,” Gibson says.
Suppose you hear or read a sentence that says, “The businessman benefitted the tax law.” Most people, it seems, will assume that sentence has a word missing from it — “from,” in this case — and fix the sentence so that it now reads, “The businessman benefitted from the tax law.” But people will less often think sentences containing an extra word, such as “The tax law benefitted from the businessman,” are incorrect, implausible as they may seem.
Another strategy people use, the researchers found, is that when presented with an increasing proportion of seemingly nonsensical sentences, they actually infer lower amounts of “noise” in the language. That means people adapt when processing language: If every sentence in a longer sequence seems silly, people are reluctant to think all the statements must be wrong, and hunt for a meaning in those sentences. By contrast, they perceive greater amounts of noise when only the occasional sentence seems obviously wrong, because the mistakes so clearly stand out.
“People seem to be taking into account statistical information about the input that they’re receiving to figure out what kinds of mistakes are most likely in different environments,” Bergen says.
Reverse-engineering the message
Other scholars say the work helps illuminate the strategies people may use when they interpret language.
“I’m excited about the paper,” says Roger Levy, a professor of linguistics at the University of California at San Diego who has done his own studies in the area of noise and language.
According to Levy, the paper posits “an elegant set of principles” explaining how humans edit the language they receive. “People are trying to reverse-engineer what the message is, to make sense of what they’ve heard or read,” Levy says.
“Our sentence-comprehension mechanism is always involved in error correction, and most of the time we don’t even notice it,” he adds. “Otherwise, we wouldn’t be able to operate effectively in the world. We’d get messed up every time anybody makes a mistake.”

Filed under language speech speech perception language processing linguistics psychology neuroscience science

203 notes

Study shows human brain able to discriminate syllables three months prior to birth
A team of French researchers has discovered that the human brain is capable of distinguishing between different types of syllables as early as three months prior to full term birth. As they describe in their paper published in the Proceedings of the National Academy of Sciences, the team found via brain scans that babies born up to three months premature are capable of some language processing.
Many studies have been conducted on full term babies to try to understand the degree of mental capabilities at birth. Results from such studies have shown that babies are able to distinguish their mother’s voice from others, for example, and can even recognize the elements of short stories. Still puzzling however, is whether some of what newborns are able to demonstrate is innate, or learned immediately after birth. To learn more, the researchers enlisted the assistance of several parents of premature babies and their offspring. Babies born as early as 28 weeks (full term is 37 weeks) had their brains scanned using bedside functional optical imaging, while sounds (soft voices) were played for them.
Three months prior to full term, the team notes, neurons in the brain are still migrating to their final destinations, initial connections between the upper brain regions are still forming, and the neural linkages between the ears and the brain are still being created. All of this indicates a brain that is still very much in flux, still becoming the phenomenally complicated organ humans are known for, which would seem to suggest that very limited, if any, communication skills would have developed.
The researchers found, however, that even at a time when the brain has not fully developed, the premature infants were able to tell the difference between female and male voices, and to distinguish between the syllables “ba” and “ga”. They also noted that the infants used the same parts of the brain to process sounds as adults do. This, the researchers conclude, shows that linguistic connections in the brain develop before birth and therefore do not need to be acquired afterwards, suggesting that at least some abilities are innate.

Filed under infants premature babies language language processing brain neuroscience psychology science

68 notes

Children with auditory processing disorder may now have more treatment options
Several Kansas State University faculty members are helping children with auditory processing disorder receive better treatment.
Debra Burnett, assistant professor of family studies and human services and a licensed speech-language pathologist, started the Enhancing Auditory Responses to Speech Stimuli, or EARSS, program. The Kansas State University Speech and Hearing Center offers the program, which uses evidence-based practices to treat auditory processing disorder.
Other Kansas State University faculty members involved in the program include Melanie Hilgers, clinic director and instructor in family studies and human services, and Robert Garcia, audiologist and program director for communication sciences and disorders. Several graduate students also are involved.
Auditory processing disorder affects how the brain processes language. Children and adults with auditory processing disorder have normal hearing sensitivity and will pass a hearing test, but their brains do not appropriately process what they hear.
"A lot of therapy targets these skills," Burnett said. "It’s almost like re-laying the road in the brain that deals with auditory information. For whatever reason, it didn’t develop properly, so the therapy is about reworking these skills."
Burnett and collaborators started the program after attending a conference for the Kansas State Speech-Language-Hearing Association. The conference included a workshop on ways to incorporate speech-language pathologists into therapy for auditory processing disorder.
"In the past, it has kind of been in the domain of the audiologist to do all of the testing and all of the therapy," Burnett said. "Speech-language pathologists have been involved in some augmentative therapy, but not in the core therapy. That is all starting to change."
Last summer Burnett and her colleagues decided to start a Kansas State University therapy program that involves speech-language pathologists. Seven children were involved in the program during the summer, two during the fall semester, and one has continued the program during the spring semester. The children have all been diagnosed with auditory processing disorder, ranged in age from 8 to 14, and were from north-central Kansas.
Before children begin the program, Burnett performs a pretest to determine their needs and the best way to approach therapy with them. A graduate student clinician, supervised by a licensed speech-language pathologist, meets with the children one hour per week to participate in activities that improve their auditory processing skills. Some of the activities include:
Phonemic training to address the brain’s ability to process speech sounds.
Words in Noise training to address the brain’s ability to process speech with background noise.
Phonemic synthesis training to address the brain’s ability to process speech sounds across words.
At the end of the program, Burnett performs a posttest to identify changes. The researchers have seen positive results so far: All of the children who participated in the posttest showed improvements in the treated areas. In the areas that the researchers did not treat, the children showed no change but also did not get worse.
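The pretest/posttest logic is straightforward to express. A sketch with invented scores (the program's real outcome measures are standardized clinical tests):

```python
# Hypothetical pretest and posttest scores for one treated skill area.
pretest = [52, 60, 48, 55, 63, 50, 58]
posttest = [61, 70, 59, 64, 71, 58, 66]

gains = [post - pre for pre, post in zip(pretest, posttest)]
all_improved = all(g > 0 for g in gains)
mean_gain = sum(gains) / len(gains)
print(f"all improved: {all_improved}, mean gain: {mean_gain:.1f} points")
```

An untreated skill area would serve as the comparison: gains concentrated in treated areas, with untreated areas flat, is the pattern the researchers describe.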
"Based on these results, our program is showing early signs of being effective," Burnett said.

Children with auditory processing disorder may now have more treatment options

Several Kansas State University faculty members are helping children with auditory processing disorder receive better treatment.

Debra Burnett, assistant professor of family studies and human services and a licensed speech-language pathologist, started the Enhancing Auditory Responses to Speech Stimuli, or EARSS, program. The Kansas State University Speech and Hearing Center offers the program, which uses evidence-based practices to treat auditory processing disorder.

Other Kansas State University faculty members involved in the program include Melanie Hilgers, clinic director and instructor in family studies and human services, and Robert Garcia, audiologist and program director for communication sciences and disorders. Several graduate students also are involved.

Auditory processing disorder affects how the brain processes language. Children and adults with auditory processing disorder have normal hearing sensitivity and will pass a hearing test, but their brains do not appropriately process what they hear.

"A lot of therapy targets these skills," Burnett said. "It’s almost like re-laying the road in the brain that deals with auditory information. For whatever reason, it didn’t develop properly, so the therapy is about reworking these skills."

Burnett and collaborators started the program after attending a conference for the Kansas State Speech-Language-Hearing Association. The conference included a workshop on ways to incorporate speech-language pathologists into therapy for auditory processing disorder.

"In the past, it has kind of been in the domain of the audiologist to do all of the testing and all of the therapy," Burnett said. "Speech-language pathologists have been involved in some augmentative therapy, but not in the core therapy. That is all starting to change."

Last summer Burnett and her colleagues decided to start a Kansas State University therapy program that involves speech-language pathologists. Seven children participated in the program during the summer, two participated during the fall semester and one child has continued during the spring semester. The children, all diagnosed with auditory processing disorder, ranged in age from 8 to 14 years old and were from north-central Kansas.

Before children begin the program, Burnett performs a pretest to determine their needs and the best way to approach therapy with them. A graduate student clinician, supervised by a licensed speech-language pathologist, meets with the children one hour per week to participate in activities that improve their auditory processing skills. Some of the activities include:

  • Phonemic training to address the brain’s ability to process speech sounds.
  • Words in Noise training to address the brain’s ability to process speech with background noise.
  • Phonemic synthesis training to address the brain’s ability to process speech sounds across words.

At the end of the program, Burnett performs a posttest to identify changes. The researchers have seen positive results so far: All of the children who participated in the posttest showed improvements in the treated areas. In the areas that the researchers did not treat, the children showed no change but also did not get worse.

"Based on these results, our program is showing early signs of being effective," Burnett said.

Filed under auditory processing disorder EARSS program hearing language processing neuroscience science

53 notes

“Simplified” brain lets the iCub robot learn language
The iCub humanoid robot, on which the team directed by Peter Ford Dominey, CNRS Director of Research at Inserm Unit 846, the “Institut pour les cellules souches et cerveau de Lyon” [Lyon Institute for Stem Cell and Brain Research] (Inserm, CNRS, Université Claude Bernard Lyon 1), has been working for many years, can now understand what is being said to it and even anticipate the end of a sentence. This feat was made possible by the development of a “simplified artificial brain” that reproduces certain so-called “recurrent” connections observed in the human brain. The artificial brain system enables the robot to learn, and subsequently understand, new sentences containing a new grammatical structure. It can link two sentences together and even predict how a sentence will end before it is uttered. This research has been published in the journal PLOS ONE.
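The key idea behind such "recurrent" connections is that the network's internal state carries a memory of the words seen so far, so a trained readout can guess what comes next. As a rough illustration only, here is a minimal reservoir-style sketch in Python with NumPy: a fixed random recurrent network processes toy sentences word by word, and only a linear readout is trained to predict the next word. The corpus, network size and training details are invented for this sketch and are not the published iCub model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus: each sentence is a list of word tokens (illustrative only).
sentences = [
    "the robot grasps the ball".split(),
    "the robot pushes the box".split(),
    "the ball hits the box".split(),
]
vocab = sorted({w for s in sentences for w in s})
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

def one_hot(w):
    v = np.zeros(V)
    v[idx[w]] = 1.0
    return v

# Fixed random recurrent "reservoir": its weights are never trained.
N = 100
W_in = rng.uniform(-1, 1, (N, V))
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep dynamics stable

def run(words):
    """Feed a word sequence through the reservoir; return the state after each word."""
    x = np.zeros(N)
    states = []
    for w in words:
        x = np.tanh(W_in @ one_hot(w) + W @ x)
        states.append(x.copy())
    return states

# Build (reservoir state, next word) training pairs from the corpus.
X, Y = [], []
for s in sentences:
    for state, nxt in zip(run(s[:-1]), s[1:]):
        X.append(state)
        Y.append(one_hot(nxt))
X, Y = np.array(X), np.array(Y)

# Train only the linear readout, via closed-form ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ Y)

def predict_next(words):
    """Predict the most likely next word given a partial sentence."""
    state = run(words)[-1]
    return vocab[int(np.argmax(state @ W_out))]
```

Training only the readout while leaving the recurrent weights fixed is the hallmark of reservoir computing, one family of recurrent models of this kind; because the reservoir state encodes the whole word history, the readout can complete a sentence it has seen before, e.g. `predict_next("the robot grasps the".split())`.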

Filed under robots robotics humanoids iCub language language processing neural networks ANN neuroscience science
