Neuroscience

Articles and news from the latest research reports.

Posts tagged language

Look at What I’m Saying
University of Utah Engineers Show Brain Depends on Vision to Hear 
University of Utah bioengineers discovered our understanding of language may depend more heavily on vision than previously thought: under the right conditions, what you see can override what you hear. These findings suggest artificial hearing devices and speech-recognition software could benefit from a camera, not just a microphone.
“For the first time, we were able to link the auditory signal in the brain to what a person said they heard when what they actually heard was something different. We found vision is influencing the hearing part of the brain to change your perception of reality – and you can’t turn off the illusion,” says the new study’s first author, Elliot Smith, a bioengineering and neuroscience graduate student at the University of Utah. “People think there is this tight coupling between physical phenomena in the world around us and what we experience subjectively, and that is not the case.”
The brain considers both sight and sound when processing speech. However, if the two are slightly different, visual cues dominate sound. This phenomenon is named the McGurk effect for Scottish cognitive psychologist Harry McGurk, who pioneered studies on the link between hearing and vision in speech perception in the 1970s. The McGurk effect has been observed for decades. However, its origin has been elusive.
In the new study, which appears today in the journal PLOS ONE, the University of Utah team pinpointed the source of the McGurk effect by recording and analyzing brain signals in the temporal cortex, the region of the brain that typically processes sound.
Working with University of Utah bioengineer Bradley Greger and neurosurgeon Paul House, Smith recorded electrical signals from the brain surfaces of four severely epileptic adults (two male, two female) from Utah and Idaho, all volunteers who were already undergoing surgery to treat their epilepsy. House placed three button-sized electrodes on the left, right or both brain hemispheres of each test subject, depending on where each patient’s seizures were thought to originate.
These four test subjects were then asked to watch and listen to videos focused on a person’s mouth as it mouthed the syllables “ba,” “va,” “ga” and “tha.” Depending on which of three different videos was being watched, the patients had one of three possible experiences:
— The motion of the mouth matched the sound. For example, the video showed “ba” and the audio sound also was “ba,” so the patients saw and heard “ba.”
— The motion of the mouth obviously did not match the corresponding sound, like a badly dubbed movie. For example, the video showed “ga” but the audio was “tha,” so the patients perceived this disconnect and correctly heard “tha.”
— The motion of the mouth only was mismatched slightly with the corresponding sound. For example, the video showed “ba” but the audio was “va,” and patients heard “ba” even though the sound really was “va.” This demonstrates the McGurk effect – vision overriding hearing.
By measuring the electrical signals in the brain while each video was being watched, Smith and Greger could pinpoint whether auditory or visual brain signals were being used to identify the syllable in each video. When the syllable being mouthed matched the sound, or clearly did not match it, brain activity increased in correlation with the sound being played. However, when the McGurk effect video was viewed, the activity pattern changed to resemble what the person saw, not what they heard. Statistical analyses confirmed the effect in all test subjects.
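The comparison described above can be caricatured in a few lines of code. This is a hypothetical sketch, not the study’s actual pipeline: it assumes each trial’s neural response can be summarized as a feature vector and compared, via correlation, against canonical “auditory” and “visual” response templates, all of which are invented here.

```python
# Hypothetical sketch (not the authors' code): decide whether a trial's
# neural response looks more like the response evoked by the heard syllable
# or by the seen syllable. All vectors below are invented toy numbers.

def correlate(a, b):
    """Pearson correlation of two equal-length lists of floats."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    sd_a = sum((x - mean_a) ** 2 for x in a) ** 0.5
    sd_b = sum((y - mean_b) ** 2 for y in b) ** 0.5
    return cov / (sd_a * sd_b)

def classify_percept(response, auditory_template, visual_template):
    """Label a trial 'auditory' or 'visual' by the template it matches better."""
    if correlate(response, auditory_template) >= correlate(response, visual_template):
        return "auditory"
    return "visual"

# Toy incongruent (McGurk-style) trial: activity resembles the *seen* syllable.
aud_template = [1.0, 0.2, 0.1, 0.4]   # toy canonical response to hearing "va"
vis_template = [0.1, 0.9, 0.8, 0.2]   # toy canonical response to seeing "ba"
trial = [0.2, 0.8, 0.7, 0.3]
print(classify_percept(trial, aud_template, vis_template))  # prints "visual"
```

A trial whose activity correlates more strongly with the visual template is labeled “visual,” echoing the finding that activity on McGurk trials shifted toward what was seen.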
“We’ve shown neural signals in the brain that should be driven by sound are being overridden by visual cues that say, ‘Hear this!’” says Greger. “Your brain is essentially ignoring the physics of sound in the ear and following what’s happening through your vision.”
Greger was senior author of the study as an assistant professor of bioengineering at the University of Utah. He recently took a faculty position at Arizona State University.
The new findings could help researchers understand what drives language processing in humans, especially in a developing infant brain trying to connect sounds and lip movement to learn language. These findings also may help researchers sort out how language processing goes wrong when visual and auditory inputs are not integrated correctly, such as in dyslexia, Greger says.

Filed under McGurk effect auditory cortex language language processing neuroscience science

Primate calls, like human speech, can help infants form categories
Human infants’ responses to the vocalizations of non-human primates shed light on the developmental origin of a crucial link between human language and core cognitive capacities, a new study reports.
Previous studies have shown that even in infants too young to speak, listening to human speech supports core cognitive processes, including the formation of object categories.
Alissa Ferry, lead author and currently a postdoctoral fellow in the Language, Cognition and Development Lab at the Scuola Internazionale Superiore di Studi Avanzati in Trieste, Italy, together with Northwestern University colleagues, documented that this link is initially broad enough to include the vocalizations of non-human primates.
"We found that for 3- and 4-month-old infants, non-human primate vocalizations promoted object categorization, mirroring exactly the effects of human speech, but that by six months, non-human primate vocalizations no longer had this effect — the link to cognition had been tuned specifically to human language," Ferry said.
In humans, language is the primary conduit for conveying our thoughts. The new findings document that for young infants, listening to the vocalizations of humans and non-human primates supports the fundamental cognitive process of categorization. From this broad beginning, the infant mind identifies which signals are part of their language and begins to systematically link these signals to meaning.
Furthermore, the researchers found that infants’ response to non-human primate vocalizations at three and four months was not just due to the sounds’ acoustic complexity, as infants who heard backward human speech segments failed to form object categories at any age.
Susan Hespos, co-author and associate professor of psychology at Northwestern said, “For me, the most stunning aspect of these findings is that an unfamiliar sound like a lemur call confers precisely the same effect as human language for 3- and 4-month-old infants. More broadly, this finding implies that the origins of the link between language and categorization cannot be derived from learning alone.”
"These results reveal that the link between language and object categories, evident as early as three months, derives from a broader template that initially encompasses vocalizations of human and non-human primates and is rapidly tuned specifically to human vocalizations," said Sandra Waxman, co-author and Louis W. Menk Professor of Psychology at Northwestern.
Waxman said these new results open the door to new research questions.
"Is this link sufficiently broad to include vocalizations beyond those of our closest genealogical cousins," asks Waxman, "or is it restricted to primates, whose vocalizations may be perceptually just close enough to our own to serve as early candidates for the platform on which human language is launched?"
(Image: Corbis)

Filed under primates vocalizations language categorization psychology neuroscience science

Fetus in womb learns language cues before birth, study finds 
Watch your mouth around your unborn child – he or she could be listening in. Babies can pick up language skills while they’re still in the womb, Finnish researchers say.

Fetuses exposed to fake words after week 29 in utero were able to distinguish them after being born, according to new research in the Proceedings of the National Academy of Sciences.
"Prenatal experiences have a remarkable influence on the brain’s auditory discrimination accuracy, which may support, for example, language acquisition during infancy," the authors wrote in their study. 
As revealed by the allure of the so-called Mozart Effect – the idea that exposing the fetus to classical music earns kids extra IQ points in spatial reasoning down the line – parents are constantly looking for ways to give their children an intelligence advantage.
That’s true even if the research behind those parenting tactics is too narrow to support such broad conclusions, or remains in question (the Mozart Effect, for example, was dismissed as “crap” by one scientist).
Nonetheless, scientists have discovered plenty of evidence that what’s heard in utero can make a lasting impression. Fetuses respond differently to native and nonnative vowels, and newborns cry with their native language prosody (a combination of rhythm, stress and intonation). Researchers led by Eino Partanen at the University of Helsinki wanted to see what other language cues a fetus might pick up in the womb.
For the experiment, Finnish mothers were asked to play a CD with a pair of four-minute tracks of music punctuated by a fake word: tatata. On occasion the recording changed the vowel – tatota – and in other instances it shifted the pitch, making the middle syllable of tatata 8% or 15% higher or lower. The fake word and its variants appeared hundreds of times as the tracks played, and the mothers were asked to play the CD five to seven times per week.
Then, after several weeks of exposure to the fake word, the researchers had to determine whether all this in-utero training had somehow stuck.
The researchers were relying on a phenomenon called mismatch response: a flash of neural activity when the brain picks up on something off, something not quite right – such as when the word tatata is suddenly tatota. If that flash goes off, it means that something doesn’t make sense compared to what the brain has already learned.
The scientists figured that if the flash went off the first time the infants heard the modified words (tatota or the pitch-shifted tatata) after being born, it would mean that they’d been paying attention while in the womb.
They tested the mismatch response once the babies were born by attaching electrodes and studying their brain activity.
Sure enough, the newborns that had been trained in the womb had a response roughly four times stronger to the pitch change (tatata with a pitch-shifted middle syllable) than untrained newborns. (Both trained and untrained babies picked up the vowel distinction between tatata and tatota.)
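The mismatch-response comparison can be illustrated with a toy calculation. This is an invented sketch under simplified assumptions, not the study’s analysis: single-trial responses are short made-up vectors, and the mismatch amplitude is taken as the peak difference between the averaged responses to the standard and deviant stimuli.

```python
# Invented illustration of a mismatch-response measure: average the single-trial
# responses to the frequent "standard" (tatata) and to the rare "deviant"
# (e.g., a pitch-shifted middle syllable), then take their peak difference.
# Numbers are made up; the real study recorded EEG from newborns.

def average_response(trials):
    """Point-by-point average of equal-length single-trial responses."""
    n = len(trials)
    return [sum(samples) / n for samples in zip(*trials)]

def mismatch_amplitude(standard_trials, deviant_trials):
    """Peak absolute difference between the deviant and standard averages."""
    standard = average_response(standard_trials)
    deviant = average_response(deviant_trials)
    return max(abs(d - s) for s, d in zip(standard, deviant))

standards = [[0.0, 0.1, 0.1, 0.0], [0.0, 0.1, 0.2, 0.0]]
deviants = [[0.0, 0.5, 0.6, 0.1], [0.0, 0.6, 0.5, 0.1]]
print(round(mismatch_amplitude(standards, deviants), 2))  # prints 0.45
```

A larger amplitude for trained newborns than untrained ones, computed this way, is the kind of difference the researchers reported.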
The findings could mean it’s possible to give babies a little language leg-up before they ever say a word — particularly the children who may need it most.
"It might be possible to support early auditory development and potentially compensate for difficulties of genetic nature, such as language impairment or dyslexia," the authors wrote.
But, the scientists point out, it could mean that babies are also vulnerable to harmful acoustic effects – “abnormal, unstructured, and novel sound stimulation” – an idea that will also require further study. Until then, perhaps it’s best not to hang around any noisy construction sites while pregnant.

Filed under language language acquisition brain activity fetus womb neuroscience science

Striking Patterns: Skill for Forming Tools and Words Evolved Together

When did humans start talking? There are nearly as many answers to this perplexing question as there are researchers studying it. A new brain imaging study claims to support the hypothesis that language emerged long before Homo sapiens and coevolved with the invention of the first finely made stone tools nearly 2 million years ago. However, some experts think it’s premature to draw sweeping conclusions.
Unlike ancient bones and stone tools, language does not fossilize. Researchers have to guess about its origins based on proxy indicators. Does painting cave walls indicate the capacity for language? How about the ability to make a fancy tool? Yet, in recent years, scientists have made some progress. A series of brain imaging studies by Dietrich Stout, an archaeologist at Emory University in Atlanta, and Thierry Chaminade, a cognitive neuroscientist at Aix-Marseille University in France, have shown that toolmaking and language use similar parts of the brain, including regions involved in manual manipulations and speech production. Moreover, the overlap is greater the more sophisticated the toolmaking techniques are. Thus, there was little overlap when modern-day flint knappers were making stone tools using the oldest known techniques, dated to 2.5 million years ago and called the Oldowan technology. But when knappers used a more sophisticated approach, called Acheulean technology and dating to as much as 1.75 million years ago, the parallels between toolmaking and language were more evident. Stout and Chaminade have used functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) scans, although not on the same subjects at the same time.
In the new work, published online today in PLOS ONE, archaeologist Natalie Uomini and experimental psychologist Georg Meyer, both at the University of Liverpool in the United Kingdom, attempted to advance these earlier studies in several ways. They applied a technique called functional transcranial Doppler ultrasonography (fTCD), which measures blood flow to the brain’s cerebral cortex and which—unlike fMRI and PET—is highly portable and can be used on subjects in the field through a device attached to their heads (see video). The fTCD approach makes it much easier to monitor subjects’ brains during vigorous activity, such as the somewhat violent motions that are required to make stone tools. Uomini and Meyer are also the first to study both toolmaking and language tasks in the same subjects.
The researchers recruited 10 expert flint knappers and gave them two different tasks. In the first, the knappers crafted an Acheulean hand ax, a symmetrical tool that requires considerable planning and skill. The procedure involves shaping a flint core with another stone called a hammerstone. While wearing the fTCD monitor, the knappers worked on the tool for periods of about 30 seconds each, interspersed with control periods of about 20 seconds in which they simply struck the core with the hammerstone without trying to make a tool.
In the second task, the knappers were asked to silently think up words beginning with a given letter. The control periods consisted of simply resting quietly and not thinking of words.
The team found that the pattern of blood flow changes in the brain during the critical first 10 seconds of each experimental period—when the knappers were strategizing about how to shape the core or thinking up their first words—was very similar across the two tasks, again involving areas of the brain implicated in manual manipulations and language. Moreover, although there were some variations in the patterns between the 10 knappers, the toolmaking and language patterns within each individual were very closely aligned—suggesting, the team concludes, that the same brain areas are recruited in both tasks.
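The within-subject comparison can be sketched as a simple correlation between two blood-flow profiles per knapper. This is an illustrative toy with invented numbers, not the authors’ fTCD analysis.

```python
# Illustrative toy, not the authors' fTCD analysis: for each knapper, correlate
# the blood-flow profile recorded while planning the hand ax with the profile
# recorded during silent word generation. All numbers are invented.

def pearson(a, b):
    """Pearson correlation of two equal-length lists of floats."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    sd_a = sum((x - mean_a) ** 2 for x in a) ** 0.5
    sd_b = sum((y - mean_b) ** 2 for y in b) ** 0.5
    return cov / (sd_a * sd_b)

# (toolmaking profile, word-generation profile) per subject, invented values
subjects = {
    "knapper_1": ([0.2, 0.5, 0.9, 0.4], [0.1, 0.6, 0.8, 0.5]),
    "knapper_2": ([0.7, 0.3, 0.2, 0.6], [0.8, 0.2, 0.3, 0.5]),
}

for name, (toolmaking, word_task) in subjects.items():
    print(f"{name}: r = {pearson(toolmaking, word_task):.2f}")
```

A tight within-subject alignment, like the high positive correlations this toy produces, is the kind of pattern the authors read as the same brain areas serving both tasks.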
The results, Uomini and Meyer argue, support earlier hypotheses that language and toolmaking coevolved, perhaps beginning as early as 1.75 million years ago. This doesn’t necessarily mean that early humans were talking in the same rapid-fire way that we do today, Uomini points out, but that “the circuits for both activities were there early on.”
Stout calls the new study “exciting work” that provides “one more piece of evidence supporting a link between stone-tool making and language evolution.” Yet a number of questions remain, he says, such as whether the correlation is between the motor skills involved in making tools and in making the sounds of speech, or whether toolmaking and language share higher cognitive functions such as those used in symbolic behavior.
That question is critical, some researchers say, because the knappers in this study and the ones that Stout conducted probably used a technique known as the Late Acheulean, dating from about 500,000 years ago, which put a much greater emphasis on symmetry and aesthetic considerations than did the earliest Acheulean, dating from 1.75 million years ago. “There is an enormous difference” between these varieties of Acheulean toolmaking, says Michael Petraglia, an archaeologist at the University of Oxford in the United Kingdom, who adds that “future experimental studies should thus examine the range of techniques and methods used.”
Thus the new work is “consistent with the hypothesis” of coevolution between language and toolmaking, “but not proof of it,” says Michael Corballis, a psychologist at the University of Auckland in New Zealand. “It is possible that language itself emerged much later, but was built on circuits established during the Acheulean” period.
Thomas Wynn, an archaeologist at the University of Colorado, Colorado Springs, is even more cautious about the results. He thinks that the fTCD technique, which measures blood flow to large areas of the cerebral cortex but does not have as high a resolution as fMRI or PET, “is a crude measure, even for brain imaging techniques.” As a result, Wynn says, he is “far from convinced” that the study has anything new to say about language evolution.

Striking Patterns: Skill for Forming Tools and Words Evolved Together

When did humans start talking? There are nearly as many answers to this perplexing question as there are researchers studying it. A new brain imaging study claims to support the hypothesis that language emerged long before Homo sapiens and coevolved with the invention of the first finely made stone tools nearly 2 million years ago. However, some experts think it’s premature to draw sweeping conclusions.

Unlike ancient bones and stone tools, language does not fossilize. Researchers have to guess about its origins based on proxy indicators. Does painting cave walls indicate the capacity for language? How about the ability to make a fancy tool? Yet, in recent years, scientists have made some progress. A series of brain imaging studies by Dietrich Stout, an archaeologist at Emory University in Atlanta, and Thierry Chaminade, a cognitive neuroscientist at Aix-Marseille University in France, have shown that toolmaking and language use similar parts of the brain, including regions involved in manual manipulations and speech production. Moreover, the overlap is greater the more sophisticated the toolmaking techniques are. Thus, there was little overlap when modern-day flint knappers were making stone tools using the oldest known techniques, dated to 2.5 million years ago and called the Oldowan technology. But when knappers used a more sophisticated approach, called Acheulean technology and dating to as much as 1.75 million years ago, the parallels between toolmaking and language were more evident. Stout and Chaminade have used functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) scans, although not on the same subjects at the same time.

In the new work, published online today in PLOS ONE, archaeologist Natalie Uomini and experimental psychologist Georg Meyer, both at the University of Liverpool in the United Kingdom, attempted to advance these earlier studies in several ways. They applied a technique called functional transcranial Doppler ultrasonography (fTCD), which measures blood flow to the brain’s cerebral cortex and which—unlike fMRI and PET—is highly portable and can be used on subjects in the field through a device attached to their heads (see video). The fTCD approach makes it much easier to monitor subjects’ brains during vigorous activity, such as the somewhat violent motions that are required to make stone tools. Uomini and Meyer are also the first to study both toolmaking and language tasks in the same subjects.

The researchers recruited 10 expert flint knappers and gave them two different tasks. In the first, the knappers crafted an Acheulean hand ax, a symmetrical tool that requires considerable planning and skill. The procedure involves shaping a flint core with another stone called a hammerstone. While wearing the fTCD monitor, the knappers worked on the tool for periods of about 30 seconds each, interspersed with control periods of about 20 seconds in which they simply struck the core with the hammerstone without trying to make a tool.

In the second task, the knappers were asked to silently think up words beginning with a given letter. The control periods consisted of simply resting quietly and not thinking of words.

The team found that the pattern of blood flow changes in the brain during the critical first 10 seconds of each experimental period—when the knappers were strategizing about how to shape the core or thinking up their first words—was very similar across the two tasks, again involving areas of the brain implicated in manual manipulations and language. Moreover, although there were some variations in the patterns between the 10 knappers, the toolmaking and language patterns within each individual were very closely aligned—suggesting, the team concludes, that the same brain areas are recruited in both tasks.
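The within-subject comparison boils down to asking how correlated two blood-flow patterns are for the same person. As an illustration only—the series below are invented, not the study's data—a plain Pearson correlation over two hypothetical lateralization traces from one knapper's first 10 seconds of each task captures the idea:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical left-minus-right blood-flow differences, sampled once per
# second over the first 10 s of each task, for a single subject.
toolmaking = [0.1, 0.4, 0.9, 1.3, 1.5, 1.4, 1.2, 0.9, 0.6, 0.4]
word_task  = [0.0, 0.3, 0.8, 1.2, 1.6, 1.5, 1.1, 0.8, 0.5, 0.3]

r = pearson(toolmaking, word_task)  # close to 1 for these made-up series
```

A high correlation within an individual, repeated across subjects despite between-subject variation, is the pattern the authors read as shared brain circuitry.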

The results, Uomini and Meyer argue, support earlier hypotheses that language and toolmaking coevolved, perhaps beginning as early as 1.75 million years ago. This doesn’t necessarily mean that early humans were talking in the same rapid-fire way that we do today, Uomini points out, but that “the circuits for both activities were there early on.”

Stout calls the new study “exciting work” that provides “one more piece of evidence supporting a link between stone-tool making and language evolution.” Yet a number of questions remain, he says, such as whether the correlation is between the motor skills involved in making tools and in making the sounds of speech, or whether toolmaking and language share higher cognitive functions such as those used in symbolic behavior.

That question is critical, some researchers say, because the knappers in this study and the ones that Stout conducted probably used a technique known as the Late Acheulean, dating from about 500,000 years ago, which put a much greater emphasis on symmetry and aesthetic considerations than did the earliest Acheulean, dating from 1.75 million years ago. “There is an enormous difference” between these varieties of Acheulean toolmaking, says Michael Petraglia, an archaeologist at the University of Oxford in the United Kingdom, who adds that “future experimental studies should thus examine the range of techniques and methods used.”

Thus the new work is “consistent with the hypothesis” of coevolution between language and toolmaking, “but not proof of it,” says Michael Corballis, a psychologist at the University of Auckland in New Zealand. “It is possible that language itself emerged much later, but was built on circuits established during the Acheulean” period.

Thomas Wynn, an archaeologist at the University of Colorado, Colorado Springs, is even more cautious about the results. He thinks that the fTCD technique, which measures blood flow to large areas of the cerebral cortex but does not have as high a resolution as fMRI or PET, “is a crude measure, even for brain imaging techniques.” As a result, Wynn says, he is “far from convinced” that the study has anything new to say about language evolution.

Filed under language toolmaking tool use brain activity blood flow evolution neuroscience psychology science

335 notes

Learning a new language alters brain development

The age at which children learn a second language can have a significant bearing on the structure of their adult brain, according to a new joint study by the Montreal Neurological Institute and Hospital - The Neuro at McGill University and Oxford University. The majority of people in the world learn to speak more than one language during their lifetime. Many do so with great proficiency, particularly if the languages are learned simultaneously or from early in development.


The study concludes that the pattern of brain development is similar if you learn one or two languages from birth. However, learning a second language later on in childhood after gaining proficiency in the first (native) language does in fact modify the brain’s structure, specifically the brain’s inferior frontal cortex. The left inferior frontal cortex became thicker and the right inferior frontal cortex became thinner. The cortex is a multi-layered mass of neurons that plays a major role in cognitive functions such as thought, language, consciousness and memory.

The study suggests that the task of acquiring a second language after infancy stimulates new neural growth and connections among neurons in ways seen in acquiring complex motor skills such as juggling. The study’s authors speculate that the difficulty that some people have in learning a second language later in life could be explained at the structural level.

“The later in childhood that the second language is acquired, the greater are the changes in the inferior frontal cortex,” said Dr. Denise Klein, researcher in The Neuro’s Cognitive Neuroscience Unit and a lead author on the paper published in the journal Brain and Language. “Our results provide structural evidence that age of acquisition is crucial in laying down the structure for language learning.”

Using a software program developed at The Neuro, the study examined MRI scans of 66 bilingual and 22 monolingual men and women living in Montreal. The work was supported by a grant from the Natural Sciences and Engineering Research Council of Canada and from an Oxford McGill Neuroscience Collaboration Pilot project.

(Source: mcgill.ca)

Filed under brain development language frontal cortex cognitive function neuroscience psychology science

178 notes

Language can reveal the invisible

It is natural to imagine that the sense of sight takes in the world as it is — simply passing on what the eyes collect from light reflected by the objects around us.

But the eyes do not work alone. What we see is a function not only of incoming visual information, but also of how that information is interpreted in light of other visual experiences, and it may even be influenced by language.

Words can play a powerful role in what we see, according to a study published this month by UW-Madison cognitive scientist and psychology professor Gary Lupyan, and Emily Ward, a Yale University graduate student, in the journal Proceedings of the National Academy of Sciences.

"Perceptual systems do the best they can with inherently ambiguous inputs by putting them in context of what we know, what we expect," Lupyan says. "Studies like this are helping us show that language is a powerful tool for shaping perceptual systems, acting as a top-down signal to perceptual processes. In the case of vision, what we consciously perceive seems to be deeply shaped by our knowledge and expectations."

And those expectations can be altered with a single word.

To show how deeply words can influence perception, Lupyan and Ward used a technique called continuous flash suppression to render a series of objects invisible for a group of volunteers.

Each person was shown a picture of a familiar object — such as a chair, a pumpkin or a kangaroo — in one eye. At the same time, their other eye saw a series of flashing, “squiggly” lines.

"Essentially, it’s visual noise," Lupyan says. "Because the noise patterns are high-contrast and constantly moving, they dominate, and the input from the other eye is suppressed."

Immediately before looking at the combination of the flashing lines and suppressed object, the study participants heard one of three things: the word for the suppressed object (“pumpkin,” when the object was a pumpkin), the word for a different object (“kangaroo,” when the object was actually a pumpkin), or just static.

Then researchers asked the participants to indicate whether they saw something or not. When the word they heard matched the object that was being wiped out by the visual noise, the subjects were more likely to report that they did indeed see something than in cases where the wrong word or no word at all was paired with the image.

"Hearing the word for the object that was being suppressed boosted that object into their vision," Lupyan says.

And hearing an unmatched word actually hurt study subjects’ chances of seeing an object.

"With the label, you’re expecting pumpkin-shaped things," Lupyan says. "When you get a visual input consistent with that expectation, it boosts it into perception. When you get an incorrect label, it further suppresses that."
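The analysis behind these claims is, at heart, a comparison of detection rates across the three cue conditions. A minimal sketch, using invented counts chosen only to mirror the qualitative ordering the study reports (matching word above the no-word baseline, mismatching word below it):

```python
# Hypothetical trial counts per cue condition (illustrative, not the
# study's data): (times participant reported seeing something, trials).
trials = {
    "matching word":    (62, 100),
    "static":           (48, 100),
    "mismatching word": (35, 100),
}

rates = {cond: seen / total for cond, (seen, total) in trials.items()}

# The reported pattern: a matching label boosts detection above the
# no-word baseline, while a mismatching label suppresses it further.
assert rates["matching word"] > rates["static"] > rates["mismatching word"]
```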

Experiments have shown that continuous flash suppression interrupts sight so thoroughly that there are no signals in the brain to suggest the invisible objects are perceived, even implicitly.

"Unless they can tell us they saw it, there’s nothing to suggest the brain was taking it in at all," Lupyan says. "If language affects performance on a test like this, it indicates that language is influencing vision at a pretty early stage. It’s getting really deep into the visual system."

The study demonstrates a deeper connection between language and simple sensory perception than previously thought, and one that makes Lupyan wonder about the extent of language’s power. The influence of language may extend to other senses as well.

"A lot of previous work has focused on vision, and we have neglected to examine the role of knowledge and expectations on other modalities, especially smell and taste," Lupyan says. "What I want to see is whether we can really alter threshold abilities," he says. "Does expecting a particular taste for example, allow you to detect a substance at a lower concentration?"

If you’re drinking a glass of milk, but thinking about orange juice, he says, that may change the way you experience the milk.

"There’s no point in figuring out what some objective taste is," Lupyan says. "What’s important is whether the milk is spoiled or not. If you expect it to be orange juice, and it tastes like orange juice, it’s fine. But if you expected it to be milk, you’d think something was wrong."

(Source: news.wisc.edu)

Filed under language visual representations perception continuous flash suppression neuroscience science

181 notes

Breastfeeding Duration Appears Associated with Intelligence Later in Life
Breastfeeding longer is associated with better receptive language at 3 years of age and verbal and nonverbal intelligence at age 7 years, according to a study published by JAMA Pediatrics, a JAMA Network publication.
Evidence supports the relationship between breastfeeding and health benefits in infancy, but the extent to which breastfeeding leads to better cognitive development is less certain, according to the study background.
Mandy B. Belfort, M.D., M.P.H., of Boston Children’s Hospital, and colleagues examined the relationships of breastfeeding duration and exclusivity with child cognition at ages 3 and 7 years. They also studied the extent to which maternal fish intake during lactation affected associations of infant feeding and later cognition. Researchers used assessment tests to measure cognition.
“Longer breastfeeding duration was associated with higher Peabody Picture Vocabulary Test score at age 3 years (0.21; 95% CI, 0.03-0.38 points per month breastfed) and with higher intelligence on the Kaufman Brief Intelligence Test at age 7 years (0.35; 0.16-0.53 verbal points per month breastfed; and 0.29; 0.05-0.54 nonverbal points per month breastfed),” according to the study results. However, the study also noted that breastfeeding duration was not associated with Wide Range Assessment of Memory and Learning scores.
As for fish intake (less than 2 servings per week vs. greater than or equal to 2 servings), the relationship between breastfeeding duration and the Wide Range Assessment of Visual Motor Abilities at 3 years of age appeared to be stronger in children of women with higher vs. lower fish intake, although this finding was not statistically significant, the results also indicate.
“In summary, our results support a causal relationship of breastfeeding in infancy with receptive language at age 3 and with verbal and nonverbal IQ at school age. These findings support national and international recommendations to promote exclusive breastfeeding through age 6 months and continuation of breastfeeding through at least age 1 year,” the authors conclude.
Breastfeeding and Cognition: Can IQ Tip the Scale?
In an editorial, Dimitri A. Christakis, M.D., M.P.H., of the Seattle Children’s Hospital Research Institute, writes: “The authors reported an IQ benefit at age 7 years from breastfeeding of 0.35 points per month on the verbal scale and 0.29 points per month on the nonverbal one. Put another way, breastfeeding an infant for the first year of life would be expected to increase his or her IQ by about four points or one-third of a standard deviation.”
“However, the problem currently is not so much that most women do not initiate breastfeeding, it is that they do not sustain it. In the United States about 70 percent of women overall initiate breastfeeding, although only 50 percent of African American women do. However, by six months, only 35 percent and 20 percent, respectively, are still breastfeeding,” Christakis continues.
“Furthermore, workplaces need to provide opportunities and spaces for mothers to use them. Fourth, breastfeeding in public should be destigmatized. Clever social media campaigns and high-quality public service announcements might help with that. As with lead, some of these actions may require legislative action either at the federal or state level. Let’s allow our children’s cognitive function be the force that tilts the scale, and let’s get on with it,” Christakis concludes.
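Christakis's back-of-the-envelope figure can be checked directly from the coefficients quoted above: at 0.35 verbal points per month breastfed, a full first year works out to roughly four points, about one-third of a 15-point IQ standard deviation. A sketch of the arithmetic:

```python
verbal_per_month = 0.35      # Kaufman verbal points per month breastfed
nonverbal_per_month = 0.29   # nonverbal points per month breastfed
months = 12                  # breastfeeding through the first year

verbal_gain = verbal_per_month * months        # about 4.2 points
nonverbal_gain = nonverbal_per_month * months  # about 3.5 points

# 4.2 / 15 is roughly 0.28 of a standard deviation, matching the
# editorial's "about four points or one-third of a standard deviation".
fraction_of_sd = verbal_gain / 15
```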

Filed under breastfeeding cognitive development intelligence cognition language neuroscience science

51 notes

Yes, You Can? A Speaker’s Potency to Act upon His Words Orchestrates Early Neural Responses to Message-Level Meaning 
Evidence is accruing that, in comprehending language, the human brain rapidly integrates a wealth of information sources–including the reader or hearer’s knowledge about the world and even his/her current mood. However, little is known to date about how language processing in the brain is affected by the hearer’s knowledge about the speaker. Here, we investigated the impact of social attributions to the speaker by measuring event-related brain potentials while participants watched videos of three speakers uttering true or false statements pertaining to politics or general knowledge: a top political decision maker (the German Federal Minister of Finance at the time of the experiment), a well-known media personality and an unidentifiable control speaker. False versus true statements engendered an N400 - late positivity response, with the N400 (150–450 ms) constituting the earliest observable response to message-level meaning. Crucially, however, the N400 was modulated by the combination of speaker and message: for false versus true political statements, an N400 effect was only observable for the politician, but not for either of the other two speakers; for false versus true general knowledge statements, an N400 was engendered by all three speakers. We interpret this result as demonstrating that the neurophysiological response to message-level meaning is immediately influenced by the social status of the speaker and whether he/she has the power to bring about the state of affairs described.

Filed under neural activity ERPs N400 effect language language comprehension psychology neuroscience science

206 notes

Scientists identify key to learning new words

For the first time scientists have identified how a pathway in the brain which is unique to humans allows us to learn new words.

The average adult’s vocabulary consists of about 30,000 words. This ability seems unique to humans: even chimpanzees, the species closest to us, manage to learn no more than 100.

It has long been believed that language learning depends on the integration of hearing and repeating words, but the neural mechanisms behind learning new words have remained unclear. Previous studies have shown that this may be related to a pathway in the brain found only in humans, and that humans can learn only words that they can articulate.

Now researchers from King’s College London Institute of Psychiatry, in collaboration with Bellvitge Biomedical Research Institute (IDIBELL) and the University of Barcelona, have mapped the neural pathways involved in word learning among humans. They found that the arcuate fasciculus, a collection of nerve fibres connecting auditory regions at the temporal lobe with the motor area located at the frontal lobe in the left hemisphere of the brain, allows the ‘sound’ of a word to be connected to the regions responsible for its articulation. Differences in the development of these auditory-motor connections may explain differences in people’s ability to learn words. 

The results of the study are published in the journal Proceedings of the National Academy of Sciences (PNAS).

Dr Marco Catani, co-author from the NatBrainLab at King’s College London Institute of Psychiatry said: “Often humans take their ability to learn words for granted. This research sheds new light on the unique ability of humans to learn a language, as this pathway is not present in other species. The implications of our findings could be wide ranging – from how language is taught in schools and rehabilitation from injury, to early detection of language disorders such as dyslexia. In addition these findings could have implications for other disorders where language is affected such as autism and schizophrenia.”

The study involved 27 healthy volunteers. Researchers used diffusion tensor imaging to image the structure of the brain before a word-learning task, and functional MRI to detect the regions of the brain that were most active during the task. They found a strong relationship between the ability to remember words and the structure of the arcuate fasciculus, which connects two brain areas: Wernicke’s area, related to auditory language decoding, and Broca’s area, which coordinates the movements associated with speech and language processing.

In participants who learned words more successfully, the arcuate fasciculus was more myelinated, i.e. the nervous tissue conducted the electrical signal faster. In addition, the activity between the two regions was more co-ordinated in these participants.

Dr Catani concludes, “Now we understand that this is how we learn new words, our concern is that children will have less vocabulary as much of their interaction is via screen, text and email rather than using their external prosthetic memory. This research reinforces the need for us to maintain the oral tradition of talking to our children.”

(Source: kcl.ac.uk)

Filed under language word learning arcuate fasciculus temporal lobe dyslexia diffusion tensor imaging neuroscience science

117 notes

Did Neandertals have language?
A recent study suggests that Neandertals shared speech and language with modern humans
Fast-accumulating data seem to indicate that our close cousins, the Neandertals, were much more similar to us than imagined even a decade ago. But did they have anything like modern speech and language? And if so, what are the implications for understanding present-day linguistic diversity? Researchers Dan Dediu and Stephen C. Levinson of the Max Planck Institute for Psycholinguistics in Nijmegen argue in their paper in Frontiers in Language Sciences that modern language and speech can be traced back to the last common ancestor we shared with the Neandertals roughly half a million years ago.
The Neandertals have fascinated both the academic world and the general public ever since their discovery almost 200 years ago. Initially thought to be subhuman brutes incapable of anything but the most primitive of grunts, they were a successful form of humanity inhabiting vast swathes of western Eurasia for several hundreds of thousands of years, through harsh glacial and milder interglacial periods. We knew that they were our closest cousins, sharing a common ancestor with us around half a million years ago (probably Homo heidelbergensis), but it was unclear what their cognitive capacities were like, or why modern humans succeeded in replacing them after thousands of years of cohabitation. Recently, due to new palaeoanthropological and archaeological discoveries and the reassessment of older data, but especially to the availability of ancient DNA, we have started to realise that their fate was much more intertwined with ours and that, far from being slow brutes, their cognitive capacities and culture were comparable to ours.
Dediu and Levinson review all these strands of literature and argue that essentially modern language and speech are an ancient feature of our lineage dating back at least to the most recent ancestor we shared with the Neandertals and the Denisovans (another form of humanity known mostly from their genome). Their interpretation of the intrinsically ambiguous and scant evidence goes against the scenario usually assumed by most language scientists, namely that of a sudden and recent emergence of modernity, presumably due to a single – or very few – genetic mutations. This pushes back the origins of modern language by a factor of 10 from the often-cited 50 or so thousand years, to around a million years ago – somewhere between the origins of our genus, Homo, some 1.8 million years ago, and the emergence of Homo heidelbergensis. This reassessment of the evidence goes against a saltationist scenario where a single catastrophic mutation in a single individual would suddenly give rise to language, and suggests that a gradual accumulation of biological and cultural innovations is much more plausible.
Interestingly, given that we know from the archaeological record and recent genetic data that the modern humans spreading out of Africa interacted both genetically and culturally with the Neandertals and Denisovans, then just as our bodies carry around some of their genes, maybe our languages preserve traces of their languages too. This would mean that at least some of the observed linguistic diversity is due to these ancient encounters, an idea testable by comparing the structural properties of the African and non-African languages, and by detailed computer simulations of language spread.

Filed under Neandertals evolution language modern language linguistics mitochondrial DNA science
