Neuroscience

Articles and news from the latest research reports.

Posts tagged linguistics

Try, try again? Study says no

When it comes to learning languages, adults and children have different strengths. Adults excel at absorbing the vocabulary needed to navigate a grocery store or order food in a restaurant, but children have an uncanny ability to pick up on subtle nuances of language that often elude adults. Within months of living in a foreign country, a young child may speak a second language like a native speaker.

Brain structure plays an important role in this “sensitive period” for learning language, which is believed to end around adolescence. The young brain is equipped with neural circuits that can analyze sounds and build a coherent set of rules for constructing words and sentences out of those sounds. Once these language structures are established, it’s difficult to build another one for a new language.

In a new study, a team of neuroscientists and psychologists led by Amy Finn, a postdoc at MIT’s McGovern Institute for Brain Research, has found evidence for another factor that contributes to adults’ language difficulties: When learning certain elements of language, adults’ more highly developed cognitive skills actually get in the way. The researchers discovered that the harder adults tried to learn an artificial language, the worse they were at deciphering the language’s morphology — the structure and deployment of linguistic units such as root words, suffixes, and prefixes.

“We found that effort helps you in most situations, for things like figuring out what the units of language that you need to know are, and basic ordering of elements. But when trying to learn morphology, at least in this artificial language we created, it’s actually worse when you try,” Finn says.

Finn and colleagues from the University of California at Santa Barbara, Stanford University, and the University of British Columbia describe their findings in the July 21 issue of PLOS ONE. Carla Hudson Kam, an associate professor of linguistics at British Columbia, is the paper’s senior author.

Too much brainpower

Linguists have known for decades that children are skilled at absorbing certain tricky elements of language, such as irregular past participles (examples of which, in English, include “gone” and “been”) or complicated verb forms like the subjunctive.

“Children will ultimately perform better than adults in terms of their command of the grammar and the structural components of language — some of the more idiosyncratic, difficult-to-articulate aspects of language that even most native speakers don’t have conscious awareness of,” Finn says.

In 1990, linguist Elissa Newport hypothesized that adults have trouble learning those nuances because they try to analyze too much information at once. Adults have a much more highly developed prefrontal cortex than children, and they tend to throw all of that brainpower at learning a second language. This high-powered processing may actually interfere with certain elements of learning language.

“It’s an idea that’s been around for a long time, but there hasn’t been any data that experimentally show that it’s true,” Finn says.

Finn and her colleagues designed an experiment to test whether exerting more effort would help or hinder success. First, they created nine nonsense words, each with two syllables. Each word fell into one of three categories (A, B, and C), defined by the order of consonant and vowel sounds.

Study subjects listened to the artificial language for about 10 minutes. One group of subjects was told not to overanalyze what they heard, but not to tune it out either. To help them not overthink the language, they were given the option of completing a puzzle or coloring while they listened. The other group was told to try to identify the words they were hearing.

Each group heard the same recording, which was a series of three-word sequences — first a word from category A, then one from category B, then category C — with no pauses between words. Previous studies have shown that adults, babies, and even monkeys can parse this kind of information into word units, a task known as word segmentation.
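
To give a concrete picture of this kind of stimulus, here is a minimal sketch of how such a continuous stream could be generated. The nine strings and their category assignments below are invented placeholders, not the actual items used in the study.

    import random

    # Hypothetical stand-ins for the nine two-syllable nonsense words.
    # Each category (A, B, C) is assumed to be defined by its own
    # consonant-vowel pattern; these strings are illustrative only.
    WORDS = {
        "A": ["bupo", "dika", "gemu"],
        "B": ["tafi", "kodu", "pibe"],
        "C": ["mola", "nuri", "sefa"],
    }

    def make_stream(n_triplets=100, seed=0):
        """Build a continuous, pause-free stream of A-B-C triplets.

        Because the words are concatenated with no gaps, the only cue to
        word boundaries is the statistical structure of the syllables,
        which is what the word-segmentation task probes.
        """
        rng = random.Random(seed)
        triplets = []
        for _ in range(n_triplets):
            triplets.append("".join(rng.choice(WORDS[cat]) for cat in "ABC"))
        return "".join(triplets)

    print(make_stream(n_triplets=5))  # one long pause-free string of 15 words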

Subjects from both groups were successful at word segmentation, although the group that tried harder performed a little better. Both groups also performed well in a task called word ordering, which required subjects to choose between a correct word sequence (ABC) and an incorrect sequence (such as ACB) of words they had previously heard.

The final test measured skill in identifying the language’s morphology. The researchers played a three-word sequence that included a word the subjects had not heard before, but which fit into one of the three categories. When asked to judge whether this new word was in the correct location, the subjects who had been asked to pay closer attention to the original word stream performed much worse than those who had listened more passively.

“This research is exciting because it provides evidence indicating that effortful learning leads to different results depending upon the kind of information learners are trying to master,” says Michael Ramscar, a professor of linguistics at the University of Tübingen who was not part of the research team. “The results indicate that learning to identify relatively simple parts of language, such as words, is facilitated by effortful learning, whereas learning more complex aspects of language, such as grammatical features, is impeded by effortful learning.”

Turning off effort

The findings support a theory of language acquisition that suggests that some parts of language are learned through procedural memory, while others are learned through declarative memory. Under this theory, declarative memory, which stores knowledge and facts, would be more useful for learning vocabulary and certain rules of grammar. Procedural memory, which guides tasks we perform without conscious awareness of how we learned them, would be more useful for learning subtle rules related to language morphology.

“It’s likely to be the procedural memory system that’s really important for learning these difficult morphological aspects of language. In fact, when you use the declarative memory system, it doesn’t help you, it harms you,” Finn says.

Still unresolved is the question of whether adults can overcome this language-learning obstacle. Finn says she does not have a good answer yet but she is now testing the effects of “turning off” the adult prefrontal cortex using a technique called transcranial magnetic stimulation. Other interventions she plans to study include distracting the prefrontal cortex by forcing it to perform other tasks while language is heard, and treating subjects with drugs that impair activity in that brain region.

Filed under language learning procedural memory prefrontal cortex linguistics psychology neuroscience science

From contemporary syntax to human language’s deep origins

On the island of Java, in Indonesia, the silvery gibbon, an endangered primate, lives in the rainforests. In a behavior that’s unusual for a primate, the silvery gibbon sings: It can vocalize long, complicated songs, using 14 different note types, that signal territory and send messages to potential mates and family.

Far from being a mere curiosity, the silvery gibbon may hold clues to the development of language in humans. In a newly published paper, two MIT professors assert that by re-examining contemporary human language, we can see indications of how human communication could have evolved from the systems underlying the older communication modes of birds and other primates.

From birds, the researchers say, we derived the melodic part of our language, and from other primates, the pragmatic, content-carrying parts of speech. Sometime within the last 100,000 years, those capacities fused into roughly the form of human language that we know today.

But how? Other animals, it appears, have finite sets of things they can express; human language is unique in allowing for an infinite set of new meanings. What allowed unbounded human language to evolve from bounded language systems?

“How did human language arise? It’s far enough in the past that we can’t just go back and figure it out directly,” says linguist Shigeru Miyagawa, the Kochi-Manjiro Professor of Japanese Language and Culture at MIT. “The best we can do is come up with a theory that is broadly compatible with what we know about human language and other similar systems in nature.”

Specifically, Miyagawa and his co-authors think that some apparently infinite qualities of modern human language, when reanalyzed, actually display the finite qualities of languages of other animals — meaning that human communication is more similar to that of other animals than we generally realized.

“Yes, human language is unique, but if you take it apart in the right way, the two parts we identify are in fact of a finite state,” Miyagawa says. “Those two components have antecedents in the animal world. According to our hypothesis, they came together uniquely in human language.”

Introducing the ‘integration hypothesis’

The current paper, “The Integration Hypothesis of Human Language Evolution and the Nature of Contemporary Languages,” is published this week in Frontiers in Psychology. The authors are Miyagawa; Robert Berwick, a professor of computational linguistics and computer science and engineering in MIT’s Laboratory for Information and Decision Systems; and Shiro Ojima and Kazuo Okanoya, scholars at the University of Tokyo.

The paper’s conclusions build on past work by Miyagawa, which holds that human language consists of two distinct layers: the expressive layer, which relates to the mutable structure of sentences, and the lexical layer, where the core content of a sentence resides. That idea, in turn, is based on previous work by linguistics scholars including Noam Chomsky, Kenneth Hale, and Samuel Jay Keyser.

The expressive layer and lexical layer have antecedents, the researchers believe, in the languages of birds and other mammals, respectively. For instance, in another paper published last year, Miyagawa, Berwick, and Okanoya presented a broader case for the connection between the expressive layer of human language and birdsong, including similarities in melody and range of beat patterns.

Birds, however, have a limited number of melodies they can sing or recombine, and nonhuman primates have a limited number of sounds they make with particular meanings. That would seem to present a challenge to the idea that human language could have derived from those modes of communication, given the seemingly infinite expression possibilities of humans.

But the researchers think certain parts of human language actually reveal finite-state operations that may be linked to our ancestral past. Consider a linguistic phenomenon known as “discontiguous word formation,” which involves sequences formed using the prefix “anti,” such as “antimissile missile,” or “anti-antimissile missile missile,” and so on. Some linguists have argued that this kind of construction reveals the infinite nature of human language, since the term “antimissile” can continually be embedded in the middle of the phrase.

However, as the researchers state in the new paper, “This is not the correct analysis.” The word “antimissile” is actually a modifier, meaning that as the phrase grows larger, “each successive expansion forms via strict adjacency.” That means the construction consists of discrete units of language. In this case and others, Miyagawa says, humans use “finite-state” components to build out their communications.
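
To make the two readings concrete, here is a small sketch (not from the paper) that builds the same word strings in both ways; the hyphens in the second version only mark that the entire preceding phrase has been packaged into a single modifier.

    def embed(n):
        """One reading: unbounded center-embedding, X -> 'anti' + X + 'missile'.
        Each level nests the previous phrase in the middle, the kind of
        structure often cited as evidence for unbounded recursion."""
        phrase = "missile"
        for _ in range(n):
            phrase = "anti" + phrase + " missile"
        return phrase

    def modify(n):
        """The reanalysis described in the article: at each step the whole
        previous phrase is turned into a single modifier by prefixing 'anti',
        and that modifier sits strictly adjacent to the head 'missile'."""
        phrase = "missile"
        for _ in range(n):
            phrase = "anti" + phrase.replace(" ", "-") + " missile"
        return phrase

    print(embed(2))   # antiantimissile missile missile
    print(modify(2))  # antiantimissile-missile missile (same words, flat structure)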

The complexity of such language formations, Berwick observes, “doesn’t occur in birdsong, and doesn’t occur anywhere else, as far as we can tell, in the rest of the animal kingdom.” Indeed, he adds, “As we find more evidence that other animals don’t seem to possess this kind of system, it bolsters our case for saying these two elements were brought together in humans.”

An inherent capacity

To be sure, the researchers acknowledge, their hypothesis is a work in progress. After all, Charles Darwin and others have explored the connection between birdsong and human language. Now, Miyagawa says, the researchers think that “the relationship is between birdsong and the expression system,” with the lexical component of language having come from primates. Indeed, as the paper notes, the most recent common ancestor between birds and humans appears to have existed about 300 million years ago, so there would almost have to be an indirect connection via older primates — even possibly the silvery gibbon.

As Berwick notes, researchers are still exploring how these two modes could have merged in humans, but the general concept of new functions developing from existing building blocks is a familiar one in evolution.

“You have these two pieces,” Berwick says. “You put them together and something novel emerges. We can’t go back with a time machine and see what happened, but we think that’s the basic story we’re seeing with language.”

Andrea Moro, a linguist at the Institute for Advanced Study IUSS, in Pavia, Italy, says the current paper provides a useful way of thinking about how human language may be a synthesis of other communication forms.

“It must be the case that this integration or synthesis [developed] from some evolutionary and functional processes that are still beyond our understanding,” says Moro, who edited the article. “The authors of the paper, though, provide an extremely interesting clue at the formal level.”

Indeed, Moro adds, he thinks the researchers are “essentially correct” about the existence of finite elements in human language, adding, “Interestingly, many of them involve the morphological level — that is, the level of composition of words from morphemes, rather than the sentence level.”

Miyagawa acknowledges that research and discussion in the field will continue, but says he hopes colleagues will engage with the integration hypothesis.

“It’s worthy of being considered, and then potentially challenged,” Miyagawa says.

Filed under language birdsong evolution linguistics psychology neuroscience science

In recognizing speech sounds, the brain does not work the way a computer does

How does the brain decide whether or not something is correct? When it comes to the processing of spoken language – particularly whether or not certain sound combinations are allowed in a language – the common theory has been that the brain applies a set of rules to determine whether combinations are permissible. Now the work of a Massachusetts General Hospital (MGH) investigator and his team supports a different explanation – that the brain decides whether or not a combination is allowable based on words that are already known. The findings may lead to better understanding of how brain processes are disrupted in stroke patients with aphasia and also address theories about the overall operation of the brain. 

"Our findings have implications for the idea that the brain acts as a computer, which would mean that it uses rules – the equivalent of software commands – to manipulate information. Instead it looks like at least some of the processes that cognitive psychologists and linguists have historically attributed to the application of rules may instead emerge from the association of speech sounds with words we already know," says David Gow, PhD, of the MGH Department of Neurology.

"Recognizing words is tricky – we have different accents and different, individual vocal tracts; so the way individuals pronounce particular words always sounds a little different," he explains. "The fact that listeners almost always get those words right is really bizarre, and figuring out why that happens is an engineering problem. To address that, we borrowed a lot of ideas from other fields and people to create powerful new tools to investigate, not which parts of the brain are activated when we interpret spoken sounds, but how those areas interact." 

Human beings speak more than 6,000 distinct languages, and each language allows some ways to combine speech sounds into sequences but prohibits others. Although individuals are not usually conscious of these restrictions, native speakers have a strong sense of whether or not a combination is acceptable.

"Most English speakers could accept "doke" as a reasonable English word, but not "lgef," Gow explains. "When we hear a word that does not sound reasonable, we often mishear or repeat it in a way that makes it sound more acceptable. For example, the English language does not permit words that begin with the sounds "sr-," but that combination is allowed in several languages including Russian. As a result, most English speakers pronounce the Sanskrit word ‘sri’ – as in the name of the island nation Sri Lanka – as ‘shri,’ a combination of sounds found in English words like shriek and shred."

Gow’s method of investigating how the human brain perceives and distinguishes among elements of spoken language combines electroencephalography (EEG), which records electrical brain activity; magnetoencephalography (MEG), which measures the subtle magnetic fields produced by brain activity; and magnetic resonance imaging (MRI), which reveals brain structure. Data gathered with those technologies are then analyzed using Granger causality, a method developed to determine cause-and-effect relationships among economic events, along with a Kalman filter, a procedure used to navigate missiles and spacecraft by predicting where something will be in the future. The results are “movies” of brain activity showing not only where and when activity occurs but also how signals move across the brain on a millisecond-by-millisecond level, information no other research team has produced.
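
As an illustration of the kind of directed-influence measure named here, below is a minimal, self-contained sketch of a bivariate Granger-style test on two toy signals. The actual study fit multivariate, Kalman-filtered models over dozens of brain regions, so this is only a conceptual analogy, not the team's pipeline.

    import numpy as np

    def granger_f_test(x, y, lag=2):
        """Does the past of x help predict y beyond y's own past?

        Fits two least-squares models for y[t] -- one using only y's own
        'lag' past values, one adding x's past values -- and returns the
        F statistic comparing their residual sums of squares.
        """
        rows = range(lag, len(y))
        target = np.array([y[t] for t in rows])
        past_y = np.array([[y[t - k] for k in range(1, lag + 1)] for t in rows])
        past_x = np.array([[x[t - k] for k in range(1, lag + 1)] for t in rows])
        ones = np.ones((len(target), 1))

        def rss(design):
            coef, *_ = np.linalg.lstsq(design, target, rcond=None)
            resid = target - design @ coef
            return float(resid @ resid)

        rss_r = rss(np.hstack([ones, past_y]))          # restricted model
        rss_u = rss(np.hstack([ones, past_y, past_x]))  # unrestricted model
        df_den = len(target) - (1 + 2 * lag)
        return ((rss_r - rss_u) / lag) / (rss_u / df_den)

    # Toy signals: y lags x by one step, so x should "Granger-cause" y.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(500)
    y = np.roll(x, 1) + 0.5 * rng.standard_normal(500)
    print(granger_f_test(x, y))  # large F: x's past improves prediction of y
    print(granger_f_test(y, x))  # F near 1: y's past adds little about x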

In a paper published earlier this year in the online journal PLOS ONE, Gow and his co-author Conrad Nied, now a PhD candidate at the University of Washington, described their investigation of how the neural processes involved in the interpretation of sound combinations differ depending on whether or not a combination would be permitted in the English language. Their goal was to determine which of three potential mechanisms is actually involved in the way humans “repair” impermissible sound combinations – the application of rules regarding sound combinations, the frequency with which particular combinations have been encountered, or the occurrence of sound combinations in known words.

The study enrolled 10 adult American English speakers who listened to a series of recordings of spoken nonsense syllables that began with sounds ranging from “s” to “shl” – a combination not found at the beginning of English words – and indicated by means of a button push whether they heard an initial “s” or “sh.” EEG and MEG readings were taken during the task, and the results were projected onto MR images taken separately. Analysis focused on 22 regions of interest where brain activation increased during the task, with particular attention to those regions’ interactions with an area previously shown to play a role in identifying speech sounds.

While the results revealed complex patterns of interaction between the measured regions, the areas that had the greatest effect on regions that identify speech sounds were regions involved in the representation of words, not those responsible for rules. “We found that it’s the areas of the brain involved in representing the sound of words, not sounds in isolation or abstract rules, that send back the important information. And the interesting thing is that the words you know give you the rules to follow. You want to put sounds together in a way that’s easy for you to hear and to figure out what the other person is saying,” explains Gow, who is a clinical instructor in Neurology at Harvard Medical School and a professor of Psychology at Salem State University. 

Filed under language speech neuroimaging brain activity linguistics psychology neuroscience science

Our Brains are Hardwired for Language

People blog, they don’t lbog, and they schmooze, not mshooze. But why is this? Why are human languages so constrained? Can such restrictions unveil the basis of the uniquely human capacity for language?

A groundbreaking study published in PLOS ONE by Prof. Iris Berent of Northeastern University and researchers at Harvard Medical School shows the brains of individual speakers are sensitive to language universals. Syllables that are frequent across languages are recognized more readily than infrequent syllables. Simply put, this study shows that language universals are hardwired in the human brain.

LANGUAGE UNIVERSALS

Language universals have been the subject of intense research, but their basis remains elusive. Indeed, the similarities between human languages could result from a host of reasons that are tangential to the language system itself. Syllables like lbog, for instance, might be rare due to sheer historical forces, or because they are just harder to hear and articulate. A more interesting possibility, however, is that these facts could stem from the biology of the language system. Could the unpopularity of lbogs result from universal linguistic principles that are active in every human brain?

THE EXPERIMENT

To address this question, Dr. Berent and her colleagues examined the response of human brains to distinct syllable types—either ones that are frequent across languages (e.g., blif, bnif), or infrequent (e.g., bdif, lbif). In the experiment, participants heard one auditory stimulus at a time (e.g., lbif), and were then asked to determine whether the stimulus includes one syllable or two while their brain was simultaneously imaged.

Results showed that syllables that were infrequent and ill-formed, as determined by their linguistic structure, were harder for people to process. Remarkably, a similar pattern emerged in participants’ brain responses: worse-formed syllables (e.g., lbif) exerted different demands on the brain than well-formed syllables (e.g., blif).
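
The hierarchy behind these items is usually described in terms of sonority sequencing: how sharply sonority rises from the first consonant of the onset to the second. The scorer below is an illustrative sketch using made-up sonority values, not the study's model, but it reproduces the blif > bnif > bdif > lbif ordering.

    # Rough sonority values for a few segment classes (illustrative only;
    # exact scales differ across phonological proposals).
    SONORITY = {"stop": 1, "fricative": 2, "nasal": 3, "liquid": 4, "vowel": 5}
    SEGMENT_CLASS = {"b": "stop", "d": "stop", "f": "fricative",
                     "n": "nasal", "l": "liquid", "i": "vowel"}

    def onset_rise(syllable):
        """Sonority rise across a two-consonant onset; larger rises are
        cross-linguistically preferred (bl- > bn- > bd- > lb-)."""
        first, second = syllable[0], syllable[1]
        return SONORITY[SEGMENT_CLASS[second]] - SONORITY[SEGMENT_CLASS[first]]

    for item in ["blif", "bnif", "bdif", "lbif"]:
        print(item, onset_rise(item))
    # blif 3, bnif 2, bdif 0, lbif -3 -- progressively worse-formed onsets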

UNIVERSALLY HARDWIRED BRAINS

The localization of these patterns in the brain further sheds light on their origin. If the difficulty in processing syllables like lbif were solely due to unfamiliarity or to failures of acoustic processing and articulation, then such syllables would be expected to exact a cost only on regions of the brain associated with memory for familiar words, audition, and motor control. In contrast, if the dislike of lbif reflects its linguistic structure, then the syllable hierarchy is expected to engage traditional language areas in the brain.

While syllables like lbif did, in fact, tax auditory brain areas, they exerted no measurable costs with respect to either articulation or lexical processing. Instead, it was Broca’s area—a primary language center of the brain—that was sensitive to the syllable hierarchy.

These results show for the first time that the brains of individual speakers are sensitive to language universals: the brain responds differently to syllables that are frequent across languages (e.g., bnif) relative to syllables that are infrequent (e.g., lbif). This is a remarkable finding given that participants (English speakers) have never encountered most of those syllables before, and it shows that language universals are encoded in human brains.

The fact that the brain activity engaged Broca’s area—a traditional language area—suggests that this brain response might be due to a linguistic principle. This result opens up the possibility that human brains share common linguistic restrictions on the sound pattern of language.

FURTHER EVIDENCE

This proposal is further supported by a second study that recently appeared in the Proceedings of the National Academy of Sciences, also co-authored by Dr. Berent. This study shows that, like their adult counterparts, newborns are sensitive to the universal syllable hierarchy.

The findings from newborns are particularly striking because they have little to no experience with any such syllable. Together, these results demonstrate that the sound patterns of human language reflect shared linguistic constraints that are hardwired in the human brain already at birth.

Filed under language broca's area brain activity language universals linguistics psychology neuroscience science

Language Structure… You’re Born with It

Humans are unique in their ability to acquire language. But how? A new study published in the Proceedings of the National Academy of Sciences shows that we are in fact born with a basic, foundational knowledge of language, thus shedding light on the age-old linguistic “nature vs. nurture” debate.

THE STUDY

While languages differ from each other in many ways, certain aspects appear to be shared across languages. These aspects might stem from linguistic principles that are active in all human brains. A natural question then arises: are infants born with knowledge of what human words might sound like? Are infants biased to consider certain sound sequences as more word-like than others? “The results of this new study suggest that the sound patterns of human languages are the product of an inborn biological instinct, very much like birdsong,” said Prof. Iris Berent of Northeastern University in Boston, who co-authored the study with a research team from the International School of Advanced Studies in Italy, headed by Dr. Jacques Mehler. The study’s first author is Dr. David Gómez.

BLA, ShBA, LBA

Consider, for instance, the sound combinations that occur at the beginning of words. While many languages have words that begin with bl (e.g., blando in Italian, blink in English, and blusa in Spanish), few languages have words that begin with lb. Russian is such a language (e.g., lbu, a word related to lob, “forehead”), but even in Russian such words are extremely rare and outnumbered by words starting with bl. Linguists have suggested that such patterns occur because human brains are biased to favor syllables such as bla over lba. In line with this possibility, past experimental research from Dr. Berent’s lab has shown that adult speakers display such preferences, even if their native language has no words resembling either bla or lba. But where does this knowledge stem from? Is it due to some universal linguistic principle, or to adults’ lifelong experience with listening to and producing their native language?
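
One way to make such typological claims concrete is simply to count word-initial clusters in a lexicon. The sketch below does this over a tiny, hypothetical word list, using spelling as a rough stand-in for pronunciation; a serious count would use phonemic transcriptions from real dictionaries.

    import re
    from collections import Counter

    def initial_clusters(words):
        """Count word-initial consonant clusters of two or more letters
        (a crude orthographic proxy for onset clusters)."""
        counts = Counter()
        for word in words:
            match = re.match(r"[^aeiou]{2,}", word.lower())
            if match:
                counts[match.group(0)] += 1
        return counts

    # Hypothetical mini-lexicon; in practice this would be a real word list.
    lexicon = ["blando", "blink", "blusa", "bravo", "lbu", "plato", "bloque"]
    print(initial_clusters(lexicon).most_common())
    # [('bl', 4), ('br', 1), ('lb', 1), ('pl', 1)]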

THE EXPERIMENT

These questions motivated our team to look carefully at how young babies perceive different types of words. We used near-infrared spectroscopy, a silent and non-invasive technique that tells us how the oxygenation of the brain cortex (those very first centimeters of gray matter just below the scalp) changes in time, to look at the brain reactions of Italian newborn babies when listening to good and bad word candidates as described above (e.g., blif, lbif).
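
Near-infrared spectroscopy itself only records how strongly light at two or more wavelengths is attenuated; turning those attenuation changes into oxygenation changes is conventionally done with the modified Beer-Lambert law. The sketch below shows that conversion with placeholder coefficients; it is a generic illustration, not the analysis pipeline used in the study.

    import numpy as np

    # Molar extinction coefficients for [HbO, HbR] at two wavelengths
    # (placeholder values for illustration, not calibrated constants).
    EXTINCTION = np.array([
        [1.4, 3.8],   # ~690 nm: more sensitive to deoxygenated haemoglobin
        [2.8, 1.8],   # ~830 nm: more sensitive to oxygenated haemoglobin
    ])

    def hb_changes(delta_od, source_detector_distance_cm=3.0, dpf=5.0):
        """Modified Beer-Lambert law: solve
        delta_OD = EXTINCTION @ [dHbO, dHbR] * distance * DPF
        for the changes in oxy- and deoxygenated haemoglobin."""
        path = source_detector_distance_cm * dpf
        return np.linalg.solve(EXTINCTION * path, np.asarray(delta_od))

    d_hbo, d_hbr = hb_changes([0.01, 0.03])  # toy attenuation changes
    print(f"dHbO ~ {d_hbo:.4f}, dHbR ~ {d_hbr:.4f} (arbitrary units)")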

Working with Italian newborn infants and their families, we observed that newborns react differently to good and bad word candidates, similar to what adults do. Young infants have not yet learned any words and do not even babble, yet they already share with us a sense of how words should sound. This finding shows that we are born with the basic, foundational knowledge of the sound patterns of human languages.

It is hard to imagine how differently languages would sound if humans did not share this type of knowledge. We are fortunate that we do, and so our babies can come into the world with the certainty that they will readily recognize the sound patterns of words – no matter the language they will grow up with.

Filed under language language acquisition speech perception phonology linguistics neuroscience science

Did Neandertals have language?
A recent study suggests that Neandertals shared speech and language with modern humans
Fast-accumulating data seem to indicate that our close cousins, the Neandertals, were much more similar to us than imagined even a decade ago. But did they have anything like modern speech and language? And if so, what are the implications for understanding present-day linguistic diversity? Researchers Dan Dediu and Stephen C. Levinson of the Max Planck Institute for Psycholinguistics in Nijmegen argue in their paper in Frontiers in Language Sciences that modern language and speech can be traced back to the last common ancestor we shared with the Neandertals roughly half a million years ago.
The Neandertals have fascinated both the academic world and the general public ever since their discovery almost 200 years ago. Initially thought to be subhuman brutes incapable of anything but the most primitive of grunts, they were a successful form of humanity inhabiting vast swathes of western Eurasia for several hundreds of thousands of years, during harsh glacial ages and milder interglacial periods. We knew that they were our closest cousins, sharing a common ancestor with us around half a million years ago (probably Homo heidelbergensis), but it was unclear what their cognitive capacities were like, or why modern humans succeeded in replacing them after thousands of years of cohabitation. Recently, due to new palaeoanthropological and archaeological discoveries and the reassessment of older data, but especially to the availability of ancient DNA, we have started to realise that their fate was much more intertwined with ours and that, far from being slow brutes, their cognitive capacities and culture were comparable to ours.
Dediu and Levinson review all these strands of evidence and argue that essentially modern language and speech are an ancient feature of our lineage, dating back at least to the most recent ancestor we shared with the Neandertals and the Denisovans (another form of humanity known mostly from their genome). Their interpretation of the intrinsically ambiguous and scant evidence goes against the scenario usually assumed by most language scientists, namely a sudden and recent emergence of modernity, presumably due to a single, or very few, genetic mutations. This pushes back the origins of modern language by a factor of 10, from the often-cited 50,000 or so years to around a million years ago, somewhere between the origins of our genus, Homo, some 1.8 million years ago, and the emergence of Homo heidelbergensis. This reassessment of the evidence goes against a saltationist scenario in which a single catastrophic mutation in a single individual would suddenly give rise to language, and suggests that a gradual accumulation of biological and cultural innovations is much more plausible.
Interestingly, we know from the archaeological record and from recent genetic data that the modern humans spreading out of Africa interacted both genetically and culturally with the Neandertals and Denisovans, so just as our bodies carry around some of their genes, our languages may preserve traces of their languages too. This would mean that at least some of the observed linguistic diversity is due to these ancient encounters, an idea testable by comparing the structural properties of African and non-African languages, and by detailed computer simulations of language spread.
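The "computer simulations of language spread" the authors mention could take many forms; the toy sketch below, with entirely made-up parameters, only illustrates the basic logic: languages that pass through a contact zone can pick up a structural feature and carry it along their route, producing the kind of African versus non-African asymmetry the idea predicts.

```python
# A toy sketch, not the kind of detailed simulation the authors have in mind:
# languages spreading out of a homeland pass through a "contact zone" where
# they may pick up a structural feature from an archaic population, after
# which the feature is transmitted (and occasionally lost) down the chain.
# All parameters and names here are illustrative assumptions.
import random

random.seed(1)
P_CONTACT_GAIN = 0.5   # chance of acquiring the feature in the contact zone
P_LOSS = 0.02          # chance of losing the feature at each later step
CHAIN_LENGTH = 40      # languages along the migration route
N_RUNS = 2000

outside_with_feature = 0
for _ in range(N_RUNS):
    has_feature = random.random() < P_CONTACT_GAIN  # contact with archaics
    for _ in range(CHAIN_LENGTH):
        if has_feature and random.random() < P_LOSS:
            has_feature = False
    outside_with_feature += has_feature

print("proportion of non-African languages retaining the feature:",
      outside_with_feature / N_RUNS)
# African languages, which never pass through the contact zone, would show
# the feature only at its background rate (here zero), which is the
# asymmetry the idea predicts.
```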

Filed under Neandertals evolution language modern language linguistics mitochondrial DNA science

165 notes

Decoding ‘noisy’ language in daily life
Suppose you hear someone say, “The man gave the ice cream the child.” Does that sentence seem plausible? Or do you assume it is missing a word? Such as: “The man gave the ice cream to the child.”
A new study by MIT researchers indicates that when we process language, we often make these kinds of mental edits. Moreover, it suggests that we seem to use specific strategies for making sense of confusing information — the “noise” interfering with the signal conveyed in language, as researchers think of it.
“Even at the sentence level of language, there is a potential loss of information over a noisy channel,” says Edward Gibson, a professor in MIT’s Department of Brain and Cognitive Sciences (BCS) and Department of Linguistics and Philosophy.
Gibson and two co-authors detail the strategies at work in a new paper, “Rational integration of noisy evidence and prior semantic expectations in sentence interpretation,” published today in the Proceedings of the National Academy of Sciences.
“As people are perceiving language in everyday life, they’re proofreading, or proof-hearing, what they’re getting,” says Leon Bergen, a PhD student in BCS and a co-author of the study. “What we’re getting is quantitative evidence about how exactly people are doing this proofreading. It’s a well-calibrated process.”
Asymmetrical strategies
The paper is based on a series of experiments the researchers conducted, using the Amazon Mechanical Turk survey system, in which subjects were presented with a series of sentences — some evidently sensible, and others less so — and asked to judge what those sentences meant.
A key finding is that given a sentence with only one apparent problem, people are more likely to think something is amiss than when presented with a sentence where two edits may be needed. In the latter case, people tend to assume that the sentence is not flawed at all, but simply means exactly what it says, however odd that meaning may be.
“The more deletions and the more insertions you make, the less likely it will be you infer that they meant something else,” Gibson says. When readers have to make one such change to a sentence, as in the ice cream example above, they think the original version was correct about 50 percent of the time. But when people have to make two changes, they think the sentence is correct even more often, about 97 percent of the time.
Thus the sentence, “Onto the cat jumped a table,” which might seem to make no sense, can be made plausible with two changes — one deletion and one insertion — so that it reads, “The cat jumped onto a table.” And yet, almost all the time, people will not infer that those changes are needed, and assume the literal, surreal meaning is the one intended.
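The underlying idea is a noisy-channel calculation: the reader weighs the prior plausibility of a candidate meaning against how many edits would be needed to turn it into the sentence actually heard. The sketch below illustrates that trade-off with made-up numbers; it is not the fitted model from the paper.

```python
# A minimal sketch of the noisy-channel idea: the comprehender weighs the
# prior plausibility of a candidate meaning against how many edits would be
# needed to produce the sentence actually heard. The penalty values and
# priors below are illustrative assumptions.

EDIT_PENALTY = 0.05   # probability-like cost per hypothesized insertion/deletion

def posterior_score(prior_plausibility, n_edits):
    """Unnormalized posterior: prior * likelihood of the noise producing the edits."""
    return prior_plausibility * (EDIT_PENALTY ** n_edits)

# "The man gave the ice cream the child."
literal = posterior_score(prior_plausibility=0.01, n_edits=0)  # odd meaning, no edits
edited = posterior_score(prior_plausibility=0.99, n_edits=1)   # sensible meaning, 1 edit ("to" dropped)
print("one edit needed:", edited > literal)    # True: the corrected reading wins

# "Onto the cat jumped a table."
literal = posterior_score(prior_plausibility=0.01, n_edits=0)
edited = posterior_score(prior_plausibility=0.99, n_edits=2)   # needs a deletion and an insertion
print("two edits needed:", edited > literal)   # False: the literal, surreal reading wins
```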
This finding interacts with another one from the study, that there is a systematic asymmetry between insertions and deletions on the part of listeners.
“People are much more likely to infer an alternative meaning based on a possible deletion than on a possible insertion,” Gibson says.
Suppose you hear or read a sentence that says, “The businessman benefitted the tax law.” Most people, it seems, will assume that sentence has a word missing from it — “from,” in this case — and fix the sentence so that it now reads, “The businessman benefitted from the tax law.” But people will less often think sentences containing an extra word, such as “The tax law benefitted from the businessman,” are incorrect, implausible as they may seem.
Another strategy people use, the researchers found, is that when presented with an increasing proportion of seemingly nonsensical sentences, they actually infer less “noise” in the language. That means people adapt when processing language: if every sentence in a long sequence seems silly, people become reluctant to conclude that all of the statements are mistakes, and instead look for meaning in the sentences as stated. By contrast, they perceive more noise when only the occasional sentence seems obviously wrong, because those mistakes stand out so clearly.
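One toy way to see why a flood of implausible sentences could lower the inferred noise rate is to compare two simple explanations of the input, as in the sketch below; the rates are invented for illustration and are not values from the study.

```python
# A toy illustration of the adaptation effect: a listener compares two simple
# hypotheses about the talker ("noisy channel, mostly plausible intentions"
# versus "clean channel, talker often intends odd meanings") and picks
# whichever better explains how many implausible sentences have been heard.
# All rates are illustrative assumptions.
from math import comb

def implausible_rate(noise, odd_intent):
    # Chance a heard sentence looks implausible: either it was meant that way,
    # or a plausible intention was corrupted by noise.
    return odd_intent + (1 - odd_intent) * noise

def likelihood(k, n, rate):
    return comb(n, k) * rate**k * (1 - rate) ** (n - k)

NOISY = implausible_rate(noise=0.10, odd_intent=0.01)   # ~0.11
CLEAN = implausible_rate(noise=0.01, odd_intent=0.50)   # ~0.50

for k in (10, 50):   # implausible sentences heard out of 100
    if likelihood(k, 100, NOISY) > likelihood(k, 100, CLEAN):
        better = "noisy channel"
    else:
        better = "clean channel, odd meanings intended"
    print(f"{k}/100 implausible -> favored explanation: {better}")
# Few implausible sentences -> blame noise; many -> infer less noise and take
# the odd sentences at face value, as described above.
```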
“People seem to be taking into account statistical information about the input that they’re receiving to figure out what kinds of mistakes are most likely in different environments,” Bergen says.
Reverse-engineering the message
Other scholars say the work helps illuminate the strategies people may use when they interpret language.
“I’m excited about the paper,” says Roger Levy, a professor of linguistics at the University of California at San Diego who has done his own studies in the area of noise and language.
According to Levy, the paper posits “an elegant set of principles” explaining how humans edit the language they receive. “People are trying to reverse-engineer what the message is, to make sense of what they’ve heard or read,” Levy says.
“Our sentence-comprehension mechanism is always involved in error correction, and most of the time we don’t even notice it,” he adds. “Otherwise, we wouldn’t be able to operate effectively in the world. We’d get messed up every time anybody makes a mistake.”

Filed under language speech speech perception language processing linguistics psychology neuroscience science

134 notes

Study shows humans and apes learn language differently
How do children learn language? Many linguists believe that the stages a child goes through when learning language mirror the stages of language development in primate evolution. In a paper published in the Proceedings of the National Academy of Sciences, Charles Yang of the University of Pennsylvania notes that if this were true, then small children and non-human primates should use language in the same way. He then uses statistical analysis to show that this is not the case: the language of small children reflects grammar, while language use in non-human primates relies on imitation.
Yang examines two hypotheses about language development in children. One of these says that children learn how to put words together by imitating the word combinations of adults. The other states that children learn to combine words by following grammatical rules.
Linguists who support the idea that children are parroting point to the fact that children appear to combine the same words in the same ways. For example, an English speaker can put either the determiner “a” or the determiner “the” in front of a singular noun. “A door” and “the door” are both grammatically correct, as are “a cat” and “the cat.” However, for most singular nouns, children tend to use either “a” or “the,” but not both. This has been taken to suggest that children are mimicking strings of words without understanding the grammatical rules for combining them.
Yang, however, points out that the lack of diversity in children’s word combinations could reflect the way that adults use language. Adults are more likely to use “a” with some words and “the” with others. “The bathroom” is more common than “a bathroom.” “A bath” is more common than “the bath.”
To test this conjecture, Yang analyzed language samples of young children who had just begun making two-word combinations. He calculated the number of different noun-determiner combinations someone would make if they were combining nouns and determiners independently, and found that the diversity of the children’s language matched this profile. He also found that the children’s word combinations were much more diverse than they would be if they were simply imitating word strings.
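Yang’s actual analysis is an analytical calculation over Zipf-distributed noun frequencies; the simplified simulation below, with assumed parameters, only illustrates the logic of the comparison: productive, independent combination yields some noun types occurring with both determiners, whereas pure memorization of fixed strings yields none.

```python
# A simplified sketch of the logic behind this kind of test (assumed
# parameters, not Yang's actual model): under productive, independent
# combination, how many noun types show up with BOTH "a" and "the" in a
# small sample, compared with memorizing each noun welded to one determiner?
import random
from collections import defaultdict

random.seed(0)
N_NOUNS, SAMPLE_SIZE, P_THE = 300, 400, 0.6
# Zipf-like noun frequencies: noun i has weight 1/(i+1).
weights = [1.0 / (i + 1) for i in range(N_NOUNS)]

def overlap(productive: bool) -> float:
    seen = defaultdict(set)
    memorized = {i: "the" if random.random() < P_THE else "a" for i in range(N_NOUNS)}
    for _ in range(SAMPLE_SIZE):
        noun = random.choices(range(N_NOUNS), weights=weights)[0]
        if productive:
            det = "the" if random.random() < P_THE else "a"  # chosen independently
        else:
            det = memorized[noun]                            # always the same string
        seen[noun].add(det)
    used = list(seen.values())
    return sum(len(dets) == 2 for dets in used) / len(used)

print("productive grammar:", overlap(True))    # modest overlap, limited only by sampling
print("memorized strings: ", overlap(False))   # overlap stays at zero
```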
Yang also studied language diversity in Nim Chimpsky, a chimpanzee who was taught American Sign Language. Nim’s word combinations are much less diverse than would be expected if he were combining words independently, which indicates that he was probably mimicking rather than using grammar.
This difference in language use indicates that human children do not acquire language in the same way that non-human primates do. Young children learn rules of grammar very quickly, while a chimpanzee who has spent many years learning language continues to imitate rather than combine words based on grammatical rules.

Filed under primates language language development grammatical rules linguistics psychology neuroscience science

94 notes

The great orchestral work of speech
What goes on inside our heads is similar to an orchestra. For Peter Hagoort, Director at the Max Planck Institute for Psycholinguistics, this image is a very apt one for explaining how speech arises in the human brain. “There are different orchestra members and different instruments, all playing in time with each other, and sounding perfect together.”
When we speak, we transform our thoughts into a linear sequence of sounds. When we understand language, exactly the opposite occurs: we deduce an interpretation from the speech sounds we hear. Closely connected regions of the brain – such as Broca’s area and Wernicke’s area – are involved in both processes, and these form the neurobiological basis of our capacity for language.
The 58-year-old scientist, who has had a strong interest in language and literature since his youth, has been searching for the neurobiological foundations of our communication since the 1990s. Using imaging processes, he observes the brain “in action” and tries to find out how this complex organ controls the way we speak and understand speech.
Making language visible
Hagoort is one of the first researchers to combine psychological theories with neuroscientific methods in his efforts to understand this complex interaction. Because this is not possible without the very latest technology, Hagoort established the Nijmegen-based Donders Centre for Cognitive Neuroimaging in 1999, where an interdisciplinary team of researchers uses state-of-the-art equipment, such as MRI and PET scanners, to find out how the brain succeeds in combining functions like memory, speech, perception, attention, emotion and consciousness.
The Dutch scientist is particularly fascinated by the temporal sequence of speech. He discovered, for example, that the brain begins by collecting grammatical information about a word before it compiles information about its sound. This first reliable real-time measurement of speech production in the brain provided researchers with a basis for observing speakers in the act of speaking. They were then able to obtain new insights about why the complex orchestral work of language is impaired, for example, after strokes and in the case of disorders like dyslexia and autism.
“Language is an essential component of human culture, which distinguishes us from other species,” says Hagoort. “Young children understand language before they even start to speak. They master complex grammatical structures before they can add 3 and 13. Our brain is tuned for language at a very early stage,” stresses Hagoort, referring to research findings. The exact composition of the orchestra in our heads and the nature of the score on which the process of speech is based are topics which Hagoort continues to research.

Filed under speech production speech language linguistics brain neuroimaging neuroscience science

360 notes

How human language could have evolved from birdsong

Linguistics and biology researchers propose a new theory on the deep roots of human speech.

“The sounds uttered by birds offer in several respects the nearest analogy to language,” Charles Darwin wrote in “The Descent of Man” (1871), while contemplating how humans learned to speak. Language, he speculated, might have had its origins in singing, which “might have given rise to words expressive of various complex emotions.”

Now researchers from MIT, along with a scholar from the University of Tokyo, say that Darwin was on the right path. The balance of evidence, they believe, suggests that human language is a grafting of two communication forms found elsewhere in the animal kingdom: first, the elaborate songs of birds, and second, the more utilitarian, information-bearing types of expression seen in a diversity of other animals.

“It’s this adventitious combination that triggered human language,” says Shigeru Miyagawa, a professor of linguistics in MIT’s Department of Linguistics and Philosophy, and co-author of a new paper published in the journal Frontiers in Psychology.

The idea builds upon Miyagawa’s conclusion, detailed in his previous work, that there are two “layers” in all human languages: an “expression” layer, which involves the changeable organization of sentences, and a “lexical” layer, which relates to the core content of a sentence. His conclusion is based on earlier work by linguists including Noam Chomsky, Kenneth Hale and Samuel Jay Keyser.

Based on an analysis of animal communication, and using Miyagawa’s framework, the authors say that birdsong closely resembles the expression layer of human sentences — whereas the communicative waggles of bees, or the short, audible messages of primates, are more like the lexical layer. At some point, between 50,000 and 80,000 years ago, humans may have merged these two types of expression into a uniquely sophisticated form of language.

“There were these two pre-existing systems,” Miyagawa says, “like apples and oranges that just happened to be put together.”

These kinds of adaptations of existing structures are common in natural history, notes Robert Berwick, a co-author of the paper, who is a professor of computational linguistics in MIT’s Laboratory for Information and Decision Systems, in the Department of Electrical Engineering and Computer Science.

“When something new evolves, it is often built out of old parts,” Berwick says. “We see this over and over again in evolution. Old structures can change just a little bit, and acquire radically new functions.”

A new chapter in the songbook

The new paper, “The Emergence of Hierarchical Structure in Human Language,” was co-written by Miyagawa, Berwick and Kazuo Okanoya, a biopsychologist at the University of Tokyo who is an expert on animal communication.

To consider the difference between the expression layer and the lexical layer, take a simple sentence: “Todd saw a condor.” We can easily create variations of this, such as, “When did Todd see a condor?” This rearranging of elements takes place in the expression layer and allows us to add complexity and ask questions. But the lexical layer remains the same, since it involves the same core elements: the subject, “Todd,” the verb, “to see,” and the object, “condor.”
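A rough way to picture the two layers, using invented templates rather than the authors’ formal analysis, is sketched below: the lexical core stays fixed while the expression layer rearranges it into different sentence forms.

```python
# An illustrative sketch of the two-layer idea (names and templates are
# assumptions for illustration): the lexical layer holds the core elements,
# while the expression layer rearranges them into different sentence forms
# without changing that core.
lexical_core = {"subject": "Todd", "verb": ("see", "saw"), "object": "a condor"}

expression_layer = {
    "declarative": "{subject} {past} {object}.",
    "wh-question": "When did {subject} {base} {object}?",
}

def realize(core, form):
    base, past = core["verb"]
    return expression_layer[form].format(
        subject=core["subject"], object=core["object"], base=base, past=past
    )

print(realize(lexical_core, "declarative"))   # Todd saw a condor.
print(realize(lexical_core, "wh-question"))   # When did Todd see a condor?
```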

Birdsong lacks a lexical structure. Instead, birds sing learned melodies with what Berwick calls a “holistic” structure; the entire song has one meaning, whether about mating, territory or other things. The Bengalese finch, as the authors note, can loop back to parts of previous melodies, allowing for greater variation and communication of more things; a nightingale may be able to recite from 100 to 200 different melodies.
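To illustrate what such a “holistic” song structure might look like, here is a toy finite-state song generator with invented transitions: it can loop back to earlier motifs, producing varied songs, but none of its parts carries a separate, word-like meaning.

```python
# A toy finite-state song generator (transitions invented for illustration):
# syllables follow one another and can loop back to earlier motifs, giving
# variation in form, while the song as a whole carries one holistic message.
import random

random.seed(2)
TRANSITIONS = {
    "start": ["A"],
    "A": ["B"],
    "B": ["C", "A"],        # may loop back to an earlier motif
    "C": ["D", "B"],
    "D": ["end"],
}

def sing():
    state, song = "start", []
    while True:
        state = random.choice(TRANSITIONS[state])
        if state == "end":
            return "-".join(song)
        song.append(state)

for _ in range(3):
    print(sing())   # e.g. A-B-C-D, A-B-A-B-C-D, A-B-C-B-C-D ...
```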

By contrast, other types of animals have bare-bones modes of expression without the same melodic capacity. Bees communicate visually, using precise waggles to indicate sources of food to their peers; other primates can make a range of sounds, comprising warnings about predators and other messages.

Humans, according to Miyagawa, Berwick and Okanoya, fruitfully combined these systems. We can communicate essential information, like bees or primates — but like birds, we also have a melodic capacity and an ability to recombine parts of our uttered language. For this reason, our finite vocabularies can generate a seemingly infinite string of words. Indeed, the researchers suggest that humans first had the ability to sing, as Darwin conjectured, and then managed to integrate specific lexical elements into those songs.

“It’s not a very long step to say that what got joined together was the ability to construct these complex patterns, like a song, but with words,” Berwick says.

As they note in the paper, some of the “striking parallels” between language acquisition in birds and humans include the phase of life when each is best at picking up languages, and the part of the brain used for language. Another similarity, Berwick notes, relates to an insight of celebrated MIT professor emeritus of linguistics Morris Halle, who, as Berwick puts it, observed that “all human languages have a finite number of stress patterns, a certain number of beat patterns. Well, in birdsong, there is also this limited number of beat patterns.”

Birds and bees

Norbert Hornstein, a professor of linguistics at the University of Maryland, says the paper has been “very well received” among linguists, and “perhaps will be the standard go-to paper for language-birdsong comparison for the next five years.”

Hornstein adds that he would like to see further comparison of birdsong and sound production in human language, as well as more neuroscientific research, pertaining to both birds and humans, to see how brains are structured for making sounds.

The researchers acknowledge that further empirical studies on the subject would be desirable.

“It’s just a hypothesis,” Berwick says. “But it’s a way to make explicit what Darwin was talking about very vaguely, because we know more about language now.”

Miyagawa, for his part, asserts it is a viable idea in part because it could be subject to more scrutiny, as the communication patterns of other species are examined in further detail. “If this is right, then human language has a precursor in nature, in evolution, that we can actually test today,” he says, adding that bees, birds and other primates could all be sources of further research insight.

MIT-based research in linguistics has largely been characterized by the search for universal aspects of all human languages. With this paper, Miyagawa, Berwick and Okanoya hope to spur others to think of the universality of language in evolutionary terms. It is not just a random cultural construct, they say, but based in part on capacities humans share with other species. At the same time, Miyagawa notes, human language is unique, in that two independent systems in nature merged, in our species, to allow us to generate unbounded linguistic possibilities, albeit within a constrained system.

“Human language is not just freeform, but it is rule-based,” Miyagawa says. “If we are right, human language has a very heavy constraint on what it can and cannot do, based on its antecedents in nature.”

(Source: web.mit.edu)

Filed under brain evolution linguistics communication language birdsong neuroscience science
