Neuroscience

Articles and news from the latest research reports.

Posts tagged language

134 notes

Study shows humans and apes learn language differently

How do children learn language? Many linguists believe that the stages a child goes through when learning language mirror the stages of language development in primate evolution. In a paper published in the Proceedings of the National Academy of Sciences, Charles Yang of the University of Pennsylvania suggests that if this were true, then small children and non-human primates would use language the same way. He then uses statistical analysis to show that this is not the case: the language of small children uses grammar, while language in non-human primates relies on imitation.

Yang examines two hypotheses about language development in children. One says that children learn how to put words together by imitating the word combinations of adults. The other states that children learn to combine words by following grammatical rules.

Linguists who support the idea that children are parroting point to the fact that children appear to combine the same words in the same ways. For example, an English speaker can put either the determiner “a” or the determiner “the” in front of a singular noun. “A door” and “the door” are both grammatically correct, as are “a cat” and “the cat.” However, with most singular nouns, children tend to use either “a” or “the” but not both. This suggests that children are mimicking strings of words without understanding the grammatical rules for combining them.

Yang, however, points out that the lack of diversity in children’s word combinations could reflect the way adults use language. Adults are more likely to use “a” with some words and “the” with others. “The bathroom” is more common than “a bathroom,” while “a bath” is more common than “the bath.”

To test this conjecture, Yang analyzed language samples from young children who had just begun making two-word combinations. He calculated how diverse the noun-determiner combinations would be if nouns and determiners were combined independently, and found that the diversity of the children’s language matched this profile. The children’s word combinations were far more diverse than they would be if the children were simply imitating word strings.

Yang also studied language diversity in Nim Chimpsky, a chimpanzee who was taught American Sign Language. Nim’s word combinations are much less diverse than would be expected if he were combining words independently, which indicates that he was probably mimicking rather than using grammar.

This difference in language use indicates that human children do not acquire language in the same way that non-human primates do. Young children learn rules of grammar very quickly, while a chimpanzee who has spent many years learning language continues to imitate rather than combine words according to grammatical rules.
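Yang's comparison can be illustrated with a small permutation test: shuffle the observed determiners across the observed nouns to estimate how often a noun would appear with both “a” and “the” if the two were combined independently. The function names and the toy sample below are illustrative only, not Yang's actual data or method.

```python
import random
from collections import defaultdict

def overlap(pairs):
    """Fraction of distinct nouns that appear with BOTH determiners
    in a sample of (determiner, noun) pairs."""
    seen = defaultdict(set)
    for det, noun in pairs:
        seen[noun].add(det)
    return sum(len(dets) == 2 for dets in seen.values()) / len(seen)

def expected_overlap(pairs, trials=2000, seed=0):
    """Expected overlap if determiners and nouns combined independently:
    repeatedly shuffle the determiners across the observed nouns."""
    rng = random.Random(seed)
    dets = [d for d, _ in pairs]
    nouns = [n for _, n in pairs]
    total = 0.0
    for _ in range(trials):
        rng.shuffle(dets)
        total += overlap(list(zip(dets, nouns)))
    return total / trials

# Hypothetical toy sample in which, as in the child speech described
# above, each noun is paired with only one determiner.
sample = [("the", "door"), ("the", "door"), ("a", "cat"), ("a", "cat"),
          ("the", "bathroom"), ("a", "bath"), ("a", "dog"), ("the", "car")]
print("observed overlap:", overlap(sample))
print("overlap expected under independence:", expected_overlap(sample))
```

A sample far below the independence baseline suggests imitation; a sample that matches it suggests productive combination.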

Filed under primates language language development grammatical rules linguistics psychology neuroscience science

116 notes

Solving the ‘Cocktail Party Problem’

Many smartphones claim to filter out background noise, but they’ve got nothing on the human brain. We can tune in to just one speaker at a noisy cocktail party with little difficulty—an ability that has been a scientific mystery since the early 1950s. Now, researchers argue that the competing noise of other partygoers is filtered out in the brain before it reaches regions involved in higher cognitive functions, such as language and attention control. Their experiments were the first to demonstrate this process.

The scientists didn’t do anything as social as attend a noisy party. Instead, Charles Schroeder, a psychiatrist at the Columbia University College of Physicians and Surgeons in New York City, and colleagues recorded the brain activity of six people with intractable epilepsy who required brain surgery. To identify the part of the brain responsible for their seizures, the patients underwent 1 to 4 weeks of observation through electrocorticography (ECoG), a technique that provides precise neural recordings via electrodes placed directly on the surface of the brain. Schroeder and his team conducted their experiments during this observation period, using the ECoG recordings.

The researchers showed the patients two videos simultaneously, each of a person telling a 9- to 12-second story; the patients were asked to concentrate on just one speaker. To determine which neural recordings corresponded to the “ignored” and “attended” speech, the team reconstructed speech patterns from the brain’s electrical activity using a mathematical model, then matched the reconstructed patterns with the original patterns coming from the ignored and attended speakers.

The patients’ brains had registered both attended and ignored speech, though they showed some preference for the attended speech, the researchers report online in Neuron. Because the researchers were able to record several regions of the patients’ brains, they saw that regions associated with “higher-order” abilities—like the inferior frontal cortex, which is involved with language—had only representations of attended speech. Moreover, this representation of attended speech improved as the speaker’s story unfolded. These findings support a continuous model of attention—called the “selective entrainment hypothesis”—in which the brain tracks and becomes increasingly selective to a particular voice.

The research supports the selective entrainment hypothesis, agrees Jason Bohland, director of Boston University’s Quantitative Neuroscience Laboratory, but it “doesn’t necessarily tell us how that happens. That’s a really hard question, and is still left very much up in the air.”

Though a technology less invasive than ECoG would be needed, Bohland and Schroeder agree that this research could help provide good clinical markers for people with certain social disorders. People with attention deficit disorder, for example, may struggle to track specific voices or to filter out unwanted neural representations of sounds, and those problems should be reflected in their brain activity.

Schroeder explained that this study was a part of a new wave of research that aims to “approximate a map of the total brain circuit that’s involved in [complex] things like speech and music perception, which people consider—rightly or wrongly—to be uniquely human.”
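The matching step can be sketched as a correlation test: whichever stimulus envelope correlates best with the envelope reconstructed from neural activity is taken to be the attended one. The synthetic signals below stand in for real ECoG reconstructions, and the function names are invented for illustration; this is not the team's actual model.

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation between two equal-length signals."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

def classify_attention(reconstructed, env_a, env_b):
    """Label which of two stimulus envelopes the reconstruction matches best."""
    r_a = pearson(reconstructed, env_a)
    r_b = pearson(reconstructed, env_b)
    return ("a" if r_a > r_b else "b"), r_a, r_b

# Synthetic demo: the "neural" reconstruction is a noisy copy of speaker A's
# envelope, standing in for the output of a reconstruction model.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 1000)
env_a = np.abs(np.sin(2 * np.pi * 0.7 * t))  # envelope of attended speaker
env_b = np.abs(np.sin(2 * np.pi * 1.3 * t))  # envelope of ignored speaker
reconstructed = env_a + 0.5 * rng.standard_normal(t.size)

label, r_a, r_b = classify_attention(reconstructed, env_a, env_b)
print(f"matched speaker: {label} (r_a={r_a:.2f}, r_b={r_b:.2f})")
```

Even with heavy noise, the reconstruction correlates far better with the attended envelope than with the ignored one, which is the logic behind reading attention out of the recordings.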

Filed under brain cognitive function cocktail party attention language psychology neuroscience science

94 notes

The great orchestral work of speech

What goes on inside our heads resembles an orchestra. For Peter Hagoort, Director at the Max Planck Institute for Psycholinguistics, this image is an apt one for explaining how speech arises in the human brain. “There are different orchestra members and different instruments, all playing in time with each other, and sounding perfect together.”

When we speak, we transform our thoughts into a linear sequence of sounds. When we understand language, exactly the opposite occurs: we deduce an interpretation from the speech sounds we hear. Closely connected regions of the brain – such as Broca’s area and Wernicke’s area – are involved in both processes, and these form the neurobiological basis of our capacity for language.

The 58-year-old scientist, who has had a strong interest in language and literature since his youth, has been searching for the neurobiological foundations of our communication since the 1990s. Using imaging techniques, he observes the brain “in action” and tries to find out how this complex organ controls the way we speak and understand speech.

Making language visible

Hagoort is one of the first researchers to combine psychological theories with neuroscientific methods in his efforts to understand this complex interaction. Because this is not possible without the very latest technology, Hagoort established the Nijmegen-based Donders Centre for Cognitive Neuroimaging in 1999, where an interdisciplinary team of researchers uses state-of-the-art equipment, such as MRI and PET scanners, to find out how the brain succeeds in combining functions like memory, speech, observation, attention, feelings and consciousness.

The Dutch scientist is particularly fascinated by the temporal sequence of speech. He discovered, for example, that the brain begins by collecting grammatical information about a word before it compiles information about its sound. This first reliable real-time measurement of speech production in the brain gave researchers a basis for observing speakers in the act of speaking, yielding new insights into why the complex orchestral work of language is impaired, for example, after strokes and in disorders like dyslexia and autism.

“Language is an essential component of human culture, which distinguishes us from other species,” says Hagoort. “Young children understand language before they even start to speak. They master complex grammatical structures before they can add 3 and 13. Our brain is tuned for language at a very early stage,” stresses Hagoort, referring to research findings. The exact composition of the orchestra in our heads and the nature of the score on which the process of speech is based are topics which Hagoort continues to research.

Filed under speech production speech language linguistics brain neuroimaging neuroscience science

203 notes

Study shows human brain able to discriminate syllables three months prior to birth

A team of French researchers has discovered that the human brain is capable of distinguishing between different types of syllables as early as three months before full-term birth. As they describe in their paper published in the Proceedings of the National Academy of Sciences, the team found via brain scans that babies born up to three months premature are capable of some language processing.

Many studies have been conducted on full-term babies to gauge their mental capabilities at birth. Such studies have shown that newborns can distinguish their mother’s voice from others, for example, and can even recognize the elements of short stories. Still puzzling, however, is whether some of what newborns demonstrate is innate or learned immediately after birth. To learn more, the researchers enlisted the assistance of several parents of premature babies and their offspring. Babies born as early as 28 weeks (full term is 37 weeks) had their brains scanned using bedside functional optical imaging while sounds (soft voices) were played for them.

Three months before full term, the team notes, neurons in the brain are still migrating to their final destinations, initial connections between the upper brain regions are still forming, and the neural linkages between the ears and the brain are still being created. All of this indicates a brain very much in flux, still in the process of becoming the phenomenally complicated organ humans are known for, which would seem to suggest that very limited communication skills, if any, would have developed.

The researchers found, however, that even at a time when the brain has not fully developed, the premature infants were able to tell the difference between female and male voices, and to distinguish between the syllables “ba” and “ga.” They noted also that the infants used the same parts of the brain to process sounds as adults do. This, the researchers conclude, shows that linguistic connections in the brain develop before birth and therefore do not need to be acquired afterwards, suggesting that at least some language abilities are innate.

Filed under infants premature babies language language processing brain neuroscience psychology science

360 notes

How human language could have evolved from birdsong

Linguistics and biology researchers propose a new theory on the deep roots of human speech.

“The sounds uttered by birds offer in several respects the nearest analogy to language,” Charles Darwin wrote in “The Descent of Man” (1871), while contemplating how humans learned to speak. Language, he speculated, might have had its origins in singing, which “might have given rise to words expressive of various complex emotions.”

Now researchers from MIT, along with a scholar from the University of Tokyo, say that Darwin was on the right path. The balance of evidence, they believe, suggests that human language is a grafting of two communication forms found elsewhere in the animal kingdom: first, the elaborate songs of birds, and second, the more utilitarian, information-bearing types of expression seen in a diversity of other animals.

“It’s this adventitious combination that triggered human language,” says Shigeru Miyagawa, a professor of linguistics in MIT’s Department of Linguistics and Philosophy, and co-author of a new paper published in the journal Frontiers in Psychology.

The idea builds upon Miyagawa’s conclusion, detailed in his previous work, that there are two “layers” in all human languages: an “expression” layer, which involves the changeable organization of sentences, and a “lexical” layer, which relates to the core content of a sentence. His conclusion is based on earlier work by linguists including Noam Chomsky, Kenneth Hale and Samuel Jay Keyser.

Based on an analysis of animal communication, and using Miyagawa’s framework, the authors say that birdsong closely resembles the expression layer of human sentences — whereas the communicative waggles of bees, or the short, audible messages of primates, are more like the lexical layer. At some point, between 50,000 and 80,000 years ago, humans may have merged these two types of expression into a uniquely sophisticated form of language.

“There were these two pre-existing systems,” Miyagawa says, “like apples and oranges that just happened to be put together.”

These kinds of adaptations of existing structures are common in natural history, notes Robert Berwick, a co-author of the paper, who is a professor of computational linguistics in MIT’s Laboratory for Information and Decision Systems, in the Department of Electrical Engineering and Computer Science.

“When something new evolves, it is often built out of old parts,” Berwick says. “We see this over and over again in evolution. Old structures can change just a little bit, and acquire radically new functions.”

A new chapter in the songbook

The new paper, “The Emergence of Hierarchical Structure in Human Language,” was co-written by Miyagawa, Berwick and Kazuo Okanoya, a biopsychologist at the University of Tokyo who is an expert on animal communication.

To consider the difference between the expression layer and the lexical layer, take a simple sentence: “Todd saw a condor.” We can easily create variations of this, such as, “When did Todd see a condor?” This rearranging of elements takes place in the expression layer and allows us to add complexity and ask questions. But the lexical layer remains the same, since it involves the same core elements: the subject, “Todd,” the verb, “to see,” and the object, “condor.”
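The two-layer distinction above can be mocked up in a few lines: the lexical core stays fixed while the expression layer rearranges it. This is only a toy illustration of the idea, not Miyagawa's formalism.

```python
# Lexical layer: the fixed core content of the sentence.
lexical = {"subject": "Todd", "verb": ("see", "saw"), "object": "a condor"}

# Expression layer: different arrangements of the same core elements.
def declarative(lex):
    base, past = lex["verb"]
    return f"{lex['subject']} {past} {lex['object']}."

def wh_question(lex):
    base, past = lex["verb"]
    return f"When did {lex['subject']} {base} {lex['object']}?"

print(declarative(lexical))   # the statement form
print(wh_question(lexical))   # the question form of the same core
```

Both outputs share one lexical layer; only the expression layer differs, which is exactly the variation the paragraph above describes.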

Birdsong lacks a lexical structure. Instead, birds sing learned melodies with what Berwick calls a “holistic” structure; the entire song has one meaning, whether about mating, territory or other things. The Bengalese finch, as the authors note, can loop back to parts of previous melodies, allowing for greater variation and communication of more things; a nightingale may be able to recite from 100 to 200 different melodies.

By contrast, other types of animals have bare-bones modes of expression without the same melodic capacity. Bees communicate visually, using precise waggles to indicate sources of food to their peers; other primates can make a range of sounds, including warnings about predators and other messages.

Humans, according to Miyagawa, Berwick and Okanoya, fruitfully combined these systems. We can communicate essential information, like bees or primates — but like birds, we also have a melodic capacity and an ability to recombine parts of our uttered language. For this reason, our finite vocabularies can generate a seemingly infinite variety of word strings. Indeed, the researchers suggest that humans first had the ability to sing, as Darwin conjectured, and then managed to integrate specific lexical elements into those songs.

“It’s not a very long step to say that what got joined together was the ability to construct these complex patterns, like a song, but with words,” Berwick says.

As they note in the paper, some of the “striking parallels” between language acquisition in birds and humans include the phase of life when each is best at picking up languages, and the part of the brain used for language. Another similarity, Berwick notes, relates to an insight of celebrated MIT professor emeritus of linguistics Morris Halle, who, as Berwick puts it, observed that “all human languages have a finite number of stress patterns, a certain number of beat patterns. Well, in birdsong, there is also this limited number of beat patterns.”

Birds and bees

Norbert Hornstein, a professor of linguistics at the University of Maryland, says the paper has been “very well received” among linguists, and “perhaps will be the standard go-to paper for language-birdsong comparison for the next five years.”

Hornstein adds that he would like to see further comparison of birdsong and sound production in human language, as well as more neuroscientific research, pertaining to both birds and humans, to see how brains are structured for making sounds.

The researchers acknowledge that further empirical studies on the subject would be desirable.

“It’s just a hypothesis,” Berwick says. “But it’s a way to make explicit what Darwin was talking about very vaguely, because we know more about language now.”

Miyagawa, for his part, asserts it is a viable idea in part because it could be subject to more scrutiny, as the communication patterns of other species are examined in further detail. “If this is right, then human language has a precursor in nature, in evolution, that we can actually test today,” he says, adding that bees, birds and other primates could all be sources of further research insight.

MIT-based research in linguistics has largely been characterized by the search for universal aspects of all human languages. With this paper, Miyagawa, Berwick and Okanoya hope to spur others to think of the universality of language in evolutionary terms. It is not just a random cultural construct, they say, but based in part on capacities humans share with other species. At the same time, Miyagawa notes, human language is unique, in that two independent systems in nature merged, in our species, to allow us to generate unbounded linguistic possibilities, albeit within a constrained system.

“Human language is not just freeform, but it is rule-based,” Miyagawa says. “If we are right, human language has a very heavy constraint on what it can and cannot do, based on its antecedents in nature.”

(Source: web.mit.edu)

Filed under brain evolution linguistics communication language birdsong neuroscience science

53 notes

“Simplified” brain lets the iCub robot learn language

For many years, the team directed by Peter Ford Dominey, CNRS Director of Research at Inserm Unit 846, the “Institut pour les cellules souches et cerveau de Lyon” [Lyon Institute for Stem Cell and Brain Research] (Inserm, CNRS, Université Claude Bernard Lyon 1), has been working on the iCub humanoid robot. The robot will now be able to understand what is being said to it and even anticipate the end of a sentence. This technological feat was made possible by the development of a “simplified artificial brain” that reproduces certain types of so-called “recurrent” connections observed in the human brain. The artificial brain system enables the robot to learn, and subsequently understand, new sentences containing a new grammatical structure. It can link two sentences together and even predict how a sentence will end before it is uttered. This research has been published in the journal PLoS ONE.
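One way to picture a "simplified artificial brain" with recurrent connections is a small reservoir-style network: a fixed random recurrent layer encodes word sequences, and only a linear readout is trained to predict the next word. The sketch below is a generic model of that family with an invented toy corpus, not the actual iCub system or its training data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented toy corpus of related grammatical constructions.
sentences = [
    "the robot pushed the ball",
    "the ball was pushed by the robot",
    "the robot took the toy",
    "the toy was taken by the robot",
]
vocab = sorted({w for s in sentences for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}
V, N = len(vocab), 100  # vocabulary size, recurrent-layer size

# Fixed random input and recurrent weights; only the readout is trained.
W_in = rng.normal(0.0, 0.5, (N, V))
W_rec = rng.normal(0.0, 1.0, (N, N)) * 0.1

def run(words):
    """Drive the recurrent layer with a word sequence; return its states."""
    x, states = np.zeros(N), []
    for w in words:
        u = np.zeros(V)
        u[idx[w]] = 1.0
        x = np.tanh(W_in @ u + W_rec @ x)
        states.append(x.copy())
    return np.array(states)

# Train a linear readout to predict the next word at every position.
X, targets = [], []
for s in sentences:
    ws = s.split()
    st = run(ws)
    for t in range(len(ws) - 1):
        X.append(st[t])
        targets.append(idx[ws[t + 1]])
X = np.array(X)
T = np.zeros((len(targets), V))
T[np.arange(len(targets)), targets] = 1.0
W_out = np.linalg.lstsq(X, T, rcond=None)[0]

# The readout anticipates how a familiar sentence will continue.
states = run("the robot pushed the".split())
pred = vocab[int(np.argmax(states[-1] @ W_out))]
print("predicted next word:", pred)
```

Because the recurrent state carries the sentence history, the same readout can anticipate different continuations for different grammatical constructions, which is the kind of sentence-ending prediction the post describes.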

Filed under robots robotics humanoids iCub language language processing neural networks ANN neuroscience science

86 notes

Teaching the brain to speak again

Cynthia Thompson, a world-renowned researcher on stroke and brain damage, will discuss her groundbreaking research on aphasia and the neurolinguistic systems it affects on Feb. 16 at the annual meeting of the American Association for the Advancement of Science (AAAS). An estimated one million Americans suffer from aphasia, which impairs the ability to understand and/or produce spoken and written language.

For three decades, Thompson has played a crucial role in demonstrating the brain’s plasticity, or ability to change. “Not long ago, the conventional wisdom was that people only could recover language within three months to a year after the onset of stroke,” she says. “Today we know that, with appropriate training, patients can make gains as much as 10 years or more after a stroke.”

Thompson has probably contributed more findings on the effects of brain damage on language processing and the ways the brain and language recover from stroke than any other single researcher. Her particular interest is agrammatic aphasia, which impairs abstract knowledge of grammatical sentence structure and makes sentence production and understanding difficult.

Among the first researchers to use functional magnetic resonance imaging to study recovery from stroke, Thompson found that behavior treatment that focused on improving impaired language processing affects not only the ability to understand and produce language but also brain activity.

She found shifts in neural activity in both cerebral hemispheres associated with recovery, with the greatest recovery seen when undamaged regions within the language network engaged by healthy people were recruited, even when those regions normally support other language functions.

"It’s a matter of ‘use it or lose it,’" Thompson says. "The brain has the capacity to learn and relearn throughout life, and it is directly affected by the activities we engage in. Language training that focuses on principles of normal language processing stimulates the recovery of neural networks that support language."

Thompson will discuss research she will conduct as principal investigator of a $12 million National Institutes of Health Clinical Research Center award to study biomarkers of recovery in aphasia.

Working with investigators from a number of universities, Thompson will explore the role blood flow plays in language recovery in chronic stroke patients. In addition, she will conduct cutting-edge, exploratory research using eye tracking to understand how people compute language as they hear it in real time. Eye-tracking techniques have been found to discern subtle problems underlying language deficits in acquired aphasia.

In a landmark 2010 study, she and colleagues discovered two critical variables related to understanding brain damage recovery. They found that stroke not only results in cell death in certain regions of the brain but that it also decreases blood flow (perfusion) to living cells that are adjacent (and sometimes even distant) to the lesion.

Until that study, hypoperfusion (diminished blood flow) was thought only to be associated with acute stroke. Her team also found that greater hypoperfusion led to poorer recovery.

(Source: eurekalert.org)

Filed under language aphasia brain damage stroke neural activity language processing neuroscience science

68 notes

Training speech networks to treat aphasia
About 80,000 people develop aphasia each year in the United States alone. Nearly all of these individuals have difficulty speaking. Some patients (nonfluent aphasics) have trouble producing sounds clearly, making it frustrating for them to speak and difficult for them to be understood. Other patients (fluent aphasics) may select the wrong sound in a word or mix up the order of the sounds; in the latter case, “kitchen” can become “chicken.” The idea of Brown University cognitive scientist Sheila Blumstein is to use guided speech to help people who have suffered stroke-related brain damage rebuild their neural speech infrastructure.
Blumstein has been studying aphasia and the neural basis of language her whole career. She uses brain imaging, acoustic analysis, and other lab-based techniques to study how the brain maps sound to meaning and meaning to sound.
What Blumstein and other scientists believe is that the brain organizes words into networks, linked both by similarity of meaning and similarity of sound. To say “pear,” a speaker will also activate other competing words like “apple” (which competes in meaning) and “bear” (which competes in sound). Despite this competition, normal speakers are able to select the correct word.
In a study published in the Journal of Cognitive Neuroscience in 2010, for example, she and her co-authors used functional magnetic resonance imaging to track neural activation patterns in the brains of 18 healthy volunteers as they spoke English words that had similar sounding “competitors” (“cape” and “gape” differ subtly in the first consonant by voicing, i.e. the timing of the onset of vocal cord vibration). Volunteers also spoke words without similar sounding competitors (“cake” has no voiced competitor in English; gake is not a word). What the researchers found is that neural activation within a network of brain regions was modulated differently when subjects said words that had competitors versus words that did not.
One way this competition-mediated difference is apparent in speech production is that words with competitors are produced differently from words that do not have competitors. For example, the voicing of the “t” in “tot” (with a voiced competitor ‘dot’) is produced with more voicing than the “t” in “top” (there is no ‘dop’ in English). Through acoustic analysis of the speech of people with aphasia, Blumstein has shown that this difference persists, suggesting that their word networks are still largely intact.
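The selection process described above can be caricatured in a few lines of Python. In this hypothetical sketch (the activation values and the noise model are invented for illustration, not taken from the study), the intended word and its sound and meaning competitors each carry some activation, and the most active candidate is produced; adding noise lets competitors occasionally win, a cartoon of selection errors:

```python
import random

rng = random.Random(1)

# Hypothetical activations for the target word and its competitors.
activations = {
    "pear": 1.0,   # target word
    "bear": 0.6,   # sound competitor
    "apple": 0.5,  # meaning competitor
}

def select_word(acts, noise=0.0):
    """Pick the most active candidate after adding Gaussian noise."""
    noisy = {w: a + rng.gauss(0, noise) for w, a in acts.items()}
    return max(noisy, key=noisy.get)

print(select_word(activations))  # noiseless selection: "pear" always wins
errors = sum(select_word(activations, noise=0.5) != "pear" for _ in range(1000))
print(errors > 0)  # with enough noise, competitors sometimes slip through
```

The point of the toy model is only that selection can fail without the network itself being destroyed: weaken the target's advantage, or raise the noise, and sound or meaning competitors start winning.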

Filed under aphasia brain damage language speech production neuroimaging neuroscience science

102 notes

Roots of language in human and bird biology
The genes activated for human speech are similar to the ones used by singing songbirds, new experiments suggest.
These results, which are not yet published, show that gene products produced for speech in the cortical and basal ganglia regions of the human brain correspond to similar molecules in the vocal communication areas of the brains of zebra finches and budgerigars. But these molecules aren’t found in the brains of doves and quails — vocal birds that do not learn their sounds.
"The results suggest that similar behavior and neural connectivity for a convergent complex trait like speech and song are associated with many similar genetic changes," said Duke neurobiologist Erich Jarvis, a Howard Hughes Medical Institute investigator.
Jarvis studies the molecular pathways that songbirds use while learning to sing. In past experiments, he and his collaborators found that songbirds have a connection between the front part of the brain and neurons in the brainstem that control the muscles used to produce song. They have seen this circuit in a more primitive form, related to ultrasonic mating calls, in mice. Humans also have this motor learning pathway for speech.
From this and other work, Jarvis developed the motor theory for the origin of vocal learning, which describes how ancient brain systems used to control movement and motor learning evolved into brain systems for learning and producing song and spoken language.
Gustavo Arriaga, Eric P. Zhou, Erich D. Jarvis. Of Mice, Birds, and Men: The Mouse Ultrasonic Song System Has Some Features Similar to Humans and Song-Learning Birds. PLoS ONE
Gustavo Arriaga, Erich D. Jarvis. Mouse vocal communication system: Are ultrasounds learned or innate? Brain and Language
(Image: iStock)

Filed under language language production speech vocalizations songbirds vocal learning neuroscience science

671 notes

Bilingual babies know their grammar by 7 months
Babies as young as seven months can distinguish between, and begin to learn, two languages with vastly different grammatical structures, according to new research from the University of British Columbia and Université Paris Descartes.
Published today in the journal Nature Communications and presented at the 2013 Annual Meeting of the American Association for the Advancement of Science (AAAS) in Boston, the study shows that infants in bilingual environments use pitch and duration cues to discriminate between languages – such as English and Japanese – with opposite word orders.
In English, a function word comes before a content word (“the dog,” “his hat,” “with friends,” for example) and the content word is longer in duration, while in Japanese or Hindi the order is reversed and the content word is higher in pitch.
"By as early as seven months, babies are sensitive to these differences and use these as cues to tell the languages apart," says UBC psychologist Janet Werker, co-author of the study.
Previous research by Werker and Judit Gervain, a linguist at the Université Paris Descartes and co-author of the new study, showed that babies use frequency of words in speech to discern their significance.
"For example, in English the words ‘the’ and ‘with’ come up a lot more frequently than other words – they’re essentially learning by counting," says Gervain. "But babies growing up bilingual need more than that, so they develop new strategies that monolingual babies don’t necessarily need to use."
"If you speak two languages at home, don’t be afraid, it’s not a zero-sum game," says Werker. "Your baby is very equipped to keep these languages separate and they do so in remarkable ways."

Filed under infants bilingual language language acquisition prosodic cues psychology neuroscience science
