Neuroscience

Articles and news from the latest research reports.

Posts tagged language


Our Brains are Hardwired for Language
People blog, they don’t lbog, and they schmooze, not mshooze. But why is this? Why are human languages so constrained? Can such restrictions unveil the basis of the uniquely human capacity for language?
A groundbreaking study published in PLOS ONE by Prof. Iris Berent of Northeastern University and researchers at Harvard Medical School shows that the brains of individual speakers are sensitive to language universals: syllables that are frequent across languages are recognized more readily than infrequent ones. Simply put, this study shows that language universals are hardwired in the human brain.
LANGUAGE UNIVERSALS
Language universals have been the subject of intense research, but their basis remains elusive. Indeed, the similarities between human languages could result from a host of reasons that are tangential to the language system itself. Syllables like lbog, for instance, might be rare due to sheer historical forces, or because they are just harder to hear and articulate. A more interesting possibility, however, is that these facts could stem from the biology of the language system. Could the unpopularity of lbogs result from universal linguistic principles that are active in every human brain?
THE EXPERIMENT
To address this question, Dr. Berent and her colleagues examined the responses of human brains to distinct syllable types—either ones that are frequent across languages (e.g., blif, bnif) or ones that are infrequent (e.g., bdif, lbif). In the experiment, participants heard one auditory stimulus at a time (e.g., lbif) and were asked to determine whether the stimulus included one syllable or two while their brains were simultaneously imaged.
Results showed that syllables that were infrequent and ill-formed, as determined by their linguistic structure, were harder for people to process. Remarkably, a similar pattern emerged in participants’ brain responses: ill-formed syllables (e.g., lbif) exerted different demands on the brain than well-formed syllables (e.g., blif).
UNIVERSALLY HARDWIRED BRAINS
The localization of these patterns in the brain further sheds light on their origin. If the difficulty in processing syllables like lbif were solely due to unfamiliarity, or to failures of acoustic processing and articulation, then such syllables would be expected to exact costs only on the brain regions associated with memory for familiar words, audition, and motor control. If, in contrast, the dislike of lbif reflects its linguistic structure, then the syllable hierarchy should engage the brain’s traditional language areas.
While syllables like lbif did, in fact, tax auditory brain areas, they exerted no measurable costs with respect to either articulation or lexical processing. Instead, it was Broca’s area—a primary language center of the brain—that was sensitive to the syllable hierarchy.
These results show for the first time that the brains of individual speakers are sensitive to language universals: the brain responds differently to syllables that are frequent across languages (e.g., bnif) than to syllables that are infrequent (e.g., lbif). This is a remarkable finding given that the participants (English speakers) had never encountered most of those syllables before, and it shows that language universals are encoded in human brains.
The fact that the brain activity engaged Broca’s area—a traditional language area—suggests that this brain response might be due to a linguistic principle. This result opens up the possibility that human brains share common linguistic restrictions on the sound pattern of language.
FURTHER EVIDENCE
This proposal is further supported by a second study, also co-authored by Dr. Berent, that recently appeared in the Proceedings of the National Academy of Sciences. That study shows that, like their adult counterparts, newborns are sensitive to the universal syllable hierarchy.
The findings from newborns are particularly striking because newborns have little to no experience with any such syllables. Together, these results demonstrate that the sound patterns of human language reflect shared linguistic constraints that are hardwired in the human brain from birth.

Filed under language broca's area brain activity language universals linguistics psychology neuroscience science


Study provides new insight into how toddlers learn verbs
Parents can help toddlers’ language skills by showing them a variety of examples of different actions, according to new research from the University of Liverpool.
Previous research has shown that verbs pose particular difficulties to toddlers as they refer to actions rather than objects, and actions are often different each time a child sees them.
To find out more about this area of child language, University psychologists asked a group of toddlers to watch one of two short videos.
They then examined whether watching a cartoon star repeat the same action, compared to a character performing three different actions, affected the children’s understanding of verbs.
Developmental psychologist Dr Katherine Twomey said: “Knowledge of how children start to learn language is important to our understanding of how they progress throughout the preschool and school years.
“This is the first study to indicate that showing toddlers similar but, importantly, not identical actions actually helped them understand what a verb refers to, instead of confusing them as you might expect.”
Dr Jessica Horst of the University of Sussex, who collaborated on the research, added: “It is a crucial first step in understanding how what children see affects how they learn verbs and action categories, and provides the groundwork for future studies to examine in more detail exactly what kinds of variability affect how children learn words.”

Filed under language language acquisition child development verb learning psychology neuroscience science


Brain Anatomy Differences Between Deaf, Hearing Depend on First Language Learned
In the first known study of its kind, researchers have shown that the language we learn as children affects brain structure, as does hearing status. The findings are reported in The Journal of Neuroscience.
While research has shown that deaf and hearing people differ in brain anatomy, these studies have been limited to individuals who are deaf and have used American Sign Language (ASL) from birth. But 95 percent of the deaf population in America is born to hearing parents and uses English or another spoken language as a first language, usually through lip-reading. Since language and audition are housed in nearby locations in the brain, understanding which differences are attributable to hearing and which to language is critical to understanding the mechanisms by which experience shapes the brain.
“What we’ve learned to date about differences in brain anatomy in hearing and deaf populations hasn’t taken into account the diverse language experiences among people who are deaf,” says senior author Guinevere Eden, DPhil, director for the Center for the Study of Learning at Georgetown University Medical Center (GUMC).
Eden and her colleagues report on a new structural brain imaging study showing that, in addition to deafness, early language experience – English versus ASL – shapes brain structure. Half of the hearing adults and half of the deaf participants in the study had learned ASL as children from their deaf parents, while the other half had grown up using English with their hearing parents.
“We found that our deaf and hearing participants, irrespective of language experience, differed in the volume of brain white matter in their auditory cortex. But, we also found differences in left hemisphere language areas, and these differences were specific to those whose native language was ASL,” Eden explains.
The research team, which includes Daniel S. Koo, PhD, and Carol J. LaSasso, PhD, of Gallaudet University in Washington, say their findings should impact studies of brain differences in deaf and hearing people going forward.
“Prior research studies comparing brain structure in individuals who are deaf and hearing attempted to control for language experience by only focusing on those who grew up using sign language,” explains Olumide Olulade, PhD, the study’s lead author and post-doctoral fellow at GUMC. “However, restricting the investigation to a small minority of the deaf population means the results can’t be applied to all deaf people.”
(Image: iStockphoto)

Filed under brain structure language hearing auditory cortex deafness neuroscience science


Language Structure… You’re Born with It
Humans are unique in their ability to acquire language. But how? A new study published in the Proceedings of the National Academy of Sciences shows that we are in fact born with basic, foundational knowledge of language, shedding light on the age-old “nature vs. nurture” debate in linguistics.
THE STUDY
While languages differ from each other in many ways, certain aspects appear to be shared across languages. These aspects might stem from linguistic principles that are active in all human brains. A natural question then arises: are infants born with knowledge of how human words might sound? Are infants biased to consider certain sound sequences more word-like than others? “The results of this new study suggest that the sound patterns of human languages are the product of an inborn biological instinct, very much like birdsong,” said Prof. Iris Berent of Northeastern University in Boston, who co-authored the study with a research team from the International School for Advanced Studies in Italy, headed by Dr. Jacques Mehler. The study’s first author is Dr. David Gómez.
BLA, ShBA, LBA
Consider, for instance, the sound-combinations that occur at the beginning of words. While many languages have words that begin by bl (e.g., blando in Italian, blink in English, and blusa in Spanish), few languages have words that begin with lb. Russian is such a language (e.g., lbu, a word related to lob, “forehead”), but even in Russian such words are extremely rare and outnumbered by words starting with bl. Linguists have suggested that such patterns occur because human brains are biased to favor syllables such as bla over lba. In line with this possibility, past experimental research from Dr. Berent’s lab has shown that adult speakers display such preferences, even if their native language has no words resembling either bla or lba. But where does this knowledge stem from? Is it due to some universal linguistic principle, or to adults’ lifelong experience with listening and producing their native language?
THE EXPERIMENT
These questions motivated our team to look carefully at how young babies perceive different types of words. We used near-infrared spectroscopy, a silent and non-invasive technique that tells us how the oxygenation of the cortex (the very first centimeters of gray matter just below the scalp) changes over time, to look at the brain reactions of Italian newborns listening to good and bad word candidates of the kind described above (e.g., blif, lbif).
Working with Italian newborn infants and their families, we observed that newborns react differently to good and bad word candidates, similar to what adults do. Young infants have not learned any words yet, they do not even babble yet, and still they share with us a sense of how words should sound. This finding shows that we are born with the basic, foundational knowledge about the sound pattern of human languages.
It is hard to imagine how differently languages would sound if humans did not share this type of knowledge. We are fortunate that we do, and so our babies come into the world with the certainty that they will readily recognize the sound patterns of words–no matter which language they grow up with.

Filed under language language acquisition speech perception phonology linguistics neuroscience science


Some innate preferences shape the sound of words from birth
Languages are learned, it’s true, but are there also innate bases in the structure of language that precede experience? Linguists have noticed that, despite the huge variability of human languages, there are some preferences in the sound of words that can be found across languages, and they wonder whether this reflects a universal, innate biological basis of language. A SISSA study provides evidence to support this hypothesis, demonstrating that certain preferences in the sound of words are already active in newborn infants.
Take the sound “bl”: how many words starting with that sound can you think of? Blouse, blue, bland… Now try with “lb”: how many can you find? None in English or Italian, and even in other languages such words either don’t exist or are extremely rare. Human languages offer many examples of this kind, and this indicates that in forming words we tend to prefer certain sound combinations over others, irrespective of which language we speak. The fact that this occurs across languages has prompted linguists to hypothesize the existence of biological bases of language (inborn and universal) that precede language learning in humans. Finding evidence to support this hypothesis is, however, far from easy, and the debate between the proponents of this view and those who believe that language is merely the result of learning is still open. But evidence supporting the “universalist” hypothesis has now been provided by a new study conducted by a research team at the International School for Advanced Studies (SISSA) in Trieste and just published in the journal PNAS.
David Gomez, a SISSA research scientist working under the supervision of Jacques Mehler and first author of the paper, and his coworkers decided to observe the brain activity of newborns. “In fact, if it is possible to demonstrate that these preferences are already present within days from birth, when the newborn baby is still unable to speak and presumably has very limited language knowledge, then we can infer that there is an inborn bias that prefers certain words to others”, comments Gomez.
“To monitor the newborns’ brain activity we used a non-invasive technique, i.e., functional near-infrared spectroscopy”, explains Marina Nespor, a SISSA neuroscientist who participated in the study. During the experiments the newborns listened to words starting with normally “preferred” sounds (like “bl”) and to others with uncommon sounds (“lb”). “What we found was that the newborns’ brains reacted in a significantly different manner to the two types of sound”, continues Nespor.
“The brain regions that are activated while the newborns are listening react differently in the two cases”, comments Gomez, “and reflect the preferences observed across languages, as well as the behavioural responses recorded in similar experiments carried out in adults”. “It’s difficult to imagine what languages would sound like if humans didn’t share a common knowledge base”, concludes Gomez. “We are lucky that this common base exists. This way, our children are born with an ability to distinguish words from “non-words” ever since birth, regardless of which language they will then go on to learn”.

Filed under language language acquisition speech perception brain activity psychology neuroscience science


Why do some neurons respond so selectively to words, objects and faces?

Why do neurons respond in such a remarkably selective way? A new study by Professor Jeff Bowers and colleagues at the University of Bristol argues that highly selective neural representations are well suited to co-activating multiple items, such as words, objects and faces, at the same time in short-term memory.


The researchers trained an artificial neural network to remember words in short-term memory. Like a brain, the network was composed of a set of interconnected units that activated in response to inputs; the network ‘learnt’ by changing the strength of connections between units. The researchers then recorded the activation of the units in response to a number of different words.

When the network was trained to store one word at a time in short-term memory, it learned highly distributed codes such that each unit responded to many different words. However, when it was trained to store multiple words at the same time in short-term memory it learned highly selective (‘grandmother cell’) units – that is, after training, single units responded to one word but not any other. This is much like the neurons in the cortex that respond to one face amongst many.

Why did the network learn such highly specific representations when trained to co-activate multiple words at the same time? Professor Bowers and colleagues argue that non-selective representations can support memory for a single word, given that a pattern of activation across many non-selective units can uniquely represent a specific word. However, when multiple patterns are mixed together, the resulting blend pattern is often ambiguous (the so-called ‘superposition catastrophe’).

This ambiguity is easily avoided, however, when the network learns to represent words in a highly selective manner: if one unit codes for the word RACHEL, another for MONICA, and yet another for JOEY, there is no ambiguity when the three units are co-activated.
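The superposition catastrophe can be illustrated with a toy sketch. The patterns and words below are invented for illustration (they are not from the Bristol study): summing two distributed patterns can produce a blend that is identical to the blend of a different pair of words, whereas one-unit-per-word ('grandmother cell') codes always blend uniquely.

```python
import numpy as np

# Hypothetical distributed codes: each word activates several shared units.
distributed = {
    "RACHEL": np.array([1, 1, 0, 0]),
    "MONICA": np.array([0, 1, 1, 0]),
    "JOEY":   np.array([1, 0, 1, 0]),
    "ROSS":   np.array([0, 0, 1, 1]),
}

# Selective ('grandmother cell') codes: one dedicated unit per word.
selective = {w: np.eye(4, dtype=int)[i] for i, w in enumerate(distributed)}

def blend(codes, words):
    """Superimpose patterns, clipping the sum to a binary activation."""
    return np.clip(sum(codes[w] for w in words), 0, 1)

# With distributed codes, storing RACHEL + MONICA yields [1, 1, 1, 0] --
# exactly the same blend as JOEY + MONICA, so the memory is ambiguous.
mix = blend(distributed, ["RACHEL", "MONICA"])
assert np.array_equal(mix, blend(distributed, ["JOEY", "MONICA"]))

# With selective codes, each word owns one unit, so every blend is unique.
assert not np.array_equal(
    blend(selective, ["RACHEL", "MONICA"]),
    blend(selective, ["JOEY", "MONICA"]),
)
```

The sketch shows only why blended distributed patterns lose information; the study itself trained a recurrent network and observed that selective units emerged from learning.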

Professor Bowers said: “Our research provides a possible explanation for the discovery that single neurons in the cortex respond to information in a highly selective manner. It’s possible that the cortex learns highly selective codes in order to support short-term memory.”

The study is published in Psychological Review.

(Source: bristol.ac.uk)

Filed under neural networks grandmother cells neurons language memory STM psychology neuroscience science

161 notes

Speech means using both sides of our brain

We use both sides of our brain for speech, a finding by researchers at New York University and NYU Langone Medical Center that alters previous conceptions about neurological activity. The results, which appear in the journal Nature, also offer insights into addressing speech-related impairments caused by stroke or injury and lay the groundwork for better rehabilitation methods.

“Our findings upend what has been universally accepted in the scientific community—that we use only one side of our brains for speech,” says Bijan Pesaran, an associate professor in NYU’s Center for Neural Science and the study’s senior author. “In addition, now that we have a firmer understanding of how speech is generated, our work toward finding remedies for speech afflictions is much better informed.”

Many in the scientific community have posited that both speech and language are lateralized—that is, we use only one side of our brains for speech, which involves listening and speaking, and language, which involves constructing and understanding sentences. However, the conclusions pertaining to speech generally stem from studies that rely on indirect measurements of brain activity, raising questions about characterizing speech as lateralized.

To address this matter, the researchers directly examined the connection between speech and the neurological process.

Specifically, the study relied on data collected at the NYU ECoG Center, where brain activity is recorded from patients implanted with specialized electrodes placed inside and on the surface of the brain while they perform sensory and cognitive tasks. Here, the researchers examined the brain function of patients with epilepsy, using methods that coincided with their medical treatment.

“Recordings directly from the human brain are a rare opportunity,” says Thomas Thesen, director of the NYU ECoG Center and co-author of the study.

“As such, they offer unparalleled spatial and temporal resolution over other imaging technologies to help us achieve a better understanding of complex and uniquely human brain functions, such as language,” adds Thesen, an assistant professor at NYU Langone.

In their examination, the researchers tested the parts of the brain that were used during speech. Here, the study’s subjects were asked to repeat two “non-words”—“kig” and “pob.” Using non-words as a prompt to gauge neurological activity, the researchers were able to isolate speech from language.

An analysis of brain activity as patients engaged in speech tasks showed that both sides of the brain were used—that is, speech is, in fact, bilateral.

“Now that we have greater insights into the connection between the brain and speech, we can begin to develop new ways to aid those trying to regain the ability to speak after a stroke or injuries resulting in brain damage,” observes Pesaran. “With this greater understanding of the speech process, we can retool rehabilitation methods in ways that isolate speech recovery and that don’t involve language.”

(Source: nyu.edu)

Filed under speech language brain activity neuroimaging neuroscience science

6,730 notes

Epilepsy drug turns out to help adults acquire perfect pitch and learn language like kids
An international team of researchers believes it has discovered a means of reopening “critical periods” in brain development, allowing adults to acquire abilities — such as perfect pitch or fluency in language — that could previously only be acquired early in life.
According to the study in Frontiers in Systems Neuroscience, the mood-stabilizing drug valproate allows the adult brain to absorb new information as effortlessly as it did during critical windows in childhood.
A critical period is “a fixed window of time, usually early in an organism’s lifespan, during which experience has lasting effects on the development of brain function and behavior.” They are, for example, what allows children to enter into language without any formal training in grammar or vocabulary.
The researchers postulated that because such periods close when enzymes “impose ‘brakes’ on neuroplasticity,” a drug that blocks the production of those enzymes might be able to “reopen critical-period neuroplasticity.”

Filed under brain development language valproate critical period neuroplasticity neuroscience science

147 notes

Brain research provides insight into language learning

Anyone who has tried to learn a second language knows how difficult it is to absorb new words and use them to accurately express ideas in a completely new cultural format. Now, research into some of the fundamental ways the brain accepts information and tags it could lead to new, more effective ways for people to learn a second language.

Tests have shown that the human brain uses the same neuron system to see an action and to understand an action described in language. Researchers at Arizona State University have been testing the boundaries of this hypothesis, which focuses on the operation of the mirror neuron system (MNS). The ASU group has found that the MNS can be modified by language use, and that the modification can slightly change visual perception.  

The work focuses on how the brain receives and classifies information that a person sees (an action, like one person giving another a pencil), and tests how the brain receives the information from a description of an action (simulation), like “Cameron gives Annagrace a pencil.”

“We tested the idea that the mirror neuron system, which is part of the motor system, is used in the simulation process,” said Arthur Glenberg, an ASU professor of psychology. “The MNS is active both when a person takes an action (e.g., giving a pencil), and when that action is observed (witnessing the pencil being given). Supposedly, the MNS allows us to infer the intentions of other people so that when Jane sees Cameron act, her MNS resonates, and then Jane understands why she would give Annagrace the pencil and infers that that is the reason why Cameron gives Annagrace the pencil.”

Glenberg, Noah Zarr, formerly an ASU psychology major and now a graduate student at Indiana University, and Ryan Ferguson, a graduate student in ASU’s Cognitive Science training area in the Department of Psychology, recently published their findings in the paper “Language comprehension warps the mirror neuron system,” in Frontiers in Human Neuroscience. This research began with Zarr’s honors thesis.

“The MNS has been associated with many social behaviors, such as action, understanding and empathy, as well as language understanding,” Glenberg explained. “Previous work has demonstrated that adapting the MNS can affect language comprehension. But no one had yet shown that the process of language comprehension can itself change the MNS.

“The question becomes, when Jane reads, ‘Cameron gives Annagrace the pencil,’ is she using her MNS just like when she sees Cameron give the pencil?” Glenberg asks. “To test this idea, we used the fact that the MNS is used in both action and perception of action, and the idea that repeated use of a neural system leads to adaptation of that system.   

“So, in the tests, participants read a bunch of transfer sentences,” Glenberg explained. “We then show them a bunch of videos of transfer. We have shown that after reading the sentences, people are impaired (a little bit) in perceiving the transfer in the videos, which means the reading modifies the same MNS used in action understanding.”

While the work explores the boundaries of a theory on comprehension, there are applications in which it could be employed, Glenberg said. 

“If language comprehension is a simulation process that uses neural systems of action, then perhaps we can better teach kids how to understand what they read by getting them to literally simulate the actions,” he explained.

Glenberg added that part of his ongoing research into the MNS, the system that allows us to decipher what we see and understand the intent of language, is to test the idea of simulation and how it can help Latino English language learners read better in English.

(Source: asunews.asu.edu)

Filed under mirror neuron system language acquisition language learning plasticity neuroscience science

244 notes

Researchers map brain areas vital to understanding language
When reading text or listening to someone speak, we construct rich mental models that allow us to draw conclusions about other people, objects, actions, events, mental states and contexts. This ability to understand written or spoken language, called “discourse comprehension,” is a hallmark of the human mind and central to everyday social life. In a new study, researchers uncovered the brain mechanisms that underlie discourse comprehension.
The study appears in Brain: A Journal of Neurology.
With his team, study leader Aron Barbey, a professor of neuroscience, of psychology, and of speech and hearing science at the University of Illinois, previously had mapped general intelligence, emotional intelligence and a host of other high-level cognitive functions. Barbey is the director of the Decision Neuroscience Laboratory at the Beckman Institute for Advanced Science and Technology at Illinois.
To investigate the brain regions that underlie discourse comprehension, the researchers studied a group of 145 American male Vietnam War veterans who sustained penetrating head injuries during combat. Barbey said these shrapnel-induced injuries typically produced focal brain damage, unlike injuries caused by stroke or other neurological disorders that affect multiple regions. These focal injuries allowed the researchers to pinpoint the structures that are critically important to discourse comprehension.
“Neuropsychological patients with focal brain lesions provide a valuable opportunity to study how different brain structures contribute to discourse comprehension,” Barbey said.
A technique called voxel-based lesion-symptom mapping allowed the team to pool data from the veterans’ CT scans to create a collective, three-dimensional map of the cerebral cortex. They divided this composite brain into units called voxels (the three-dimensional counterparts of two-dimensional pixels). This allowed them to compare the discourse comprehension abilities of patients with damage to a particular voxel or cluster of voxels with those of patients without injuries to those brain regions.
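The logic of voxel-based lesion-symptom mapping can be sketched in a few lines. The code below uses entirely simulated data (the patient count matches the study, but the toy voxel grid, lesion probability, scores, and minimum-lesion threshold are invented for illustration): at each voxel, it compares the comprehension scores of patients with and without damage there.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated data: 145 patients, binary lesion maps over a toy grid of
# 50 voxels (True = voxel damaged), and one comprehension score each.
n_patients, n_voxels = 145, 50
lesions = rng.random((n_patients, n_voxels)) < 0.2
scores = rng.normal(100, 15, n_patients)

# Lesion-symptom mapping, in sketch form: at each voxel, run a two-sample
# t-test comparing patients with vs without damage at that location.
t_map = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    damaged = scores[lesions[:, v]]
    intact = scores[~lesions[:, v]]
    if len(damaged) >= 5 and len(intact) >= 5:  # skip rarely lesioned voxels
        t_map[v] = stats.ttest_ind(damaged, intact).statistic

# Voxels where damage predicts markedly lower scores (strongly negative t)
# would be flagged as critical, after multiple-comparison correction.
```

In the actual study the statistics are run over real CT-derived lesion maps and corrected for multiple comparisons; this sketch only conveys the per-voxel comparison that lets focal damage pinpoint critical regions.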
The researchers identified a network of brain areas in the frontal and parietal cortex that are essential to discourse comprehension.
“Rather than engaging brain regions that are classically involved in language processing, our results indicate that discourse comprehension depends on an executive control network that helps integrate incoming language with prior knowledge and experience,” Barbey said. Executive control, also known as executive function, refers to the ability to plan, organize and regulate one’s behavior.
“The findings help us understand the neural foundations of discourse comprehension, and suggest that core elements of discourse processing emerge from a network of brain regions that support language processing and executive functions. The findings offer new insights into basic questions about the nature of discourse comprehension,” Barbey said, “and could offer new targets for clinical interventions to help patients with cognitive-communication disorders.
“Discourse comprehension is a hallmark of human social behavior,” Barbey said. “By studying the mechanisms that underlie these abilities, we’re able to advance our understanding of the remarkable cognitive and neural architecture from which language comprehension emerges.”

Filed under discourse comprehension cerebral cortex language language processing neuroimaging neuroscience science
