Neuroscience

Articles and news from the latest research reports.

Posts tagged language

60 notes

Explaining the origins of word order using information theory

The majority of languages — roughly 85 percent of them — can be sorted into two categories: those, like English, in which the basic sentence form is subject-verb-object (“the girl kicks the ball”), and those, like Japanese, in which the basic sentence form is subject-object-verb (“the girl the ball kicks”).

The reason for the difference has remained somewhat mysterious, but researchers from MIT’s Department of Brain and Cognitive Sciences now believe that they can account for it using concepts borrowed from information theory, the discipline, invented almost singlehandedly by longtime MIT professor Claude Shannon, that led to the digital revolution in communications. The researchers will present their hypothesis in an upcoming issue of the journal Psychological Science.

Shannon was largely concerned with faithful communication in the presence of “noise” — any external influence that can corrupt a message on its way from sender to receiver. Ted Gibson, a professor of cognitive sciences at MIT and corresponding author on the new paper, argues that human speech is an example of what Shannon called a “noisy channel.”

“If I’m getting an idea across to you, there’s noise in what I’m saying,” Gibson says. “I may not say what I mean — I pick up the wrong word, or whatever. Even if I say something right, you may hear the wrong thing. And then there’s ambient stuff in between on the signal, which can screw us up. It’s a real problem.” In their paper, the MIT researchers argue that languages develop the word order rules they do in order to minimize the risk of miscommunication across a noisy channel.
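
The noisy-channel framing lends itself to a toy simulation. The Python sketch below is not taken from the paper; it contrasts two made-up encodings of “the girl kicks the ball”, one in which who-did-what is signalled only by word position and one in which hypothetical role markers ride along on the nouns. A channel that randomly drops words degrades the position-only code much more, which conveys only the general idea that encodings differ in how well they survive noise, not the authors’ specific analysis.

import random

random.seed(1)

# Two hypothetical encodings of "the girl kicks the ball" (illustrative only).
ORDER_ONLY = ["girl", "kicks", "ball"]                  # roles signalled purely by position
ROLE_MARKED = ["girl-AGENT", "ball-PATIENT", "kicks"]   # roles carried by explicit markers

def channel(words, drop_prob=0.3):
    # Noisy channel: each word is independently lost with probability drop_prob.
    return [w for w in words if random.random() > drop_prob]

def agent_recoverable(received, marked):
    # Can the listener still tell who did the kicking?
    if marked:
        # Any surviving word that carries the AGENT marker identifies the agent.
        return any(w.endswith("-AGENT") for w in received)
    # With position alone, a deletion shifts everything, so all three words must arrive.
    return len(received) == 3

def recovery_rate(words, marked, trials=10000):
    return sum(agent_recoverable(channel(words), marked) for _ in range(trials)) / trials

print("order-only encoding :", recovery_rate(ORDER_ONLY, marked=False))
print("role-marked encoding:", recovery_rate(ROLE_MARKED, marked=True))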

[E. Gibson, S.T. Piantadosi, K. Brink, L. Bergen, E. Lim, and R. Saxe. A noisy-channel account of crosslinguistic word order variation. Psychological Science, accepted, 2012]

Filed under language information theory miscommunication communication word order neuroscience science

25 notes

A new field of developmental neuroscience changes our understanding of the early years of human life

Biological Embedding of Early Social Adversity: From Fruit Flies to Kindergartners, a special volume published in the Proceedings of the National Academy of Sciences (1, 2, 3) and authored largely by researchers of the Canadian Institute for Advanced Research (CIFAR), sets out an emerging new field of the developmental science of childhood adversity.

The implications of the research are far-reaching, from new approaches to learning and language acquisition, to new considerations of the health effects of social environments on large populations, to policies for early childhood care and education.

"CIFAR’s multidisciplinary and international program in early childhood development is transforming our understanding of how early life experiences affect the development of the brain and in so doing set a lifelong trajectory," says Dr. Alan Bernstein, CIFAR President & CEO. "This research is providing the scientific basis for public policy concerning the critical window to provide the optimal conditions that will enable our children to grow up to be well-adjusted, well-educated and productive individuals."

Filed under brain development developmental neuroscience language language acquisition neuroscience psychology science

15,151 notes

A new font tailored for people with dyslexia is now available for use on mobile devices, thanks to a design by Abelardo Gonzalez, a mobile app designer from New Hampshire. Gonzalez, in collaboration with educators, has selected a font that many people with dyslexia find easier to read. Even better, the new font is free and has already been made available for some word processors and ebook readers. The font, called OpenDyslexic, has also been added to the font choices used by Instapaper—an app that saves copies of web pages for later reading.

Filed under brain language dyslexia neuroscience psychology education science

41 notes

New research published in Psychological Science, a journal of the Association for Psychological Science, examines the nuanced relationship between language and different types of perception.

Bilingual Infants Can Tell Unfamiliar Languages Apart

Speaking more than one language can improve our ability to control our behavior and focus our attention, recent research has shown. But are there any advantages for bilingual children before they can speak in full sentences? We know that bilingual children can tell if a person is speaking one of their native languages or the other, even when there is no sound, by watching the speaker’s mouth for visual cues. But Núria Sebastián-Gallés of Universitat Pompeu Fabra and colleagues wanted to know whether bilingual infants could also do this with two unfamiliar languages. They studied 8-month-old infants, half of whom lived in either Spanish- or Catalan-speaking households and half of whom lived in Spanish-Catalan bilingual households. The researchers looked at whether the infants could discriminate between English and French, two unfamiliar languages, using only visual cues. They found that the bilingual infants could tell the difference between the two languages, while the infants who lived in single-language households could not. These findings suggest that infants who are immersed in bilingual environments are more sensitive to the differences in visual cues associated with the sounds of various languages.

Lead author: Núria Sebastián-Gallés

Skilled Deaf Readers Have an Enhanced Perceptual Span in Reading

Though people born deaf are better able to use information from peripheral vision than those who can hear, they have a harder time learning to read. Researchers have proposed that the extra information coming in could distract from, rather than enhance, the process of reading. But no research has actually compared visual attention in reading between hearing and deaf readers. In a new study, Nathalie Bélanger of the University of California, San Diego and colleagues investigated this issue by measuring the perceptual span, or the number of letter spaces used when reading, of skilled deaf readers, less-skilled deaf readers, and hearing readers. The experimenters manipulated the number of letter spaces that the participants saw while reading text on a screen. They found that, compared to the other two groups, skilled deaf readers read fastest when they were given the largest number of letter spaces, showing that they had the largest perceptual span. Even with this wider span, they were able to read just as fast as skilled hearing readers. Contrary to previous hypotheses, these findings suggest that enhanced visual attention and perceptual span are not the cause of reading difficulties common among deaf individuals.

Lead author: Nathalie N. Bélanger
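
The window manipulation described above is commonly implemented as a gaze-contingent “moving window” that limits how many letter spaces are visible around the current fixation. The rough Python sketch below only illustrates that idea with a fixed fixation point; the study’s actual display software and parameters are not described here, so every detail in the sketch is an assumption.

def moving_window(line, fixation, span, mask="x"):
    # Show only characters within `span` letter spaces of the fixation index;
    # everything else is masked. Spaces are left visible so word boundaries
    # survive, one common variant of the technique.
    return "".join(
        ch if (abs(i - fixation) <= span or ch == " ") else mask
        for i, ch in enumerate(line)
    )

text = "skilled deaf readers were given windows of different sizes"
for span in (4, 10, 18):
    print(f"span {span:2d}: {moving_window(text, fixation=25, span=span)}")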

Filed under brain language auditory perception deafness psychology neuroscience science

46 notes

How do language families evolve over many thousands of years? How stable over time are structural features of languages? Dan Dediu and Stephen Levinson from the Max Planck Institute for Psycholinguistics in Nijmegen introduced a new method using Bayesian phylogenetic approaches to analyse the evolution of structural features in more than 50 language families. Their paper ‘Abstract profiles of structural stability point to universal tendencies, family-specific factors, and ancient connections between languages’ was published online on September 20, 2012 in PLoS ONE.
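
As a loose caricature of what “stability of a structural feature” means (this is an illustration of the underlying intuition, not the authors’ Bayesian phylogenetic method), one can simulate a binary feature passing from an ancestral language to a daughter language with some per-branch probability of change; stable features are those that tend to be retained.

import random

random.seed(2)

def evolve(value, change_prob):
    # A binary structural feature either survives a branch unchanged or flips.
    return (not value) if random.random() < change_prob else value

def retention_rate(change_prob, trials=10000):
    kept = 0
    for _ in range(trials):
        ancestor = random.random() < 0.5          # ancestral value of the feature
        daughter = evolve(ancestor, change_prob)  # value after one branch of change
        kept += (daughter == ancestor)
    return kept / trials

for p in (0.05, 0.25, 0.50):
    print(f"per-branch change probability {p:.2f} -> retention {retention_rate(p):.2f}")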

Filed under brain language evolution linguistics phylogeny neuroscience psychology science

26 notes

Dyslexia Impairs Speech Recognition but Can Spare Phonological Competence

Dyslexia is associated with numerous deficits to speech processing. Accordingly, a large literature asserts that dyslexics manifest a phonological deficit. Few studies, however, have assessed the phonological grammar of dyslexics, and none has distinguished a phonological deficit from a phonetic impairment. Here, we show that these two sources can be dissociated. Three experiments demonstrate that a group of adult dyslexics studied here is impaired in phonetic discrimination (e.g., ba vs. pa), and their deficit compromises even the basic ability to identify acoustic stimuli as human speech. Remarkably, the ability of these individuals to generalize grammatical phonological rules is intact. Like typical readers, these Hebrew-speaking dyslexics identified ill-formed AAB stems (e.g., titug) as less wordlike than well-formed ABB controls (e.g., gitut), and both groups automatically extended this rule to nonspeech stimuli, irrespective of reading ability. The contrast between the phonetic and phonological capacities of these individuals demonstrates that the algebraic engine that generates phonological patterns is distinct from the phonetic interface that implements them. While dyslexia compromises the phonetic system, certain core aspects of the phonological grammar can be spared.
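
The AAB/ABB contrast in the abstract refers to where a triconsonantal root repeats its consonants. The tiny Python sketch below only makes that structural distinction concrete; the consonant strings are simplified stand-ins, not the actual Hebrew stimulus set.

def root_pattern(c1, c2, c3):
    # Classify a three-consonant root by the position of its identical consonants.
    if c1 == c2 != c3:
        return "AAB"   # e.g. t-t-g, as in the ill-formed stem "titug"
    if c1 != c2 == c3:
        return "ABB"   # e.g. g-t-t, as in the well-formed stem "gitut"
    return "other"

print(root_pattern("t", "t", "g"))  # AAB
print(root_pattern("g", "t", "t"))  # ABB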

Filed under brain dyslexia language speech speech processing neuroscience psychology science

111 notes

Theory: Music underlies language acquisition

Contrary to the prevailing theories that music and language are cognitively separate or that music is a byproduct of language, theorists at Rice University’s Shepherd School of Music and the University of Maryland, College Park (UMCP) advocate that music underlies the ability to acquire language.

“Spoken language is a special type of music,” said Anthony Brandt, co-author of a theory paper published online this month in the journal Frontiers in Cognitive Auditory Neuroscience. “Language is typically viewed as fundamental to human intelligence, and music is often treated as being dependent on or derived from language. But from a developmental perspective, we argue that music comes first and language arises from music.”

Brandt, associate professor of composition and theory at the Shepherd School, co-authored the paper with Shepherd School graduate student Molly Gebrian and L. Robert Slevc, UMCP assistant professor of psychology and director of the Language and Music Cognition Lab.

“Infants listen first to sounds of language and only later to its meaning,” Brandt said. He noted that newborns’ extensive abilities in different aspects of speech perception depend on the discrimination of the sounds of language – “the most musical aspects of speech.”

The paper cites various studies that show what the newborn brain is capable of, such as the ability to distinguish the phonemes, or basic distinctive units of speech sound, and such attributes as pitch, rhythm and timbre.

The authors define music as “creative play with sound.” They said the term “music” implies an attention to the acoustic features of sound irrespective of any referential function. As adults, people focus primarily on the meaning of speech. But babies begin by hearing language as “an intentional and often repetitive vocal performance,” Brandt said. “They listen to it not only for its emotional content but also for its rhythmic and phonemic patterns and consistencies. The meaning of words comes later.”

Brandt and his co-authors challenge the prevailing view that music cognition matures more slowly than language cognition and is more difficult. “We show that music and language develop along similar time lines,” he said.

Infants initially don’t distinguish well between their native language and all the languages of the world, Brandt said. Throughout the first year of life, they gradually home in on their native language. Similarly, infants initially don’t distinguish well between their native musical traditions and those of other cultures; they start to home in on their own musical culture at the same time that they home in on their native language, he said.

The paper explores many connections between listening to speech and music. For example, recognizing the sound of different consonants requires rapid processing in the temporal lobe of the brain. Similarly, recognizing the timbre of different instruments requires temporal processing at the same speed — a feature of musical hearing that has often been overlooked, Brandt said.

“You can’t distinguish between a piano and a trumpet if you can’t process what you’re hearing at the same speed that you listen for the difference between ‘ba’ and ‘da,’” he said. “In this and many other ways, listening to music and speech overlap.” The authors argue that from a musical perspective, speech is a concert of phonemes and syllables.

“While music and language may be cognitively and neurally distinct in adults, we suggest that language is simply a subset of music from a child’s view,” Brandt said. “We conclude that music merits a central place in our understanding of human development.”

Brandt said more research on this topic might lead to a better understanding of why music therapy is helpful for people with reading and speech disorders. People with dyslexia often have problems with the performance of musical rhythm. “A lot of people with language deficits also have musical deficits,” Brandt said.

More research could also shed light on rehabilitation for people who have suffered a stroke. “Music helps them reacquire language, because that may be how they acquired language in the first place,” Brandt said.

(Source: news.rice.edu)

Filed under brain music language acquisition language neuroscience psychology science

43 notes

Babies’ ability to detect complex rules in language outshines that of adults

New research examining auditory mechanisms of language learning in babies has revealed that infants as young as three months of age are able to automatically detect and learn complex dependencies between syllables in spoken language. By contrast, adults only recognized the same dependencies when asked to actively search for them. The study by scientists at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig also highlights the important role of basic pitch discrimination abilities for early language development.
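
Dependencies of this kind are often studied with artificial syllable strings in which an early syllable predicts a later one across intervening material. The short Python sketch below generates made-up items of that sort purely for illustration; the syllables, the pairings and the rule are invented, not the study’s stimuli.

import random

random.seed(3)

# Invented non-adjacent rule: the first syllable determines the last one,
# regardless of which syllable appears in between.
PAIRS = {"le": "bu", "wi": "to"}
MIDDLE = ["ko", "da", "mi", "pe"]

def make_item(follow_rule=True):
    first = random.choice(sorted(PAIRS))
    if follow_rule:
        last = PAIRS[first]
    else:
        last = next(v for k, v in PAIRS.items() if k != first)  # mismatched ending
    return first + random.choice(MIDDLE) + last

print("rule-following:", [make_item() for _ in range(3)])
print("rule-violating:", [make_item(follow_rule=False) for _ in range(3)])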

Filed under brain language language development linguistics neuroscience psychology learning science

162 notes

Languages are extremely diverse, but they are not arbitrary. Behind the bewildering, contradictory ways in which different tongues conceptualise the world, we can sometimes discern order. Linguists have traditionally assumed that this reflects the hardwired linguistic aptitude of the human brain. Yet recent scientific studies propose that language “universals” aren’t simply prescribed by genes but that they arise from the interaction between the biology of human perception and the bustle, exchange and negotiation of human culture.

Language has a logical job to do—to convey information—and yet it is riddled with irrationality: irregular verbs, random genders, silent vowels, ambiguous homophones. You’d think languages would evolve towards an optimal state of concision, but instead they accumulate quirks that hinder learning, not only for foreigners but also for native speakers.

Linguists have traditionally explained these peculiarities by appealing to the history of the people who speak each language. That’s often fascinating, but it does not yield general principles about how languages have developed—or how they will change in future. As they evolve, what guides their form?


Filed under neuroscience psychology brain language linguistics language development science

24 notes


Baby songbirds learn to sing by imitation, just as human babies do. So researchers at Harvard and Utrecht University, in the Netherlands, have been studying the brains of zebra finches—red-beaked, white-breasted songbirds—for clues to how young birds and human infants learn vocalization on a neuronal level.

While a baby bird mimicking the chirps of his “tutor” may seem far removed from human learning, the researchers at the two universities found that the songs of the birds and human language are both processed in similar areas on the left sides of the two very different brains. The discovery was published last month in the Proceedings of the National Academy of Sciences.

Filed under birds evolution neuroscience science lateralization vocalization language brain birdsong
