Posts tagged language production
Roots of language in human and bird biology
The genes activated for human speech are similar to the ones used by singing songbirds, new experiments suggest.
These results, which are not yet published, show that gene products expressed for speech in the cortical and basal ganglia regions of the human brain correspond to similar molecules in the vocal communication areas of the brains of zebra finches and budgerigars. But these molecules aren't found in the brains of doves and quails — vocal birds that do not learn their sounds.
"The results suggest that similar behavior and neural connectivity for a convergent complex trait like speech and song are associated with many similar genetic changes," said Duke neurobiologist Erich Jarvis, a Howard Hughes Medical Institute investigator.
Jarvis studies the molecular pathways that songbirds use while learning to sing. In past experiments, he and his collaborators found that songbirds have a direct connection between the front part of the brain and the brainstem neurons that control the muscles used for song. They have seen a more primitive form of this circuit, related to ultrasonic mating calls, in mice. Humans also have this motor learning pathway for speech.
From this and other work, Jarvis developed the motor theory for the origin of vocal learning, which describes how ancient brain systems used to control movement and motor learning evolved into brain systems for learning and producing song and spoken language.
Gustavo Arriaga, Eric P. Zhou, Erich D. Jarvis. Of Mice, Birds, and Men: The Mouse Ultrasonic Song System Has Some Features Similar to Humans and Song-Learning Birds. PLoS ONE
Gustavo Arriaga, Erich D. Jarvis. Mouse vocal communication system: Are ultrasounds learned or innate? Brain and Language
Filed under language language production speech vocalizations songbirds vocal learning neuroscience science
Linguistics as a Window to Understanding the Brain
How did humans acquire language? In this lecture, best-selling author Steven Pinker introduces you to linguistics, the evolution of spoken language, and the debate over the existence of an innate universal grammar.
He also explores why language is such a fundamental part of social relationships, human biology, and human evolution.
Finally, Pinker touches on the wide variety of applications for linguistics, from improving how we teach reading and writing to how we interpret law, politics, and literature.
Filed under Steven Pinker linguistics language language acquisition language production communication evolution psychology neuroscience science
This is Your Brain on Freestyle Rap: NIDCD Study Reveals Characteristic Brain Patterns of Lyrical Improvisation
Researchers in the voice, speech, and language branch of the National Institute on Deafness and Other Communication Disorders (NIDCD) at the National Institutes of Health (NIH) have used functional magnetic resonance imaging to study the brain activity of rappers when they are “freestyling”—spontaneously improvising lyrics in real time. The findings, published online in the November 15 issue of the journal Scientific Reports, reveal that this form of vocal improvisation is associated with a unique functional reallocation of brain activity in the prefrontal cortex and propose a novel neural network that appears to be intimately involved in improvisatory and creative endeavors.
The researchers, led by Siyuan Liu, Ph.D., scanned the brains of 12 freestyle rap artists (who had at least 5 years of rapping experience) while they performed two tasks using an identical 8-bar musical track. For the first task, they improvised rhyming lyrics and rhythmic patterns guided only by the beat. In the second task, they performed a well-rehearsed set of lyrics.
During freestyle rapping, the researchers observed increases in brain activity in the medial prefrontal cortex, a brain region responsible for motivation of thought and action, but decreased activity in dorsolateral prefrontal regions that normally play a supervisory or monitoring role. Like an experienced parent who knows when to lay down the law and when to look the other way, these shifts in brain function may facilitate the free expression of thoughts and words without the usual neural constraints.
Freestyling also increased brain activity in the perisylvian system (involved in language production), the amygdala (an area of the brain linked to emotion), and cingulate motor areas, suggesting that improvisation engages a brain network that links motivation, language, mood, and action. Further studies of this network in other art forms that involve the innovative use of language, such as poetry and storytelling, could offer more insights into the initial, improvisatory phase of the creative process.
Filed under brain brain activity rhythmic patterns language production MRI neuroscience psychology science
23 July 2012 by Will Heaven
Watch where you look – it can be used to predict what you’ll say. A new study shows that it is possible to guess what sentences people will use to describe a scene by tracking their eye movements.
Moreno Coco and Frank Keller at the University of Edinburgh, UK, presented 24 volunteers with a series of photo-realistic images depicting indoor scenes such as a hotel reception. They then tracked the sequence of objects that each volunteer looked at after being asked to describe what they saw.
Other than being prompted with a keyword, such as “man” or “suitcase”, participants were free to describe the scene however they liked. Some typical sentences included “the man is standing in the reception of a hotel” or “the suitcase is on the floor”.
The order in which a participant’s gaze settled on objects in each scene tended to mirror the order of nouns in the sentence used to describe it. “We were surprised there was such a close correlation,” says Keller. Given that multiple cognitive processes are involved in sentence formation, Coco says “it is remarkable to find evidence of similarity between speech and visual attention”.
Word prediction
The team used the discovery to see if they could predict what sentences would be used to describe a scene based on eye movements alone. They developed an algorithm that used the gaze sequences recorded in the earlier experiment to pick the correct sentence out of 576 candidate descriptions.
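The study's actual algorithm is more sophisticated, but the core idea — that the order in which objects are fixated mirrors the order of nouns in the spoken sentence — can be sketched with a simple longest-common-subsequence score. Everything below (function names, the toy gaze data, the candidate sentences) is a hypothetical illustration, not the authors' implementation:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two sequences."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def predict_sentence(gaze_sequence, candidates):
    """Pick the candidate sentence whose noun order best matches the gaze order.

    gaze_sequence: objects fixated, in order, e.g. ["man", "reception", "hotel"]
    candidates: dict mapping each sentence to the ordered list of its nouns
    """
    return max(candidates, key=lambda s: lcs_length(gaze_sequence, candidates[s]))

gaze = ["man", "reception", "hotel"]
candidates = {
    "the man is standing in the reception of a hotel": ["man", "reception", "hotel"],
    "the suitcase is on the floor": ["suitcase", "floor"],
}
print(predict_sentence(gaze, candidates))
# -> the man is standing in the reception of a hotel
```

With 576 candidate descriptions, as in the study, the same scoring loop would simply run over a larger candidate set; subsequence matching (rather than exact sequence equality) tolerates fixations on objects the speaker never mentions.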
Changsong Liu of Michigan State University’s Language and Interaction Research lab, in East Lansing, who was not involved in the study, suggests these results could motivate novel designs for human-machine interfaces that take advantage of visual cues to improve speech recognition software.
Gaze information is already used to help with disambiguation. For example, if a speech recognition system can tell that you are looking at a tree, it is less likely to guess that you just said “three”. Sentence prediction, perhaps in combination with augmented-reality headsets that track eye movement, is one possible application.
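The tree/“three” example amounts to reranking a recognizer's hypotheses with a visual prior. A minimal sketch of that idea, with invented scores and a hypothetical `rerank` helper (no real speech-recognition API is used here):

```python
def rerank(hypotheses, gazed_object, boost=0.2):
    """Return the best transcript after a gaze-based bonus.

    hypotheses: list of (transcript, acoustic_score) pairs from a recognizer
    gazed_object: the object the eye tracker says the user is fixating
    A transcript mentioning the fixated object gets a score bonus.
    """
    def score(item):
        text, acoustic = item
        bonus = boost if gazed_object in text.split() else 0.0
        return acoustic + bonus
    return max(hypotheses, key=score)[0]

# The recognizer slightly prefers "three", but the user is looking at a tree.
hypotheses = [("three", 0.55), ("tree", 0.45)]
print(rerank(hypotheses, gazed_object="tree"))
# -> tree
```

The design choice is deliberately conservative: gaze only nudges the acoustic scores rather than overriding them, so a strong acoustic match still wins when the visual cue is irrelevant.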
Coco and Keller are now looking into the role of coordinated visual and linguistic processes in conversations between two people. “People engaged in a dialogue use similar syntactic forms, expressions and eye-movements,” says Coco. One hypothesis is that such “coordinative mimicry” might be important for joint decision-making.
Source: NewScientist
Filed under science neuroscience brain psychology eye movements language production speech scene understanding