Posts tagged speech production

For the first time, neuroscientists were able to find out how different thoughts are reflected in neuronal activity during natural conversations. Johanna Derix, Olga Iljina and the interdisciplinary team of Dr. Tonio Ball from the Cluster of Excellence BrainLinks-BrainTools at the University of Freiburg and the Epilepsy Center of the University Medical Center Freiburg (Freiburg, Germany) report on the link between speech, thoughts and brain responses in a special issue of Frontiers in Human Neuroscience.
"Thoughts are difficult to investigate, as one cannot directly observe what a person is thinking. Language, however, reflects the underlying mental processes, so we can perform linguistic analyses of the subjects’ speech and use this information as a 'bridge' between the neuronal processes and the subject’s thoughts," explains neuroscientist Johanna Derix.
The novelty of the authors’ approach is that the participants were not instructed to think and talk about a given topic in an experimental setting. Instead, the researchers analysed everyday conversations and the underlying brain activity, which was recorded directly from the cortical surface. This study was possible owing to the help of epilepsy patients in whom recordings of neural activity had to be obtained over several days for the purpose of pre-neurosurgical diagnostics.
First, the borders between individual thoughts in continuous conversation had to be identified. Earlier psycholinguistic research indicates that a simple sentence is a suitable unit for containing a single thought, so the researchers opted for linguistic segmentation into simple sentences. The resulting “idea” units were then classified into categories, for example, whether or not a sentence expressed memory- or self-related content. Finally, the researchers analysed content-specific neural responses and observed clearly visible patterns of brain activity.
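The segment-then-label step can be sketched in a few lines of Python. The keyword cues below are hypothetical stand-ins for the study’s much richer linguistic annotation criteria, and the sentence splitter is deliberately naive:

```python
import re

# Hypothetical keyword cues -- stand-ins for the study's detailed
# linguistic annotation criteria.
MEMORY_CUES = {"remember", "remembered", "yesterday", "childhood"}
SELF_CUES = {"i", "me", "my", "myself"}

def segment(transcript: str) -> list[str]:
    """Naively split continuous speech into simple-sentence 'idea' units."""
    parts = re.split(r"[.!?]+", transcript)
    return [p.strip() for p in parts if p.strip()]

def label(unit: str) -> set[str]:
    """Assign coarse content categories to one idea unit."""
    words = set(unit.lower().split())
    tags = set()
    if words & MEMORY_CUES:
        tags.add("memory-related")
    if words & SELF_CUES:
        tags.add("self-related")
    return tags

for unit in segment("I remember that trip. The weather was awful!"):
    print(unit, label(unit))
```

In a real pipeline each labeled unit would then be aligned in time with the cortical recording, so that brain activity can be averaged per content category.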
Thus, the neuroscientists from Freiburg have demonstrated the feasibility of their innovative approach to investigate, via speech, how the human brain processes thoughts during real-life conditions.
In right-handed people it sits in the left side of the brain; left-handed people (usually) have it in the right side: the location of speech production has been known for quite some time. But it is not that simple, says psychologist Gesa Hartwigsen, Professor at Kiel University. In her current publication in the journal Proceedings of the National Academy of Sciences of the USA (PNAS), she investigates which areas of the brain really are in charge of speech, and how they interact. Her findings are intended to help patients who have speech production problems or aphasia following a stroke.
Comprehending & Speaking
Gesa Hartwigsen and her team started by analysing speech production. They had healthy right-handed participants listen to words, which they were then asked to repeat. “These were pseudowords such as 'beudo'. In German, they have no associated meaning. Therefore, when hearing and repeating these words, no brain areas linked to the meaning of what had been heard were activated,” said Hartwigsen.
The psychologist applies a combination of non-invasive methods (fMRI, functional magnetic resonance imaging, and TMS, transcranial magnetic stimulation) to deduce what happens in the brain during the test. “We thus showed that the left hemisphere, as expected, was activated during speech production, while the right hemisphere did not actively contribute to language function,” explains Hartwigsen. This is the normal functioning of a healthy brain. From these and other results, scientists had until now deduced that the right hemisphere does not contribute to speech production in the healthy system and is therefore suppressed.
Interfering & Measuring
In a second test, the Kiel University scientists simulated a dysfunction in the brain comparable to a stroke. A magnetic coil transmits a current pulse that interrupts the function of the area responsible for producing speech (Broca’s area) in the left hemisphere. This completely harmless method influences the speech production of the volunteers for about 30 to 45 minutes. “During this period, the ability to listen and repeat was tested again. While we observed suppressed activity in the left hemisphere during repetition, with some participants taking longer to repeat the pseudowords, we also found unexpected activity in the right hemisphere,” reports Hartwigsen.
The right hemisphere showed increased activity during pseudoword repetition. The more the activity in the right-hemisphere homologue of Broca’s area increased, the faster the volunteers solved their speech tasks. The right hemisphere also increased its facilitatory influence on the left hemisphere, an effect not observed prior to the TMS-induced lesion. “This reaction lends further support to the notion that the right-hemisphere area reacts to the dysfunction of the left hemisphere and tries to compensate for the lesion.” Does the right hemisphere play an active, supporting role in speech production? Until now, the common opinion was that it does not.
Result & Outlook
The findings of Gesa Hartwigsen and her team show an interaction of both hemispheres during speech repetition. When the left hemisphere is suppressed, for example by a stroke, the right hemisphere could actively facilitate speech production. “By stimulating the right hemisphere, it could be possible to support speech recovery,” speculates the scientist. Here, timing would be very important. “Right after a stroke, we could support the right hemisphere. But when the remaining areas of the left hemisphere are ready to do their work again, it might be more helpful if the right hemisphere was suppressed. During this phase, we could stimulate the left hemisphere instead. The correct timing can therefore be crucial for recovery of speech after a stroke.”
In collaboration with the Department of Neurology at Kiel University, a stroke specialist from Leipzig and doctoral students of Medicine and Psychology, Gesa Hartwigsen has started a follow-up study to the recent publication. “We would like to find out more about the collaboration of the hemispheres and the right timing in helping stroke patients to recover,” says Hartwigsen. Her field of research is fairly new within cognitive neuroscience. Nevertheless, she is positive that it will offer practical help in the form of concrete therapies within the next ten to fifteen years.
University of Tübingen neuroscientists show that monkeys can decide to call out or keep silent

“Should I say something or not?” Human beings are not alone in pondering this dilemma – animals also face decisions when they communicate by voice. University of Tübingen neurobiologists Dr. Steffen Hage and Professor Andreas Nieder have now demonstrated that nerve cells in the brain signal the targeted initiation of calls – forming the basis of voluntary vocal expression. Their results are published in “Nature Communications.”
When we speak, we use the sounds we make for a specific purpose – we intentionally say what we think, or consciously withhold information. Animals, however, usually make sounds according to what they feel at that moment. Even our closest relations among the primates make sounds as a reflex based on their mood. Now, Tübingen neuroscientists have shown that rhesus monkeys are able to call (or be silent) on command. They can instrumentalize the sounds they make in a targeted way, an important behavioral ability which we also use to put language to a purpose.
To find out how nerve cells in the brain initiate the production of controlled vocal sounds, the researchers taught rhesus monkeys to call out quickly when a spot appeared on a computer screen. While the monkeys solved these tasks, measurements taken in their prefrontal cortex revealed striking responses in the cells there. The nerve cells became active whenever a monkey saw the spot of light that was the instruction to call out. But if the monkey simply called out spontaneously, these nerve cells were not activated. The cells therefore did not signal just any vocalisation, only calls that the monkey actively decided to make.
The results published in “Nature Communications” provide valuable insights into the neurobiological foundations of vocalization. “We want to understand the physiological mechanisms in the brain which lead to the voluntary production of calls,” says Dr. Steffen Hage of the Institute for Neurobiology, “because it played a key role in the evolution of human ability to use speech.” The study offers important indicators of the function of part of the brain which in humans has developed into one of the central locations for controlling speech. “Disorders in this part of the human brain lead to severe speech disorders or even complete loss of speech in the patient,” Professor Andreas Nieder explains. The results – giving insights into how the production of sound is initiated – may help us better understand speech disorders.
(Source: uni-tuebingen.de)
Songbirds’ brains coordinate singing with intricate timing
As a bird sings, some neurons in its brain prepare to make the next sounds while others are synchronized with the current notes—a coordination of physical actions and brain activity that is needed to produce complex movements, new research at the University of Chicago shows.
In an article in the current issue of Nature, neuroscientist Daniel Margoliash and colleagues show, for the first time, how the brain is organized to govern skilled performance—a finding that may lead to new ways of understanding human speech production.
The new study shows that birds’ physical movements actually are made up of a multitude of smaller actions. “It is amazing that such small units of movements are encoded, and so precisely, at the level of the forebrain,” said Margoliash, a professor of organismal biology and anatomy and psychology at UChicago.
“This work provides new insight into how the physics of controlling vocal signals are represented in the brain to control vocalizations,” said Howard Nusbaum, a professor of psychology at UChicago and an expert on speech.
By decoding the neural representation of communication, Nusbaum explained, the research may shed light on speech problems such as stuttering or aphasia (a disorder following a stroke). And it offers an unusual window into how the brain and body carry out other kinds of complex movement, from throwing a ball to doing a backflip.
“A big question in muscle control is how the motor system organizes the dynamics of movement,” said Margoliash. “Movements like reaching or grasping are difficult to study because they entail many variables, such as the angles of the shoulder, elbow, wrist and fingers; the forces of many muscles; and how these change over time,” he said.
“With all this complexity, it has been difficult to determine which of the many variables that describe movements are represented in the brain, and which of those are used to control movements,” he said.
“It’s difficult to find a natural framework with which to analyze the activity of single neurons. The bird study provided us a perfect opportunity,” Margoliash said. Margoliash is a pioneer in the study of brain function in birds, with studies that include how learning occurs when a bird sleeps and recalls singing a song.
The great orchestral work of speech
What goes on inside our heads is similar to an orchestra. For Peter Hagoort, Director at the Max Planck Institute for Psycholinguistics, this image is a very apt one for explaining how speech arises in the human brain. “There are different orchestra members and different instruments, all playing in time with each other, and sounding perfect together.”
When we speak, we transform our thoughts into a linear sequence of sounds. When we understand language, exactly the opposite occurs: we deduce an interpretation from the speech sounds we hear. Closely connected regions of the brain, such as Broca’s area and Wernicke’s area, are involved in both processes, and these form the neurobiological basis of our capacity for language.
The 58-year-old scientist, who has had a strong interest in language and literature since his youth, has been searching for the neurobiological foundations of our communication since the 1990s. Using imaging processes, he observes the brain “in action” and tries to find out how this complex organ controls the way we speak and understand speech.
Hagoort is one of the first researchers to combine psychological theories with neuroscientific methods in his efforts to understand this complex interaction. Because this is not possible without the very latest technology, in 1999 Hagoort established the Nijmegen-based Donders Centre for Cognitive Neuroimaging, where an interdisciplinary team of researchers uses state-of-the-art technology, such as MRI and PET scanners, to find out how the brain succeeds in combining functions like memory, speech, perception, attention, emotion and consciousness.
The Dutch scientist is particularly fascinated by the temporal sequence of speech. He discovered, for example, that the brain begins by collecting grammatical information about a word before it compiles information about its sound. This first reliable real-time measurement of speech production in the brain provided researchers with a basis for observing speakers in the act of speaking. They were then able to obtain new insights about why the complex orchestral work of language is impaired, for example, after strokes and in the case of disorders like dyslexia and autism.
“Language is an essential component of human culture, which distinguishes us from other species,” says Hagoort. “Young children understand language before they even start to speak. They master complex grammatical structures before they can add 3 and 13. Our brain is tuned for language at a very early stage,” stresses Hagoort, referring to research findings. The exact composition of the orchestra in our heads and the nature of the score on which the process of speech is based are topics which Hagoort continues to research.
Training speech networks to treat aphasia
About 80,000 people develop aphasia each year in the United States alone. Nearly all of these individuals have difficulty speaking. For example, some patients (nonfluent aphasics) have trouble producing sounds clearly, making it frustrating for them to speak and difficult for them to be understood. Other patients (fluent aphasics) may select the wrong sound in a word or mix up the order of the sounds. In the latter case, “kitchen” can become “chicken.” Blumstein’s idea is to use guided speech to help people who have suffered stroke-related brain damage to rebuild their neural speech infrastructure.
Blumstein has been studying aphasia and the neural basis of language her whole career. She uses brain imaging, acoustic analysis, and other lab-based techniques to study how the brain maps sound to meaning and meaning to sound.
What Blumstein and other scientists believe is that the brain organizes words into networks, linked both by similarity of meaning and similarity of sound. To say “pear,” a speaker will also activate other competing words like “apple” (which competes in meaning) and “bear” (which competes in sound). Despite this competition, normal speakers are able to select the correct word.
In a study published in the Journal of Cognitive Neuroscience in 2010, for example, she and her co-authors used functional magnetic resonance imaging to track neural activation patterns in the brains of 18 healthy volunteers as they spoke English words that had similar sounding “competitors” (“cape” and “gape” differ subtly in the first consonant by voicing, i.e. the timing of the onset of vocal cord vibration). Volunteers also spoke words without similar sounding competitors (“cake” has no voiced competitor in English; gake is not a word). What the researchers found is that neural activation within a network of brain regions was modulated differently when subjects said words that had competitors versus words that did not.
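The idea of “sound competitors” can be illustrated with a minimal-pair search over a toy lexicon. This is only a sketch: letters stand in for phonemes (a real model would work on phonemic transcriptions), and it finds any one-segment neighbor rather than specifically the voicing contrasts the study manipulated:

```python
# Toy lexicon; letters stand in for phonemes, a deliberate simplification.
LEXICON = {"cape", "gape", "cake", "pear", "bear", "tot", "dot", "top"}

def sound_competitors(word: str, lexicon: set[str]) -> set[str]:
    """Words differing from `word` by exactly one segment (minimal pairs)."""
    neighbors = set()
    for i, ch in enumerate(word):
        for sub in "abcdefghijklmnopqrstuvwxyz":
            if sub != ch:
                candidate = word[:i] + sub + word[i + 1:]
                if candidate in lexicon:
                    neighbors.add(candidate)
    return neighbors

print(sorted(sound_competitors("cape", LEXICON)))  # ['cake', 'gape']
print(sorted(sound_competitors("tot", LEXICON)))   # ['dot', 'top']
```

Under this scheme “cape” has the voiced competitor “gape,” while “gake” never appears as a competitor of “cake” because it is not in the lexicon, mirroring the contrast used in the fMRI study.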
One way this competition-mediated difference is apparent in speech production is that words with competitors are produced differently from words that do not have competitors. For example, the voicing of the “t” in “tot” (with a voiced competitor ‘dot’) is produced with more voicing than the “t” in “top” (there is no ‘dop’ in English). Through acoustic analysis of the speech of people with aphasia, Blumstein has shown that this difference persists, suggesting that their word networks are still largely intact.
New Technique Helps Stroke Victims Communicate
Stroke victims affected with loss of speech caused by Broca’s aphasia have been shown to speak fluidly through the use of a process called “speech entrainment” developed by researchers at the University of South Carolina’s Arnold School of Public Health.
Aphasia, a severe communication problem caused by damage to the brain’s left hemisphere and characterized by halting speech, occurs in about one-third of people who have a stroke and affects personal and professional relationships. Using the speech entrainment technique, which involves mimicking another speaker, patients showed significant improvement in their ability to speak.
The results of the study are published in a recent issue of the neurology journal Brain.
"This is the first time that we have seen people with Broca’s aphasia speak in fluent sentences,” said Julius Fridriksson, the study’s lead researcher and a professor with the Department of Communication Sciences and Disorders at the Arnold School. “It is a small study that gives us an understanding of how the brain functions after a stroke, and it offers hope for thousands of people who suffer strokes each year."
In Fridriksson’s study, 13 patients completed three separate behavioral tasks that were used to understand the effects of speech entrainment on speech production. During the “speech entrainment–audio visual” portion of the study, participants attempted to mimic in real time a speaker whose mouth was visible on the 3.5-inch screen of an iPod Touch and whose speech was heard via headphones.
The “speech entrainment–audio only” condition involved real-time mimicking speech presented via headphones with the screen of the iPod blank. During a spontaneous speech condition, patients spoke about a given topic without external aid.
Each patient also completed a three-week training phase where they practiced speech every day with the aid of speech entrainment. Overall, the training resulted in improved spontaneous speech production, something that is relatively rare in this population. Ultimately the patients were able to produce a short script about their stroke to tell to other people.
Neuroimaging results from the patient subjects have also given Fridriksson and his research team a greater understanding of the mechanism involved in speech entrainment.
"Preliminary results suggest that training with speech entrainment improves speech production in Broca’s aphasia, providing a potential therapeutic method for a disorder that has been shown to be particularly resistant to treatment," Fridriksson said.
Mu-rhythm in the brain: The neural mechanism of speech as an audio-vocal perception-action system
Speech production is one of the most important components in human communication. However, the cortical mechanisms governing speech are not well understood because it is extremely challenging to measure the activity of the brain in action, that is, during speech production.
Now, Takeshi Tamura and Michiteru Kitazaki at Toyohashi University of Technology, Atsuko Gunji and her colleagues at National Institute of Mental Health, Hiroshige Takeichi at RIKEN, and Hiroaki Shigemasu at Kochi University of Technology have found modulation of mu-rhythms in the cortex related to speech production.
The researchers measured EEG (electroencephalogram) with pre-amplified electrodes during simulated vocalization, simulated vocalization with delayed auditory feedback, simulated vocalization under loud noise, and silent reading. The authors define the ‘mu-rhythm’ as a decrease of power in the 8–16 Hz EEG band during the task period.
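A task-related power decrease of this kind can be approximated by comparing 8–16 Hz band power between a baseline window and a task window. The sketch below is illustrative, not the authors’ pipeline: the Welch parameters and the synthetic signals are assumptions for demonstration only.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, fmin=8.0, fmax=16.0):
    """Mean power spectral density in [fmin, fmax] Hz via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), fs))
    mask = (freqs >= fmin) & (freqs <= fmax)
    return psd[mask].mean()

def mu_erd(baseline, task, fs):
    """Relative 8-16 Hz power change; negative values = mu suppression."""
    p_base = band_power(baseline, fs)
    p_task = band_power(task, fs)
    return (p_task - p_base) / p_base

# Synthetic demo: a 10 Hz oscillation attenuated during the 'task' window.
fs = 256
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
baseline = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
task = 0.3 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
print(mu_erd(baseline, task, fs))  # negative: power dropped during the task
```

With real recordings, the “task” window would span the simulated vocalization period and the baseline a quiet pre-task interval, computed per electrode over the sensorimotor area.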
The mu-rhythm at the sensory-motor cortical area was not only observed under all simulated vocalization conditions, but was also found to be boosted by the delayed feedback and attenuated by loud noise. Since these auditory interferences influence speech production, this finding supports the premise that audio-vocal monitoring systems play an important role in speech production. The motor-related mu-rhythm is thus a useful index for clarifying the neural mechanisms of speech production as an audio-vocal perception-action system.
In the future, a neurofeedback method based on monitoring mu-rhythm at the sensory-motor cortex may facilitate rehabilitation of speech-related deficits.