Neuroscience

Articles and news from the latest research reports.

Posts tagged speech

118 notes

Secrets of Human Speech Uncovered
A team of researchers at UC San Francisco has uncovered the neurological basis of speech motor control, the complex coordinated activity of tiny brain regions that controls our lips, jaw, tongue and larynx as we speak.
Described this week in the journal Nature, the work has potential implications for developing computer-brain interfaces for artificial speech communication and for the treatment of speech disorders. It also sheds light on an ability that is unique to humans among living creatures but poorly understood.
“Speaking is so fundamental to who we are as humans – nearly all of us learn to speak,” said senior author Edward Chang, MD, a neurosurgeon at the UCSF Epilepsy Center and a faculty member in the UCSF Center for Integrative Neuroscience. “But it’s probably the most complex motor activity we do.”
The complexity comes from the fact that spoken words require the coordinated efforts of numerous “articulators” in the vocal tract – the lips, tongue, jaw and larynx – but scientists have not understood how the movements of these distinct articulators are precisely coordinated in the brain.
To understand how speech articulation works, Chang and his colleagues recorded electrical activity directly from the brains of three people undergoing brain surgery at UCSF, and used this information to determine the spatial organization of the “speech sensorimotor cortex,” which controls the lips, tongue, jaw and larynx as a person speaks. This gave them a map of which parts of the brain control which parts of the vocal tract.
They then applied a sophisticated new method called “state-space” analysis to observe the complex spatial and temporal patterns of neural activity in the speech sensorimotor cortex that play out as someone speaks. This revealed a surprising sophistication in how the brain’s speech sensorimotor cortex works.
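For readers curious what a "state-space" analysis looks like in practice, here is a hypothetical sketch (not the authors' actual pipeline, and all sizes and signals are invented): activity from many electrodes is reduced to a few latent dimensions, so the moment-to-moment trajectory of cortical activity during a syllable can be traced as a path through a low-dimensional space.

```python
import numpy as np

# Hypothetical illustration of state-space analysis: recordings from many
# electrodes are projected onto a few principal components, giving a
# low-dimensional trajectory of neural activity over time.
rng = np.random.default_rng(0)

n_electrodes, n_timepoints = 64, 200          # invented sizes
t = np.linspace(0, 2 * np.pi, n_timepoints)
latent = np.stack([np.sin(t), np.cos(t)])     # two underlying "articulatory" rhythms
mixing = rng.normal(size=(n_electrodes, 2))   # each electrode mixes both rhythms
recordings = mixing @ latent + 0.1 * rng.normal(size=(n_electrodes, n_timepoints))

# PCA via SVD: center each electrode, then project onto the top 2 components.
centered = recordings - recordings.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
trajectory = U[:, :2].T @ centered            # shape (2, n_timepoints)

explained = S[:2] ** 2 / np.sum(S ** 2)
print(f"variance explained by 2 components: {explained.sum():.2f}")
```

In this toy example nearly all the variance collapses onto two dimensions; the point is only to show the kind of operation involved, in which a cyclical structure hidden across dozens of electrodes becomes visible as a closed loop in the reduced space.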
They found that this cortical area has a hierarchical and cyclical structure that exerts a split-second, symphony-like control over the tongue, jaw, larynx and lips.
“These properties may reflect cortical strategies to greatly simplify the complex coordination of articulators in fluent speech,” said Kristofer Bouchard, PhD, a postdoctoral fellow in the Chang lab who was the first author on the paper.
In the same way that a symphony relies upon all the players to coordinate their plucks, beats or blows to make music, speaking demands well-timed action from various brain regions within the speech sensorimotor cortex.
Brain Mapping in Epilepsy Surgery
The patients involved in the study were all undergoing surgery at UCSF for severe, untreatable epilepsy. Brain surgery is a powerful way to halt epilepsy in its tracks, potentially stopping seizures overnight, and its success is directly related to the accuracy with which a medical team can map the brain, identifying the exact pieces of tissue responsible for an individual’s seizures and removing them.
The UCSF Comprehensive Epilepsy Center is a leader in the use of advanced intracranial monitoring to map out elusive seizure-causing brain regions. The mapping is done by surgically implanting an electrode array under the skull on the brain’s outer surface or cortex and recording the brain’s activity in order to pinpoint the parts of the brain responsible for disabling seizures. In a second surgery a few weeks later, the electrodes are removed and the unhealthy brain tissue that causes the seizures is removed.
This setting also offers a rare opportunity to ask basic questions about how the human brain works, such as how it controls speaking. The neurological basis of speech motor control had remained unknown until now because scientists cannot study speech mechanisms in animals and because non-invasive imaging methods cannot resolve the very rapid time course of articulator movements, which change in hundredths of a second.
Surgical brain mapping, however, records neural activity directly and far faster than noninvasive methods, showing changes in electrical activity on the order of a few milliseconds.
Prior to this work, most of what scientists knew about this brain region was based on studies from the 1940s, in which electrical stimulation of single spots on the brain caused a twitch in muscles of the face or throat. Such focal stimulation, however, could never evoke a meaningful speech sound.
Chang and colleagues took an entirely different approach, studying brain activity during natural speech using the implanted electrode arrays. The patients read from a list of English syllables – like bah, dee, goo. The researchers recorded the electrical activity within their speech sensorimotor cortex and showed how distinct brain patterning accounts for different vowels and consonants in our speech.
“Even though we used English, we found the key patterns observed were ones that linguists have observed in languages around the world – perhaps suggesting universal principles for speaking across all cultures,” said Chang.

Filed under vocal tract speech speech articulation sensorimotor cortex neuroscience science

102 notes

Roots of language in human and bird biology
The genes activated for human speech are similar to the ones used by singing songbirds, new experiments suggest.
These results, which are not yet published, show that gene products produced for speech in the cortical and basal ganglia regions of the human brain correspond to similar molecules in the vocal communication areas of the brains of zebra finches and budgerigars. But these molecules aren’t found in the brains of doves and quails — vocal birds that do not learn their sounds.
"The results suggest that similar behavior and neural connectivity for a convergent complex trait like speech and song are associated with many similar genetic changes," said Duke neurobiologist Erich Jarvis, a Howard Hughes Medical Institute investigator.
Jarvis studies the molecular pathways that songbirds use while learning to sing. In past experiments, he and his collaborators found that songbirds have a connection between the front part of the brain and the brainstem nerves that control the muscles used to produce song. They’ve seen this circuit in a more primitive form related to ultrasonic mating calls in mice. Humans also have this motor learning pathway for speech.
From this and other work, Jarvis developed the motor theory for the origin of vocal learning, which describes how ancient brain systems used to control movement and motor learning evolved into brain systems for learning and producing song and spoken language.
Gustavo Arriaga, Eric P. Zhou, Erich D. Jarvis. Of Mice, Birds, and Men: The Mouse Ultrasonic Song System Has Some Features Similar to Humans and Song-Learning Birds. PLoS ONE
Gustavo Arriaga, Erich D. Jarvis. Mouse vocal communication system: Are ultrasounds learned or innate? Brain and Language
(Image: iStock)

Filed under language language production speech vocalizations songbirds vocal learning neuroscience science

76 notes

Banded mongooses structure monosyllabic sounds in a similar way to humans
Animals are more eloquent than previously assumed. Even the monosyllabic call of the banded mongoose is structured and thus comparable with the vowel and consonant system of human speech. Behavioral biologists from the University of Zurich have thus become the first to demonstrate that animals communicate with even smaller sound units than syllables.
When humans speak, they structure individual syllables with the aid of vowels and consonants. Due to their anatomy, animals can only produce a limited number of distinguishable sounds and calls. Complex animal sound expressions such as whale and bird songs are formed because smaller sound units – so-called “syllables” or “phonocodes” – are repeatedly combined into new arrangements. However, it was previously assumed that monosyllabic sound expressions such as contact or alarm calls do not have any combinational structures. Behavioral biologist Marta Manser and her doctoral student David Jansen from the University of Zurich have now proved that the monosyllabic calls of banded mongooses are structured and contain different information. They thus demonstrate for the first time that animals also have a sound expression structure that bears a certain similarity to the vowel and consonant system of human speech.
David A.W.A.M. Jansen, Michael A. Cant, and Marta B. Manser. Segmental concatenation of individual signatures and context cues in banded mongoose (Mungos mungo) close calls. BMC Biology

Filed under banded mongoose language speech animal communication science

222 notes

Pronunciation of ‘s’ sounds impacts perception of gender
A person’s style of speech — not just the pitch of his or her voice — may help determine whether the listener perceives the speaker to be male or female, according to a University of Colorado Boulder researcher who studied transgender people transitioning from female to male.
The way people pronounce their “s” sounds and the amount of resonance they use when speaking contribute to the perception of gender, according to Lal Zimman, whose findings are based on research he completed while earning his doctoral degree from CU-Boulder’s linguistics department.
Zimman presented his research on Saturday, January 5th at the 2013 annual meeting of the Linguistic Society of America in Boston.
“In the past, gender differences in the voice have been understood, primarily, as a biological difference,” Zimman said. “I really wanted to look at the potential for other factors, other than how testosterone lowers the voice, to affect how a person’s voice is perceived.”
As part of the process of transitioning from female to male, participants in Zimman’s study were treated with the hormone testosterone, which causes a number of physical changes including the lowering of a person’s voice. Zimman was interested in whether the style of a person’s speech had any impact on how low a voice needed to drop before it was perceived as male.
What he found was that a voice could have a higher pitch and still be perceived as male if the speaker pronounced “s” sounds in a lower frequency, which is achieved by moving the tongue farther away from the teeth.
“A high-frequency ‘s’ has long been stereotypically associated with women’s speech, as well as gay men’s speech, yet there is no biological correlate to this association,” said CU-Boulder linguistics and anthropology Associate Professor Kira Hall, who served as Zimman’s doctoral adviser. “The project illustrates the socio-biological complexity of pitch: the designation of a voice as more masculine or more feminine is importantly influenced by other ideologically charged speech traits that are socially, not biologically, driven.”
Vocal resonance also affected the perception of gender in Zimman’s study. A deeper resonance — which can be thought of as a voice that seems to be emanating from the chest instead of from the head — is the result of both biology and practice. Resonance is lower for people whose larynx is deeper in their throats, but people learn to manipulate the position of their larynx when they’re young, with male children pulling their larynxes down a little bit and female children pushing them up, Zimman said.
For his study, Zimman recorded the voices of 15 transgender men, all of whom live in the San Francisco Bay area. To determine the frequency of the “s” sounds each participant made, Zimman used software developed by fellow linguists. Then, to see how the “s” sounds affected perception, Zimman digitally manipulated the recording of each participant’s voice, sliding the pitch from higher to lower, and asked a group of 10 listeners to identify the gender of the speaker. Using the recordings, Zimman was able to pinpoint how low each individual’s voice had to drop before the majority of the group perceived the speaker to be male.
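The thresholding step described above can be sketched in a few lines of code. This is a hypothetical illustration only: the pitch steps, vote counts, and function are invented for the example, not taken from Zimman's study.

```python
# Hypothetical sketch: for one speaker, listeners judge pitch-shifted versions
# of a recording, and we find the highest tested pitch at which the majority
# still says "male". All numbers below are made up for illustration.
pitches_hz = [165, 150, 135, 120, 105]                   # manipulated pitch steps
votes_male = {165: 2, 150: 4, 135: 6, 120: 9, 105: 10}   # "male" votes out of 10

def male_threshold(pitches, votes, n_listeners=10):
    """Return the highest tested pitch at which a majority judged the voice male."""
    for p in sorted(pitches, reverse=True):  # scan from highest pitch down
        if votes[p] > n_listeners / 2:
            return p
    return None                              # majority never judged it male

print(male_threshold(pitches_hz, votes_male))  # -> 135 for this made-up data
```

Run per speaker, a procedure like this yields the individual "drop point" the article describes, which can then be compared against that speaker's measured "s" frequency.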

Filed under human voice perception of gender pitch speech linguistics resonance vocal resonance science

68 notes



New Genetic Disorder of Balance and Cognition Discovered
The family of disorders known as ataxia can impair speech, balance and coordination, and have varying levels of severity. Scientists from the Universities of Oxford and Edinburgh have identified a new member of this group of conditions which is connected to ‘Lincoln ataxia’, so called because it was first found in the relatives of US President Abraham Lincoln. The results are published in the journal PLOS Genetics.
Lincoln ataxia affects the cerebellum, a crucial part of the brain controlling movement and balance. It is caused by an alteration in the gene for ‘beta-III spectrin’, a protein found in the cerebellum. Each person has two copies of a gene, and in Lincoln ataxia there is an alteration in only one of the two copies. Unexpectedly, the British scientists have found cases of alterations in both copies of the gene, causing a novel disorder called ‘SPARCA1’ which is associated with a severe childhood ataxia and cognitive impairment.
This is the first report of any spectrin-related disorder where both copies of the gene are faulty and has given important insights into both Lincoln ataxia and SPARCA1.
The work was done using whole genome sequencing, a relatively new technology which allows all of a person’s genetic information to be analysed. In addition to the sequencing work, the scientists characterized the condition using mice lacking beta-III spectrin. This analysis, combined with previous work, links the protein defect to changes in nerve-cell shape in the brain areas associated with cognition and coordinated movements. The work shows that loss of normal beta-III spectrin function underlies both SPARCA1 and Lincoln ataxia, but a greater loss of beta-III spectrin is required before cognition problems arise.

Filed under speech speech impairment ataxia Lincoln ataxia balance neuroscience science

291 notes

An elephant that speaks Korean
An Asian elephant named Koshik can imitate human speech, speaking words in Korean that can be readily understood by those who know the language. The elephant accomplishes this in a most unusual way: he vocalizes with his trunk in his mouth.
The elephant’s vocabulary consists of exactly five words, researchers report on November 1 in Current Biology, a Cell Press publication: “annyong” (“hello”), “anja” (“sit down”), “aniya” (“no”), “nuo” (“lie down”), and “choah” (“good”). Ultimately, Koshik’s language skills may provide important insights into the biology and evolution of complex vocal learning, an ability that is critical for human speech and music, the researchers say.
"Human speech basically has two important aspects, pitch and timbre," says Angela Stoeger of the University of Vienna. "Intriguingly, the elephant Koshik is capable of matching both pitch and timbre patterns: he accurately imitates human formants as well as the voice pitch of his trainers. This is remarkable considering the huge size, the long vocal tract, and other anatomical differences between an elephant and a human."
Read more

Filed under animals language elephants vocalization vocal learning speech neuroscience psychology science

38 notes

New Treatments May Help Restore Speech Lost to Aphasia

Most people know the frustration of having a word on the “tip of the tongue” that simply won’t come to mind. But that passing nuisance can be an everyday occurrence for someone with aphasia, a communication disorder caused by a stroke or other brain damage that impairs the ability to process language.

About 1 million Americans — roughly one in every 250 — are affected by aphasia, which can also impact reading and writing skills. But how they acquire the problem and how long they’ll endure it differ from person to person, explained Ellayne Ganzfried, a speech-language pathologist and executive director of the National Aphasia Association.

"No two people with aphasia are alike because everyone’s brain responds to the injury in a different way," Ganzfried said. "About half of people who have aphasia recover quickly, within the first few days. If the symptoms of aphasia last longer than two or three months, a complete recovery is unlikely … [though] some people continue to improve over a period of years and even decades."

Strokes are the most common cause, followed by head injuries, tumors, migraines or other neurological issues. Depending on the damage to the brain regions controlling language, which are typically in the left hemisphere, the resulting aphasia can be broken into four broad categories:

  • Difficulty expressing thoughts through speech or writing
  • Difficulty understanding spoken or written language
  • Difficulty using the correct names for objects, people, places or events
  • Loss of almost all language function, with no ability to speak or understand speech.

"Processing language requires the collaboration of lots of different parts or systems of the brain," explained Karen Riedel, director of speech-language pathology at the Rusk Institute of Rehabilitation Medicine at NYU Langone Medical Center in New York City. "The whole brain ‘talks’ — the whole brain has something to do with the use of language."

Because of this, a variety of therapies are used to help people regain as much speech and language as possible. But regardless of the injury, people with aphasia have the best chances for recovery when language therapy begins immediately, Riedel said.

Because aphasia is so variable, a therapy that helps one person might not help another, she noted. Tried-and-true techniques include melodic intonation therapy, which uses melody and rhythm to help improve the ability to retrieve words, and constraint-induced therapy, which forces people to use speech over other communication methods.

But technology, Riedel said, has introduced new language-improvement techniques into the mix over the last few years that are both exciting and fun. Several apps available for iPhone or iPad involve synthetic speech that helps engage those with aphasia in yet another realm of communication.

"Our patients have much more access to different kinds of programs that are computer-based," she said. "There’s always something new around the corner."

What remains a constant concern, however, is the misunderstanding many people have of those with language difficulties and how to treat them, Ganzfried and Riedel agreed.

"Many people with aphasia will become socially isolated because of their communication difficulties, which can lead to depression," Ganzfried said. "There are also many misconceptions about aphasia, including that the person is mentally unstable or under the influence of drugs or alcohol. It’s also extremely frustrating. Imagine knowing what you want to say in your head but you can’t get the words out."

(Source: consumer.healthday.com)

Filed under brain language disorders speech aphasia neuroscience psychology treatment science

26 notes

Dyslexia Impairs Speech Recognition but Can Spare Phonological Competence
Dyslexia is associated with numerous deficits to speech processing. Accordingly, a large literature asserts that dyslexics manifest a phonological deficit. Few studies, however, have assessed the phonological grammar of dyslexics, and none has distinguished a phonological deficit from a phonetic impairment. Here, we show that these two sources can be dissociated. Three experiments demonstrate that a group of adult dyslexics studied here is impaired in phonetic discrimination (e.g., ba vs. pa), and their deficit compromises even the basic ability to identify acoustic stimuli as human speech. Remarkably, the ability of these individuals to generalize grammatical phonological rules is intact. Like typical readers, these Hebrew-speaking dyslexics identified ill-formed AAB stems (e.g., titug) as less wordlike than well-formed ABB controls (e.g., gitut), and both groups automatically extended this rule to nonspeech stimuli, irrespective of reading ability. The contrast between the phonetic and phonological capacities of these individuals demonstrates that the algebraic engine that generates phonological patterns is distinct from the phonetic interface that implements them. While dyslexia compromises the phonetic system, certain core aspects of the phonological grammar can be spared.
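
The AAB/ABB contrast in the abstract can be made concrete with a toy pattern check. This is a minimal sketch, not the authors’ actual stimulus coding: the hypothetical `root_pattern` helper assumes a stem has already been reduced to its three root consonants and simply labels the reduplication pattern.

```python
def root_pattern(consonants):
    """Classify a three-consonant root as AAB (first two identical),
    ABB (last two identical), or neither."""
    c1, c2, c3 = consonants
    if c1 == c2 and c2 != c3:
        return "AAB"   # e.g. t-t-g, as in "titug" -- ill-formed in Hebrew
    if c2 == c3 and c1 != c2:
        return "ABB"   # e.g. g-t-t, as in "gitut" -- well-formed
    return "other"

# The two example stems from the abstract, reduced to root consonants:
print(root_pattern(["t", "t", "g"]))  # AAB
print(root_pattern(["g", "t", "t"]))  # ABB
```

The point of the finding is that even dyslexics with impaired phonetic discrimination still apply a rule like this one productively, to novel words and even to nonspeech sounds.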

Filed under brain dyslexia language speech speech processing neuroscience psychology science

47 notes


“Doctor” or “Darling”: The Subtle Differences of Speech

Human speech comes in countless varieties: When people talk to close friends or partners, they talk differently than when they address a physician. These differences in speech are quite subtle and hard to pinpoint. In a recent special issue of the journal Frontiers in Human Neuroscience, Johanna Derix, Dr. Tonio Ball, and their colleagues from the Bernstein Center and the University Medical Center in Freiburg report that they were able to tell from brain signals who a person was talking to. This discovery could contribute to the further development of speech synthesizers for patients with severe paralysis.

In contrast to the experimental research common in human neuroscience, the scientists studied natural, non-experimental behavior. Patients who, for medical reasons, had electrodes implanted underneath their skull allowed their brain activity to be recorded during daily life in the hospital. The Freiburg researchers compared data recorded during natural conversations that the patients had with their physicians and their life partners. They found pronounced differences in the anterior temporal lobe, a brain area well known for its significance in social interaction. Several components of neural signals that are detectable on the brain surface can convey such information.

“This study is only the first step towards elucidating the neural basis of human everyday behavior,” explains the neuroscientist and physician Tonio Ball. “Such investigations will become especially important in developing new neurotechnological treatment options for patients with impaired motor and language functions that work in real life situations.” The restoration of speech production becomes necessary in some forms of neurological diseases and chronic paralysis. A computer could synthesize speech for patients suffering from such conditions by using their brain signals. Information on who the patient is addressing could help the device to select the degree of formality – and to prevent it from calling the doctor “darling.”

Filed under brain neuroscience speech brain signals psychology behavior science

25 notes

Researchers at the Norwegian University of Science and Technology (NTNU) are combining two of the best-known approaches to automatic speech recognition to build a better, language-independent speech-to-text algorithm that can recognize the language being spoken in under a minute, transcribe languages on the brink of extinction, and bring the dream of ever-present voice-controlled electronics just a little bit closer.

Achieving accurate, real-time speech recognition is no easy feat. Even assuming that the sound acquired by a device can be completely stripped of background noise (which isn’t always the case), there is hardly a one-to-one correspondence between the waveform detected by a microphone and the phoneme being spoken. Different people speak the same language with different nuances – accents, lisps and other articulatory quirks. Other factors such as age, gender, health and education also play a big role in altering the sound that reaches the microphone.

The NTNU researchers are now pioneering an approach that, if it can be fully exploited, may lead to a big leap in the performance of speech-to-text applications. They demonstrated that the mechanics of human speech are fundamentally the same across all people and all languages, and they are now training a computer to analyze the pressure of the sound waves captured by the microphone to determine which parts of the speech organs were used to produce each phoneme.
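
The articulatory idea – classify each stretch of sound by which speech organs produced it, rather than matching raw waveforms directly – can be caricatured in a few lines. This is an illustrative sketch only: the 2-D features, centroid values, and class labels below are invented for the example, and real systems use statistical models trained on far richer acoustic representations.

```python
import math

# Hypothetical 2-D acoustic features (say, energy and spectral tilt)
# averaged per articulatory class; the numbers are made up.
CENTROIDS = {
    "bilabial": (0.2, 0.8),   # lips together, as in /b/, /p/, /m/
    "alveolar": (0.6, 0.5),   # tongue tip at the ridge, as in /t/, /d/
    "velar":    (0.9, 0.2),   # tongue back raised, as in /k/, /g/
}

def classify_frame(features):
    """Assign an audio frame to the articulatory class whose
    centroid is nearest in Euclidean distance."""
    return min(CENTROIDS, key=lambda c: math.dist(features, CENTROIDS[c]))

print(classify_frame((0.25, 0.75)))  # bilabial
```

Because places of articulation are shared across all spoken languages, a classifier built on this kind of representation is, in principle, language-independent – which is what makes the approach attractive for low-resource and endangered languages.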

Filed under speech recognition technology science neuroscience speech
