Neuroscience

Articles and news from the latest research reports.

Posts tagged communication

146 notes

Device to help people with Parkinson’s disease communicate better now available
SpeechVive Inc. announced Wednesday (Sept. 10) the commercial launch of the SpeechVive device intended to help people with a soft voice due to Parkinson’s disease speak more loudly and communicate more effectively.
The device is now available to try as a demo through the National Parkinson Foundation’s Centers of Excellence prior to purchase. People with a soft voice due to Parkinson’s disease can make an appointment at any of these centers: the Muhammad Ali Parkinson Center at Barrow Neurological Institute in Phoenix; the University of Florida, Gainesville, Florida; the University of North Carolina, Chapel Hill, North Carolina; Struthers Parkinson’s Center, Minneapolis, Minnesota; and Baylor College of Medicine, Houston, Texas.
"We are providing demo units and training at no cost to as many of the National Parkinson’s Centers of Excellence as are interested in offering SpeechVive in conjunction with or as an alternative to speech therapy," said Steve Mogensen, president and CEO of SpeechVive. "We also are offering the SpeechVive units and training to professionals at Veterans Administration Medical Centers across the country. The first VAMC to offer SpeechVive is in Cincinnati, Ohio."
The SpeechVive device also is available to try at the M.D. Steer Speech and Hearing Clinic at Purdue University in West Lafayette, Indiana.
The technology was developed over the past decade by Jessica Huber, associate professor in Purdue’s Department of Speech, Language and Hearing Sciences and licensed through the Purdue Office of Technology Commercialization. The focus of Huber’s research is the development and testing of behavioral treatments to improve communication and quality of life in older adults and people with degenerative motor diseases.
Parkinson’s disease commonly impairs the ability to communicate effectively: it can cause people to speak in a hushed, whispery voice and to mumble. SpeechVive reduces these speech impairments.
"The clinical data we have collected over the past four years demonstrates that SpeechVive is effective in 90 percent of the people using the device," Huber said. "I am proud of the improvements in communication and quality of life demonstrated in our clinical studies. I look forward to seeing the device on the market so that more people with Parkinson’s disease will have access to it."
More than 1.5 million people in the United States have been diagnosed with Parkinson’s disease, one of the most common degenerative neurological diseases. About 89 percent of those with the disease have voice-related changes affecting how loudly they speak, and at least 45 percent have speech-related changes affecting how clearly they speak.

Filed under parkinson's disease speech speechvive communication neuroscience science

156 notes

Hand gestures improve learning in both signers and speakers

Spontaneous gesture can help children learn, whether they use a spoken language or sign language, according to a new report.

Previous research by Susan Goldin-Meadow, the Beardsley Ruml Distinguished Service Professor in the Department of Psychology at the University of Chicago, has found that gesture helps children develop their language, learning and cognitive skills. As one of the nation’s leading authorities on language learning and gesture, she has also studied how using gesture helps older children improve their mathematical skills.

Goldin-Meadow’s new study examines how gesturing contributes to language learning in hearing and in deaf children. She concludes that gesture is a flexible way of communicating, one that can work with language to communicate or, if necessary, can itself become language. The article is published online by Philosophical Transactions of the Royal Society B and will appear in the Sept. 19 print issue of the journal, which is a theme issue on “Language as a Multimodal Phenomenon.”

“Children who can hear use gesture along with speech to communicate as they acquire spoken language,” Goldin-Meadow said. “Those gesture-plus-word combinations precede and predict the acquisition of word combinations that convey the same notions. The findings make it clear that children have an understanding of these notions before they are able to express them in speech.”

In addition to children who learned spoken languages, Goldin-Meadow studied children who learned sign language from their parents. She found that they too use gestures as they use American Sign Language. These gestures predict learning, just like the gestures that accompany speech.

Finally, Goldin-Meadow looked at deaf children whose hearing losses prevented them from learning spoken language, and whose hearing parents had not presented them with conventional sign language. These children use homemade gesture systems, called homesign, to communicate. Homesign shares properties in common with natural languages but is not a full-blown language, perhaps because the children lack “a community of communication partners,” Goldin-Meadow writes. Nevertheless, homesign can be the “first step toward an established sign language.” In Nicaragua, individual gesture systems blossomed into a more complex, shared system when homesigners were brought together for the first time.

These findings provide insight into gesture’s contribution to learning. Gesture plays a role in learning for signers even though it is in the same modality as sign. As a result, gesture cannot aid learners simply by providing a second modality. Rather, gesture adds imagery to the categorical distinctions that form the core of both spoken and sign languages.

Goldin-Meadow concludes that gesture can be the basis for a self-made language, assuming linguistic forms and functions when other vehicles are not available. But when a conventional spoken or sign language is present, gesture works along with language, helping to promote learning.

(Source: news.uchicago.edu)

Filed under gestures language acquisition learning communication homesign neuroscience science

152 notes

People understand hyperbole through intent of communication

People tend to understand nonliteral language – metaphor, hyperbole and exaggerated statements – when they realize the purpose of the communication, according to new Stanford research.

Noah Goodman, an assistant professor of psychology at Stanford, believes that figurative language – the nuanced ways that people use language to communicate meanings different than the literal meaning of their words – is one of the deepest mysteries of human communication.

"Human communication," he said, "is rife with nonliteral language that includes metaphor, irony and hyperbole. When we say ‘Juliet is the sun’ or ‘That watch cost a million dollars,’ listeners read through the direct meanings – which are often false if taken literally – to understand subtle connotations."

'Sharp' vs. 'round' numbers

To understand this communication dynamic, Goodman, director of the Computation and Cognition Lab at Stanford, and his colleagues used computational modeling. Stanford graduate student Justine Kao was the first author on the paper, which included co-authors Jean Wu, a former graduate student at Stanford, and Leon Bergen of the Massachusetts Institute of Technology.

In their lab, they develop computational models that use pragmatic reasoning to interpret metaphorical utterances. Their research for this particular project involved four online experiments with 340 subjects.

Participants in the experiments read different scenarios involving hyperbole. For example, a person bought a watch and was asked by a friend whether it was expensive. The person responded with figures ranging from low to high cost, such as $50, $51, $10,000 or $10,001. Given each response, the participants rated the probability that the purchaser thought the watch was expensive.

People tended to interpret “sharp numbers” – such as a watch costing $51 – more precisely than “round numbers,” as in a watch costing $50. 

The findings suggest that even creative and figurative language may follow predictable and rational principles.

Kao said, “This research advances our understanding of communication by providing evidence that reasoning about a speaker’s goals is critical for understanding nonliteral language. We were able to capture nuanced and nonliteral interpretations of number words using a computational model.”
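The study’s full computational model is more elaborate, but its core idea, a listener applying Bayes’ rule to a speaker who may round numbers off, can be sketched in a few lines. Everything below is an illustrative assumption: the prior, the probabilities and the `speaker_prob` helper are invented for the sketch, not taken from the paper.

```python
# Minimal sketch of pragmatic (Bayesian) interpretation of price utterances.
# All numbers here are illustrative assumptions, not the study's actual data.

# Prior belief over what the watch plausibly cost.
prior = {50: 0.45, 51: 0.05, 500: 0.35, 10_000: 0.10, 10_001: 0.05}

# Probability that a speaker who knows the true `price` produces `utterance`.
# Round numbers are treated as imprecise (speakers may round), sharp numbers
# as precise -- the asymmetry behind the "sharp vs. round" finding.
def speaker_prob(utterance, price):
    if utterance == price:
        return 0.9                      # exact report
    if utterance % 10 == 0 and abs(utterance - price) <= 5:
        return 0.5                      # round number used loosely
    return 0.0

def listener(utterance):
    """Posterior over prices given an utterance (Bayes' rule)."""
    scores = {p: prior[p] * speaker_prob(utterance, p) for p in prior}
    total = sum(scores.values())
    return {p: s / total for p, s in scores.items()}

print(listener(51))   # all belief on exactly $51
print(listener(50))   # some belief spread onto nearby prices
```

In this sketch a sharp “$51” pins the interpretation to exactly $51, while a round “$50” leaves some probability on nearby prices, mirroring the sharp-versus-round asymmetry the participants showed.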

Common ground

The research showed that listeners who work out the topic and goal of a communication, as well as its underlying subtext (what is not expressed explicitly), are better able to understand an utterance. A sense of common knowledge about what is being described is also important: speakers and listeners assume that individuals are rational agents who use common ground and shared reference points to maximize the information they convey.

As Kao put it, “There is still a long way to go before computers can understand Shakespeare, but it is a start.”

Goodman offered this example: Imagine someone describing a new restaurant, and she says, “It took 30 minutes to get a table.” People are most likely to interpret this to mean she waited about 30 minutes. But if she says, “It took a million years to get a table,” people will probably interpret this to mean that the wait was shorter than a million years, but that the person thinks it was much too long.

"One of the most fascinating facts about communication is that people do not always mean what they say – a crucial part of the listener’s job is to understand an utterance even when its literal meaning is false," the researchers wrote.

Goodman said the computational model he and his colleagues use to understand nonliteral utterances integrates empirically measured background knowledge, communication principles and reasoning about communication goals.

What’s next for this research?

Goodman and the others said they believe that the same ideas and techniques can extend to metaphor, irony and many other uses of language. For example, they have a promising initial exploration of “is a” metaphors such as “your lawyer is a shark,” Goodman said.

"Beyond these cases of figurative speech, the overall mathematical framework is beginning to give a precise theory of natural language understanding that takes into account context, intention and many subtle shades of meaning," he said, adding, "There is a lot more work to do."

(Source: news.stanford.edu)

Filed under communication language nonliteral language hyperbole pragmatics psychology neuroscience science

839 notes

What sign language teaches us about the brain
The world’s leading humanoid robot, ASIMO, has recently learnt sign language. The news of this breakthrough came just as I completed Level 1 of British Sign Language (I dare say it took me longer to master signing than it did the robot!). As a neuroscientist, I found that the experience of learning to sign made me think about how the brain perceives this means of communicating.
For instance, during my training, I found that mnemonics greatly simplified my learning process. To sign the colour blue you use the fingers of your right hand to rub the back of your left hand, my simple mnemonic for this sign being that the veins on the back of our hand appear blue. I was therefore forming an association between the word blue (English), the sign for blue (BSL), and the visual aid that links the two. However, the two languages differ markedly in that one relies on sounds and the other on visual signs.
Do our brains process these languages differently? It seems that for the most part, they don’t. And it turns out that brain studies of sign language users have helped bust a few myths.
Read more

Filed under sign language neuroimaging communication lesion studies neuroscience science

370 notes

Gestures that speak
When you gesticulate you don’t just add a “note of colour” that makes your speech more pleasant: you convey information on sentence structure and make your meanings clearer. A study carried out at SISSA in Trieste demonstrates that gestures and “prosody” (the intonation and rhythm of spoken language) form a single “communication system” at the cognitive level, and that we speak using our “whole body” and not only our vocal tract.
Have you ever found yourself gesticulating and felt a bit stupid for it while talking on the phone?
You’re not alone: people very often accompany their speech with hand gestures, sometimes even when no one can see them. Why can’t we keep still while speaking? “Because gestures and words very probably form a single ‘communication system’, which ultimately serves to enhance expression, intended as the ability to make oneself understood,” explains Marina Nespor, a neuroscientist at the International School for Advanced Studies (SISSA) of Trieste. Nespor, together with Alan Langus, a SISSA research fellow, and Bahia Guellai of the Université Paris Ouest Nanterre La Défense, who conducted the investigation at SISSA, has just published a study in Frontiers in Psychology demonstrating the role of gestures in speech prosody.
Linguists define prosody as the intonation and rhythm of spoken language, features that help to highlight sentence structure and therefore make the message easier to understand. For example, without prosody, nothing would distinguish the declarative statement “this is an apple” from the surprise question “this is an apple?” (in this case the difference lies in the intonation).
According to Nespor and colleagues, even hand gestures are part of prosody: “the prosody that accompanies speech is not ‘modality specific’,” explains Langus. “Prosodic information, for the person receiving the message, is a combination of auditory and visual cues. The ‘superior’ aspects (at the cognitive processing level) of spoken language are mapped to the motor programs responsible for the production of both speech sounds and accompanying hand gestures.”
Nespor, Langus and Guellai had 20 Italian speakers listen to a series of “ambiguous” utterances, which could be said with different prosodies corresponding to two different meanings. Examples of utterances were “come sicuramente hai visto la vecchia sbarra la porta” where, depending on meaning, “vecchia” can be the subject of the main verb (sbarrare, to block) or an adjective qualifying the subject (sbarra, bar) (‘As you for sure have seen the old lady blocks the door’ versus ‘As you for sure have seen the old bar carries it’). The utterances could be simply listened to (“audio only” modality) or be presented in a video, where the participants could both listen to the sentences and see the accompanying gestures. In the “video” stimuli, the condition could be “matched” (gestures corresponding to the meaning conveyed by speech prosody) or “mismatched” (gestures matching the alternative meaning).
“In the matched conditions there was no improvement ascribable to gestures: the participants’ performance was very good both in the video and in the audio-only sessions. It’s in the mismatched condition that the effect of hand gestures became apparent,” explains Langus. “With these stimuli the subjects were much more likely to make the wrong choice (that is, they’d choose the meaning indicated by the gestures rather than by the speech) compared to the matched or audio-only conditions. This means that gestures affect how meaning is interpreted, and we believe this points to the existence of a common cognitive system for gestures, intonation and rhythm of spoken language.”
“In human communication, voice is not sufficient: even the torso and in particular hand movements are involved, as are facial expressions”, concludes Nespor.

Filed under gestures prosody communication speech perception psychology neuroscience science

484 notes

Finding thoughts in speech
For the first time, neuroscientists were able to find out how different thoughts are reflected in neuronal activity during natural conversations. Johanna Derix, Olga Iljina and the interdisciplinary team of Dr. Tonio Ball from the Cluster of Excellence BrainLinks-BrainTools at the University of Freiburg and the Epilepsy Center of the University Medical Center Freiburg (Freiburg, Germany) report on the link between speech, thoughts and brain responses in a special issue of Frontiers in Human Neuroscience.
"Thoughts are difficult to investigate, as one cannot observe in a direct manner what the person is thinking about. Language, however, reflects the underlying mental processes, so we can perform linguistic analyses of the subjects’ speech and use such information as a ‘bridge’ between the neuronal processes and the subject’s thoughts," explains neuroscientist Johanna Derix.
The novelty of the authors’ approach is that the participants were not instructed to think and talk about a given topic in an experimental setting. Instead, the researchers analysed everyday conversations and the underlying brain activity, which was recorded directly from the cortical surface. This study was possible owing to the help of epilepsy patients in whom recordings of neural activity had to be obtained over several days for the purpose of pre-neurosurgical diagnostics.
For a start, borders between individual thoughts in continuous conversations had to be identified. Earlier psycholinguistic research indicates that a simple sentence is a suitable unit to contain a single thought, so the researchers opted for linguistic segmentation into simple sentences. The resulting “idea” units were classified into different categories. These included, for example, whether or not a sentence expressed memory- or self-related content. Then, the researchers analysed content-specific neural responses and observed clearly visible patterns of brain activity.
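As a rough illustration of that pipeline (split a transcript into sentence-sized units, then label each unit by content), here is a toy sketch. The keyword lists, category names and example transcript are invented stand-ins; the researchers relied on careful manual linguistic annotation rather than keyword matching.

```python
import re

# Toy sketch of segmentation into "idea units" plus content labelling.
# The keyword sets below are illustrative assumptions, not the study's
# annotation scheme.

SELF_WORDS = {"i", "me", "my", "myself"}
MEMORY_WORDS = {"remember", "recall", "yesterday", "once"}

def idea_units(transcript):
    """Split on sentence-final punctuation into rough idea units."""
    return [s.strip() for s in re.split(r"[.!?]+", transcript) if s.strip()]

def label(unit):
    """Tag one idea unit with simple content categories."""
    words = set(re.findall(r"[a-z']+", unit.lower()))
    return {
        "self_related": bool(words & SELF_WORDS),
        "memory_related": bool(words & MEMORY_WORDS),
    }

transcript = "I remember the old clinic. The weather is awful today."
for unit in idea_units(transcript):
    print(unit, label(unit))
```

Each labelled unit could then be paired with the cortical activity recorded while it was spoken, which is the alignment step that makes the content-specific comparisons possible.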
Thus, the neuroscientists from Freiburg have demonstrated the feasibility of their innovative approach to investigate, via speech, how the human brain processes thoughts during real-life conditions.

Filed under speech production neural activity thinking prefrontal cortex communication autobiographical memory neuroscience science

219 notes

You Took the Words Right Out of My Brain
Our brain activity is more similar to that of speakers we are listening to when we can predict what they are going to say, a team of neuroscientists has found. The study, which appears in the Journal of Neuroscience, provides fresh evidence on the brain’s role in communication.
“Our findings show that the brains of both speakers and listeners take language predictability into account, resulting in more similar brain activity patterns between the two,” says Suzanne Dikker, the study’s lead author and a post-doctoral researcher in New York University’s Department of Psychology and Utrecht University. “Crucially, this happens even before a sentence is spoken and heard.”
“A lot of what we’ve learned about language and the brain has been from controlled laboratory tests that tend to look at language in the abstract—you get a string of words or you hear one word at a time,” adds Jason Zevin, an associate professor of psychology and linguistics at the University of Southern California and one of the study’s co-authors. “They’re not so much about communication, but about the structure of language. The current experiment is really about how we use language to express common ground or share our understanding of an event with someone else.”
The study’s other authors were Lauren Silbert, a recent PhD graduate from Princeton University, and Uri Hasson, an assistant professor in Princeton’s Department of Psychology.
Traditionally, it was thought that our brains always process the world around us from the “bottom up”—when we hear someone speak, our auditory cortex first processes the sounds, and then other areas in the brain put those sounds together into words and then sentences and larger discourse units. From here, we derive meaning and an understanding of the content of what is said to us.
However, in recent years, many neuroscientists have shifted to a “top-down” view of the brain, which they now see as a “prediction machine”: we are constantly anticipating events in the world around us so that we can respond to them quickly and accurately. Our brain takes advantage of the fact that words and sounds can be predicted from context: when we hear “Grass is…” we can easily predict “green.”
What’s less understood is how this predictability might affect the speaker’s brain, or even the interaction between speakers and listeners.
In the Journal of Neuroscience study, the researchers collected brain responses from a speaker while she described images that she had viewed. These images varied in terms of likely predictability for a specific description. For instance, one image showed a penguin hugging a star (a relatively easy image in which to predict a speaker’s description). However, another image depicted a guitar stirring a bicycle tire submerged in a boiling pot of water—a picture that is much less likely to yield a predictable description: Is it “a guitar cooking a tire,” “a guitar boiling a wheel,” or “a guitar stirring a bike”?
Then, another group of subjects listened to those descriptions while viewing the same images. During this period, the researchers monitored the subjects’ brain activity.
When comparing the speaker’s brain responses directly to the listeners’ brain responses, they found that activity patterns in brain areas where spoken words are processed were more similar between the listeners and the speaker when the listeners could predict what the speaker was going to say.
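The core quantity behind such a comparison is a similarity measure between the speaker’s and a listener’s activity time courses in the same brain region. A minimal sketch with synthetic signals follows; the noise levels and variable names are illustrative assumptions, not the study’s data or analysis pipeline.

```python
import numpy as np

# Sketch of speaker-listener "coupling": correlate a speaker's activity
# time course in a region with each listener's time course in the same
# region. The signals here are synthetic stand-ins for real recordings.

rng = np.random.default_rng(0)

def coupling(speaker_ts, listener_ts):
    """Pearson correlation between two activity time courses."""
    return float(np.corrcoef(speaker_ts, listener_ts)[0, 1])

n_timepoints = 200
shared = rng.standard_normal(n_timepoints)     # stimulus-driven component

speaker = shared + 0.5 * rng.standard_normal(n_timepoints)
# A listener who predicts well tracks the shared component closely;
# a poorly predicting listener carries more idiosyncratic noise.
good_predictor = shared + 0.5 * rng.standard_normal(n_timepoints)
poor_predictor = shared + 2.0 * rng.standard_normal(n_timepoints)

print(coupling(speaker, good_predictor))   # stronger coupling
print(coupling(speaker, poor_predictor))   # weaker coupling
```

A listener who tracks the shared, stimulus-driven component closely ends up more strongly correlated with the speaker than one whose signal is dominated by idiosyncratic noise, which is the pattern the predictability finding describes.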
When listeners can predict what a speaker is going to say, the authors suggest, their brains take advantage of this by sending a signal to their auditory cortex that it can expect sound patterns corresponding to predicted words (e.g., “green” while hearing “grass is…”). Interestingly, they add, the speaker’s brain is showing a similar effect as she is planning what she will say: brain activity in her auditory language areas is affected by how predictable her utterance will be for her listeners.
“In addition to facilitating rapid and accurate processing of the world around us, the predictive power of our brains might play an important role in human communication,” notes Dikker, who conducted some of the research as a post-doctoral fellow at Weill Cornell Medical College’s Sackler Institute for Developmental Psychobiology. “During conversation, we adapt our speech rate and word choices to each other—for example, when explaining science to a child as opposed to a fellow scientist—and these processes are governed by our brains, which correspondingly align to each other.”

You Took the Words Right Out of My Brain

Our brain activity is more similar to that of speakers we are listening to when we can predict what they are going to say, a team of neuroscientists has found. The study, which appears in the Journal of Neuroscience, provides fresh evidence on the brain’s role in communication.

“Our findings show that the brains of both speakers and listeners take language predictability into account, resulting in more similar brain activity patterns between the two,” says Suzanne Dikker, the study’s lead author and a post-doctoral researcher in New York University’s Department of Psychology and Utrecht University. “Crucially, this happens even before a sentence is spoken and heard.”

“A lot of what we’ve learned about language and the brain has been from controlled laboratory tests that tend to look at language in the abstract—you get a string of words or you hear one word at a time,” adds Jason Zevin, an associate professor of psychology and linguistics at the University of Southern California and one of the study’s co-authors. “They’re not so much about communication, but about the structure of language. The current experiment is really about how we use language to express common ground or share our understanding of an event with someone else.”

The study’s other authors were Lauren Silbert, a recent PhD graduate from Princeton University, and Uri Hasson, an assistant professor in Princeton’s Department of Psychology.

Traditionally, it was thought that our brains always process the world around us from the “bottom up”—when we hear someone speak, our auditory cortex first processes the sounds, and then other areas in the brain put those sounds together into words and then sentences and larger discourse units. From here, we derive meaning and an understanding of the content of what is said to us.

However, in recent years, many neuroscientists have shifted to a “top-down” view of the brain, which they now see as a “prediction machine”: We are constantly anticipating events in the world around us so that we can respond to them quickly and accurately. We can predict words and sounds based on context, and our brain takes advantage of this: when we hear “Grass is…” we can easily predict “green.”

What’s less understood is how this predictability might affect the speaker’s brain, or even the interaction between speakers and listeners.

In the Journal of Neuroscience study, the researchers collected brain responses from a speaker while she described images that she had viewed. The images varied in how predictable a description of them was likely to be. For instance, one image showed a penguin hugging a star (an image for which a speaker’s description is relatively easy to predict). Another image depicted a guitar stirring a bicycle tire submerged in a boiling pot of water, a picture much less likely to yield a predictable description: Is it “a guitar cooking a tire,” “a guitar boiling a wheel,” or “a guitar stirring a bike”?

Then, another group of subjects listened to those descriptions while viewing the same images. During this period, the researchers monitored the subjects’ brain activity.

When comparing the speaker’s brain responses directly to the listeners’ brain responses, they found that activity patterns in brain areas where spoken words are processed were more similar between the listeners and the speaker when the listeners could predict what the speaker was going to say.

When listeners can predict what a speaker is going to say, the authors suggest, their brains take advantage of this by sending a signal to their auditory cortex that it can expect sound patterns corresponding to predicted words (e.g., “green” while hearing “grass is…”). Interestingly, they add, the speaker’s brain shows a similar effect as she plans what she will say: brain activity in her auditory language areas is affected by how predictable her utterance will be for her listeners.

“In addition to facilitating rapid and accurate processing of the world around us, the predictive power of our brains might play an important role in human communication,” notes Dikker, who conducted some of the research as a post-doctoral fellow at Weill Cornell Medical College’s Sackler Institute for Developmental Psychobiology. “During conversation, we adapt our speech rate and word choices to each other—for example, when explaining science to a child as opposed to a fellow scientist—and these processes are governed by our brains, which correspondingly align to each other.”

Filed under language communication brain activity auditory cortex psychology neuroscience science

209 notes

What Does Compassion Sound Like?

“Good to see you. I’m sorry. It sounds like you’ve had a tough, tough week.” Spoken by a doctor to a cancer patient, that statement is an example of compassionate behavior observed by a University of Rochester Medical Center team in a new study published by the journal Health Expectations.


Rochester researchers believe they are the first to systematically pinpoint and catalogue compassionate words and actions in doctor-patient conversations. By breaking down the dialogue and studying the context, scientists hope to create a behavioral taxonomy that will guide medical training and education.

“In health care, we believe in being compassionate but the reality is that many of us have a preference for technical and biomedical issues over establishing emotional ties,” said senior investigator Ronald Epstein, M.D., professor of Family Medicine, Psychiatry, Oncology, and Nursing and director of the UR Center for Communication and Disparities Research.

Epstein is a national and international keynote speaker and investigator on mindfulness and communication in medical education.

His team recruited 23 oncologists from a variety of private and hospital-based oncology clinics in the Rochester, N.Y., area. The doctors and their stage III or stage IV cancer patients volunteered to be recorded during routine visits. Researchers then analyzed the 49 audio-recorded encounters that took place between November 2011 and June 2012, and looked for key observable markers of compassion.  

In contrast to empathy – another quality that Epstein and his colleagues have studied in the medical community — compassion involves a deeper and more active imagination of the patient’s condition. An important part of this study, therefore, was to identify examples of the three main elements of compassion: recognition of suffering, emotional resonance, and movement towards addressing suffering.

Emotional resonance, or a sense of sharing and connection, was illustrated by this dialogue: Patient: “I should just get a room here.” Oncologist: “Oh, I hope you don’t really feel like you’re spending that much time here.”

Another conversation included this response from a physician to a patient, who complained about a drug patch for pain: “Who wants a patch that makes you drowsy, constipated and fuzzy? I’ll pass, thank you very much.”

Some doctors provided good examples of how they use humor to raise a patient’s spirits without deviating from the seriousness of the situation. In one case, for example, a patient was concerned that he would not be able to drink two liters of barium sulfate in preparation for a CT scan.

Doctor: “If you just get down one little cup it will tell us what’s going on in the stomach. What I tell people when we’re not being recorded is to take a cup and then pour the rest down the toilet and tell them you drank it all (laughter)… Just a creative interpretation of what you are supposed to take.”

Patient: “I love it, I love it. Well, I thank you for that. I’m prepared to do what I’ve got to do to get this right.”

Researchers evaluated tone of voice, animation that conveyed tenderness and understanding, and other ways in which doctors gave reassurance or psychological comfort.

Here’s an instance in which an oncologist encouraged a reluctant patient to follow through with a planned trip to Arizona: “You know, if you decide to do it, break down and allow somebody to meet you at the gates and use a cart or wheelchair to get you to your next gate and things like that. And having just sent my father-in-law off to Hawaii and told him he had to do that, he said no, no, I can get there. Just, it’s okay. Nobody is gonna look at you and say, ‘What’s an able-bodied man doing in a cart?’ Just, it’s okay. It’s part of setting limits.”

Researchers also observed non-verbal communication, such as pauses or sighs at appropriate times, as well as speech features and voice quality (tone, pitch, loudness) and metaphorical language that conveyed certain attitudes and meanings.

Compassion unfolds over time, researchers concluded. During the process, physicians must challenge themselves to stay with a difficult discussion, which opens the door for the patient to admit uncertainty and grieve the loss of normalcy in life.

“It became apparent that compassion is not a quality of a single utterance but rather is made up of presence and engagement that suffuses an entire conversation,” the study said. First author Rachel Cameron, B.A., is a student at the University of Rochester School of Medicine and Dentistry; the audio recordings were reviewed by a diverse group of medical professionals with backgrounds in literature and linguistics, as well as by palliative care specialists.

(Source: urmc.rochester.edu)

Filed under empathy doctor-patient relationship compassion communication medicine

207 notes

iPads help late-speaking children with autism develop language

The iPad you use to check email, watch episodes of Mad Men and play Words with Friends may hold the key to enabling children with autism spectrum disorders to express themselves through speech. New research indicates that children with autism who are minimally verbal can learn to speak later than previously thought, and iPads are playing an increasing role in making that happen, according to Ann Kaiser, a researcher at Vanderbilt Peabody College of Education and Human Development.

In a study funded by Autism Speaks, Kaiser found that using speech-generating devices to encourage children ages 5 to 8 to develop speaking skills resulted in the subjects developing considerably more spoken words compared to other interventions. All of the children in the study learned new spoken words and several learned to produce short sentences as they moved through the training.

“For some parents, it was the first time they’d been able to converse with their children,” said Kaiser, Susan W. Gray Professor of Education and Human Development. “With the onset of iPads, that kind of communication may become possible for greater numbers of children with autism and their families.”

Augmentative and alternative communication devices—which employ symbols, gestures, pictures and speech output—have been used for decades by people who have difficulty speaking. Now, with the availability of apps that emulate those devices, the iPad offers a more accessible, cheaper and more user-friendly way to help minimally verbal children with autism communicate. And the iPad is far less stigmatizing for young people with autism who rely on it for communicating with fellow students, teachers and friends.

The reason speech-generating devices like the iPad are effective in promoting language development is simple. “When we say a word it sounds a little different every time, and words blend together and take on slightly different acoustic characteristics in different contexts,” Kaiser explained. “Every time the iPad says a word, it sounds exactly the same, which is important for children with autism, who generally need things to be as consistent as possible.”

As many as a third of children with autism have mastery of only a few words by the time they are school age. Previously, researchers thought that if children with autism had not begun to speak by age 5 or 6, they were unlikely to acquire spoken language. But Kaiser is encouraged by study results and believes that her iPad studies may help change that notion.

Building on findings from this research, Kaiser has begun a new five-year study supported by the National Institutes of Health’s Autism Centers of Excellence with colleagues at UCLA, the University of Rochester, and Weill Cornell Medical College. She and a team of researchers and therapists at the four sites are using iPads in two contrasting interventions (direct-teaching and naturalistic-teaching) to evaluate the effectiveness of the two communication interventions for children who have autism and use minimal spoken language.

In the direct-teaching approach, children are taught prerequisite skills for communication (such as matching objects, motor imitation and verbal imitation) and basic communication skills (such as requesting objects) in a massed trial format. For example, an adult partner may present five to 10 consecutive opportunities for a child to use the iPad to request preferred objects. During these opportunities, the child is prompted to use the iPad to request and may receive physical assistance if he cannot use the iPad independently.

In the naturalistic-teaching approach, the adult models the use of the iPad during play and conversation. She also teaches turn-taking, use of gestures to communicate, play with objects and social attention to partners during the play. She provides a limited number of prompts to use the iPad to make choices, to comment or make new requests.

In both approaches, children touch the symbols on the screen, listen to the device repeat the words, and sometimes say the words themselves. They are encouraged to use both words and the iPad to communicate, and the adult therapist uses both modes of communication throughout the instructional sessions.

Results from the Autism Speaks study will be available in Spring 2014; the NIH study will continue through Spring 2017; and more information can be found at Kidtalk.org.

Filed under autism ASD language language development communication psychology neuroscience science

86 notes

Kelly the Robot Helps Kids Tackle Autism

Using a kid-friendly robot during behavioral therapy sessions may help some children with autism gain better social skills, a preliminary study suggests.


The study, of 19 children with autism spectrum disorders (ASDs), found that kids tended to do better when their visit with a therapist included a robot “co-therapist.” On average, they made bigger gains in social skills such as asking “appropriate” questions, answering questions and making conversational comments.

So-called humanoid robots are already being marketed for this purpose, but there has been little research to back up their use.

"Going into this study, we were skeptical," said lead researcher Joshua Diehl, an assistant professor of psychology at the University of Notre Dame in Indiana, who said he has no financial interest in the technology.

"We found that, to our surprise, the kids did better when the robot was added," he said.

There are still plenty of caveats, however, said Diehl, who is presenting his team’s findings Saturday at the International Meeting for Autism Research (IMFAR) in San Sebastian, Spain.

For one, the study was small. And it’s not clear that the results seen in a controlled research setting would be the same in the real world of therapists’ offices, according to Diehl.

"I’d say this is not yet ready for prime time," he said.

ASDs are a group of developmental disorders that affect a person’s ability to communicate and interact socially. The severity of those effects ranges widely: Some people have mild problems socializing but have normal to above-normal intelligence; others have profound difficulties relating to others and may have intellectual impairment as well.

Experts have become interested in using technology — from robots to iPads — along with standard ASD therapies because it may help bridge some of the communication issues kids have.

Human communication is complex and unpredictable, with body language, facial expressions and other subtle cues coming into the mix, explained Geraldine Dawson, chief science officer for the advocacy group Autism Speaks.

A robot or a computer game, on the other hand, can be programmed to be simple and predictable, and that may help kids with ASDs better process the information they are being given, Dawson said.

"Broadly speaking," she said, "we are very excited about the potential role for technology in diagnosing and treating ASDs." But she also agreed with Diehl that the findings are "very preliminary," and that researchers have a lot more to learn about how technology — robots or otherwise — fits into ASD therapies.

For the study, Diehl’s team used a humanoid robot manufactured by Aldebaran Robotics, which markets the NAO robot for use in education, including special education for kids with ASDs. The robot, which stands about 2 feet tall, looks like a toy but is priced more like a small car, Diehl noted.

The NAO H25 “Academic Edition” rings up at about $16,000. (Diehl said the study was funded by government and private grants, not the manufacturer.)

The researchers had 19 kids aged 6 to 13 complete 12 behavioral therapy sessions, where a therapist worked with the child on social skills. Half of the sessions involved the robot, named Kelly, which was wheeled out so the child could practice conversing with her, while the therapist stood by.

"So the child might say, ‘Hi Kelly, how are you?’" Diehl explained. "Then Kelly would say, ‘Fine. What did you do today?’" During the non-Kelly sessions, another person entered the room and carried on the same conversation with the child that the robot would have.

On average, Diehl’s team found, kids made bigger gains from the sessions that included Kelly — based on both their interactions with their therapists, and their parents’ reports.

"There was one child who, when his dad came home from work, asked him how his day was," Diehl said. "He’d never done that before."

Still, he stressed that while the robot sessions seemed more successful on average, the children varied widely in their responses to Kelly. Going forward, Diehl said, it will be important to figure out whether there are certain kids with ASDs more likely to benefit from a robot co-therapist.

Dawson agreed that there is no one-size-fits-all ASD therapy. “Any therapy for a person with an ASD has to be individualized,” she said. The idea with any technology, she added, is to give therapists and doctors extra “tools” to work with.

A separate study presented at the same meeting looked at another type of tool. Researchers had 60 “minimally verbal” children with ASDs attend two “play-based” sessions per week, aimed at boosting their ability to speak and gesture. Half of the kids were also given a “speech-generating device,” like an iPad.

Three and six months later, children who worked with the devices were able to say more words and were quicker to take up conversational skills.

Dawson said the robot and iPad studies are just part of the growing body of research into how technology can not only aid in ASD therapies, but also help doctors diagnose the disorders or help parents manage at home.

But both Diehl and Dawson stressed that no robot or iPad is intended to stand in for human connection. The idea, after all, is to enhance kids’ ability to communicate and have relationships, Dawson noted. “Technology will never take the place of people,” she said.

The data and conclusions of research presented at meetings should be viewed as preliminary until published in a peer-reviewed journal.

(Source: webmd.com)

Filed under ASD autism humanoid robots robots robotics communication social skills neuroscience psychology science
