Neuroscience

Articles and news from the latest research reports.

Posts tagged gestures

156 notes

Hand gestures improve learning in both signers and speakers

Spontaneous gesture can help children learn, whether they use a spoken language or sign language, according to a new report.


Previous research by Susan Goldin-Meadow, the Beardsley Ruml Distinguished Service Professor in the Department of Psychology, has found that gesture helps children develop their language, learning and cognitive skills. As one of the nation’s leading authorities on language learning and gesture, she has also studied how using gesture helps older children improve their mathematical skills.

Goldin-Meadow’s new study examines how gesturing contributes to language learning in hearing and in deaf children. She concludes that gesture is a flexible way of communicating, one that can work with language to communicate or, if necessary, can itself become language. The article is published online by Philosophical Transactions of the Royal Society B and will appear in the Sept. 19 print issue of the journal, which is a theme issue on “Language as a Multimodal Phenomenon.”

“Children who can hear use gesture along with speech to communicate as they acquire spoken language,” Goldin-Meadow said. “Those gesture-plus-word combinations precede and predict the acquisition of word combinations that convey the same notions. The findings make it clear that children have an understanding of these notions before they are able to express them in speech.”

In addition to children who learned spoken languages, Goldin-Meadow studied children who learned sign language from their parents. She found that they too use gestures as they use American Sign Language. These gestures predict learning, just like the gestures that accompany speech.

Finally, Goldin-Meadow looked at deaf children whose hearing losses prevented them from learning spoken language, and whose hearing parents had not presented them with conventional sign language. These children use homemade gesture systems, called homesign, to communicate. Homesign shares properties with natural languages but is not a full-blown language, perhaps because the children lack “a community of communication partners,” Goldin-Meadow writes. Nevertheless, homesign can be the “first step toward an established sign language.” In Nicaragua, individual gesture systems blossomed into a more complex, shared system when homesigners were brought together for the first time.

These findings provide insight into gesture’s contribution to learning. Gesture plays a role in learning for signers even though it is in the same modality as sign. As a result, gesture cannot aid learners simply by providing a second modality. Rather, gesture adds imagery to the categorical distinctions that form the core of both spoken and sign languages.

Goldin-Meadow concludes that gesture can be the basis for a self-made language, assuming linguistic forms and functions when other vehicles are not available. But when a conventional spoken or sign language is present, gesture works along with language, helping to promote learning.

(Source: news.uchicago.edu)

Filed under gestures language acquisition learning communication homesign neuroscience science

370 notes


Gestures that speak

When you gesticulate you don’t just add a “note of colour” that makes your speech more pleasant: you convey information on sentence structure and make your meanings clearer. A study carried out at SISSA in Trieste demonstrates that gestures and “prosody” (the intonation and rhythm of spoken language) form a single “communication system” at the cognitive level, and that we speak using our “whole body” and not only our vocal tract.

Have you ever caught yourself gesticulating while talking on the phone, and felt a bit silly for it?

You’re not alone: people very often accompany their speech with hand gestures, sometimes even when no one can see them. Why can’t we keep still while speaking? “Because gestures and words very probably form a single ‘communication system’, which ultimately serves to enhance expression, that is, the ability to make oneself understood”, explains Marina Nespor, a neuroscientist at the International School for Advanced Studies (SISSA) in Trieste. Nespor, together with Alan Langus, a SISSA research fellow, and Bahia Guellai of the Université Paris Ouest Nanterre La Défense, who conducted the investigation at SISSA, has just published a study in Frontiers in Psychology demonstrating the role of gestures in speech “prosody”.

Linguists define prosody as the intonation and rhythm of spoken language, features that help to highlight sentence structure and therefore make the message easier to understand. For example, without prosody, nothing would distinguish the declarative statement “this is an apple” from the surprise question “this is an apple?” (in this case the difference lies in the intonation).

According to Nespor and colleagues, even hand gestures are part of prosody: “the prosody that accompanies speech is not ‘modality specific’”, explains Langus. “Prosodic information, for the person receiving the message, is a combination of auditory and visual cues. The ‘superior’ aspects (at the cognitive processing level) of spoken language are mapped to the motor programs responsible for the production of both speech sounds and accompanying hand gestures”.

Nespor, Langus and Guellai had 20 Italian speakers listen to a series of “ambiguous” utterances that could be said with different prosodies corresponding to two different meanings. One example was “come sicuramente hai visto la vecchia sbarra la porta”, where, depending on the meaning, “vecchia” can be the subject of the main verb (sbarrare, to block) or an adjective qualifying the subject (sbarra, bar): ‘As you have surely seen, the old lady blocks the door’ versus ‘As you have surely seen, the old bar carries it’. The utterances could either be simply listened to (“audio only” modality) or be presented in a video, where the participants could both listen to the sentences and see the accompanying gestures. In the “video” stimuli, the condition could be “matched” (gestures corresponding to the meaning conveyed by speech prosody) or “mismatched” (gestures matching the alternative meaning).

“In the matched conditions there was no improvement ascribable to gestures: the participants’ performance was very good both in the video and in the ‘audio only’ sessions. It’s in the mismatched condition that the effect of hand gestures became apparent”, explains Langus. “With these stimuli the subjects were much more likely to make the wrong choice (that is, they’d choose the meaning indicated by the gestures rather than by the speech) compared to the matched or audio-only conditions. This means that gestures affect how meaning is interpreted, and we believe this points to the existence of a common cognitive system for gestures, intonation and the rhythm of spoken language”.

“In human communication, voice is not sufficient: even the torso and in particular hand movements are involved, as are facial expressions”, concludes Nespor.

Filed under gestures prosody communication speech perception psychology neuroscience science

71 notes


Gestures of Human and Ape Infants Are More Similar Than You Might Expect

Thirteen years after the release of On the Origin of Species, Charles Darwin published another report on the evolution of mankind. In the 1872 book The Expression of the Emotions in Man and Animals, the naturalist argued that people from different cultures exhibit any given emotion through the same facial expression. This hypothesis didn’t quite pan out—last year, researchers poked a hole in the idea by showing that the expression of emotions such as anger, happiness and fear wasn’t universal. Nonetheless, certain basic things—such as the urge to cry out in pain, an increase in blood pressure when feeling anger, even shrugging when we don’t understand something—cross cultures.

A new study, published today in the journal Frontiers in Psychology, compares such involuntary responses, but with an added twist: Some observable behaviors aren’t only universal to the human species, but to our closest relatives too—chimpanzees and bonobos.

Using video analysis, a team of UCLA researchers found that human, chimpanzee and bonobo babies make similar gestures when interacting with caregivers. Members of all three species reach with their arms and hands for objects or people, and point with their fingers or heads. They also raise their arms in the same manner to indicate that they want to be picked up. Such gestures, which appear to be innate in all three species, precede and eventually lead to the development of language in humans, the researchers say.

To pick up on these behaviors, the team studied three babies of differing species through videos taken over a number of months. The child stars of these videos included a chimpanzee named Panpanzee, a bonobo called Panbanisha and a human girl, identified as GN. The apes were raised together at the Georgia State University Language Research Center in Atlanta, where researchers study language and cognitive processes in chimps, monkeys and humans. There, Panpanzee and Panbanisha were taught to communicate with their human caregivers using gestures, noises and lexigrams, abstract symbols that represent words. The human child grew up in her family’s home, where her parents facilitated her learning.

Researchers filmed the child’s development for seven months, starting when she was 11 months old, while the apes were taped from 12 to 26 months of age. In the early stages of the study, the observed gestures were of a communicative nature: all three infants engaged in the behavior with the intention of conveying their emotions and needs. They made eye contact with their caregivers, added non-verbal vocalizations to their movements or exerted physical effort to elicit a response.

By the second half of the experiment, the production of communicative symbols—visual ones for the apes, vocal ones for the human—increased. As she grew older, the human child began using more spoken words, while the chimpanzee and bonobo learned and used more lexigrams. Eventually, the child began speaking to convey what she felt, rather than only gesturing. The apes, on the other hand, continued to rely on gestures. The study calls this divergence in behavior “the first indication of a distinctive human pathway to language.”

The researchers speculate that the matching behaviors can be traced to the last shared ancestor of humans, chimps and bonobos, who lived between four and seven million years ago. That ancestor probably exhibited the same early gestures, which all three species then inherited. When the species diverged, humans managed to build on this communicative capacity by eventually graduating to speech.

Hints of this can be seen in how the human child paired her gestures with non-speech vocalizations, the precursors to words, far more than the apes did. It’s this successful combination of gestures and words that may have led to the birth of human language.

Filed under language development evolution gestures primates symbolic development psychology neuroscience science

37 notes


Children With Brain Lesions Able To Use Gestures Important To Language Learning

Children with brain lesions suffered before or around the time of birth are able to use gestures – an important aspect of the language learning process – to convey simple sentences, a Georgia State University researcher has found.

Şeyda Özçalışkan, assistant professor of psychology, and fellow researchers at the University of Chicago looked at children who suffered lesions to one side of the brain to see whether they used gestures in the same way as typically developing children. She examined gestures such as pointing to a cookie while saying “eat” to convey the meaning “eat cookie,” a combination children produce several months before expressing such sentences exclusively in speech.

“We do know that children with brain injuries show an amazing amount of plasticity (the ability to change) for language learning if they acquire lesions early in life,” Özçalışkan said. “However, we did not know whether this plasticity was characterized by the same developmental trajectory shown for typically developing children, with gesture leading the way into speech. We looked at the onset of different sentence constructions in children with early brain injuries, and wanted to find out if we could see precursors of different sentence types in gesture.

“For children with brain injuries, we found that this pattern holds, similar to typically developing children,” she said. “Children with unilateral brain injuries produce different kinds of simple sentences several months later than typically developing children. More important, the delays we observe in producing different sentences in speech are preceded by a similar delay in producing the same sentences in gesture-speech combinations.”

Children with brain injuries also had a more difficult time producing complex sentences across gesture and speech, such as those conveying relationships between actions, for example saying “help me do it” while making a painting gesture.

“This in turn was later reflected in a much narrower range of complex sentence types expressed in their speech,” Özçalışkan said. “This suggested to us, in general, that producing sentences across gesture and speech may serve as an embodied sensorimotor experience that might help children take the next developmental step in producing these sentences in speech.

“And if you bypass the gesture-speech combination stage, that might negatively affect developing a broader representation of complex sentence types in speech.”

The researchers also compared children with smaller brain lesions to children with larger lesions, and found a greater delay in producing sentences, both in speech and in gesture-speech combinations, among the children with large lesions.

The research has implications for developing interventions to help children with the language learning process, “as it shows that gestures are integral to the process of language learning even when that learning is taking place in an injured brain,” Özçalışkan said.

“When children do different kinds of sentence combinations across gesture and speech, that’s like a signal to the caregiver that ‘I’m ready for this,’” she said. “The caregiver can then provide relevant input to the child, and that could in turn help the child take the next developmental step in producing that sentence entirely in speech.”

Filed under children brain lesions gestures language learning speech neuroscience science
