Neuroscience

Articles and news from the latest research reports.

Posts tagged communication

Kelly the Robot Helps Kids Tackle Autism

Using a kid-friendly robot during behavioral therapy sessions may help some children with autism gain better social skills, a preliminary study suggests.

The study, of 19 children with autism spectrum disorders (ASDs), found that kids tended to do better when their visit with a therapist included a robot “co-therapist.” On average, they made bigger gains in social skills such as asking “appropriate” questions, answering questions and making conversational comments.

So-called humanoid robots are already being marketed for this purpose, but there has been little research to back up the practice.

"Going into this study, we were skeptical," said lead researcher Joshua Diehl, an assistant professor of psychology at the University of Notre Dame in Indiana, who said he has no financial interest in the technology.

"We found that, to our surprise, the kids did better when the robot was added," he said.

There are still plenty of caveats, however, said Diehl, who is presenting his team’s findings Saturday at the International Meeting for Autism Research (IMFAR) in San Sebastian, Spain.

For one, the study was small. And it’s not clear that the results seen in a controlled research setting would be the same in the real world of therapists’ offices, according to Diehl.

"I’d say this is not yet ready for prime time," he said.

ASDs are a group of developmental disorders that affect a person’s ability to communicate and interact socially. The severity of those effects ranges widely: Some people have mild problems socializing, but have normal to above-normal intelligence; some people have profound difficulties relating to others, and may have intellectual impairment as well.

Experts have become interested in using technology — from robots to iPads — along with standard ASD therapies because it may help bridge some of the communication issues kids have.

Human communication is complex and unpredictable, with body language, facial expressions and other subtle cues coming into the mix, explained Geraldine Dawson, chief science officer for the advocacy group Autism Speaks.

A robot or a computer game, on the other hand, can be programmed to be simple and predictable, and that may help kids with ASDs better process the information they are being given, Dawson said.

"Broadly speaking," she said, "we are very excited about the potential role for technology in diagnosing and treating ASDs." But she also agreed with Diehl that the findings are "very preliminary," and that researchers have a lot more to learn about how technology — robots or otherwise — fits into ASD therapies.

For the study, Diehl’s team used a humanoid robot manufactured by Aldebaran Robotics, which markets the NAO robot for use in education, including special education for kids with ASDs. The robot, which stands about 2 feet tall, looks like a toy but is priced more like a small car, Diehl noted.

The NAO H25 “Academic Edition” rings up at about $16,000. (Diehl said the study was funded by government and private grants, not the manufacturer.)

The researchers had 19 kids aged 6 to 13 complete 12 behavioral therapy sessions, where a therapist worked with the child on social skills. Half of the sessions involved the robot, named Kelly, which was wheeled out so the child could practice conversing with her, while the therapist stood by.

"So the child might say, ‘Hi Kelly, how are you?’" Diehl explained. "Then Kelly would say, ‘Fine. What did you do today?’" During the non-Kelly sessions, another person entered the room and carried on the same conversation with the child that the robot would have.

On average, Diehl’s team found, kids made bigger gains from the sessions that included Kelly — based on both their interactions with their therapists, and their parents’ reports.

"There was one child who, when his dad came home from work, asked him how his day was," Diehl said. "He’d never done that before."

Still, he stressed that while the robot sessions seemed more successful on average, the children varied widely in their responses to Kelly. Going forward, Diehl said, it will be important to figure out whether there are certain kids with ASDs more likely to benefit from a robot co-therapist.

Dawson agreed that there is no one-size-fits-all ASD therapy. “Any therapy for a person with an ASD has to be individualized,” she said. The idea with any technology, she added, is to give therapists and doctors extra “tools” to work with.

A separate study presented at the same meeting looked at another type of tool. Researchers had 60 “minimally verbal” children with ASDs attend two “play-based” sessions per week, aimed at boosting their ability to speak and gesture. Half of the kids were also given a “speech-generating device,” like an iPad.

Three and six months later, children who worked with the devices were able to say more words, and were quicker to pick up conversational skills, than children who did not.

Dawson said the robot and iPad studies are just part of the growing body of research into how technology can not only aid in ASD therapies, but also help doctors diagnose the disorders or help parents manage at home.

But both Diehl and Dawson stressed that no robot or iPad is intended to stand in for human connection. The idea, after all, is to enhance kids’ ability to communicate and have relationships, Dawson noted. “Technology will never take the place of people,” she said.

The data and conclusions of research presented at meetings should be viewed as preliminary until published in a peer-reviewed journal.

(Source: webmd.com)

Filed under ASD autism humanoid robots robots robotics communication social skills neuroscience psychology science

New hope for autistic children who never learn to speak

An Autistica consultation published this month found that 24% of children with autism were non-verbal or minimally verbal, and it is known that these problems can persist into adulthood. Professionals have long attempted to support the development of language in these children but with mixed outcomes. An estimated 600,000 people in the UK and 70 million worldwide have autism, a neuro-developmental condition which is life-long.

Today, scientists at the University of Birmingham publish a paper in Frontiers in Neuroscience showing that while not all of the current interventions used are effective, there is real hope for progress by using interventions based on understanding natural language development and the role of motor and “motor mirroring” behaviour in toddlers.

The researchers, led by Dr Joe McCleery, who is supported by autism research charity Autistica, examined over 200 published papers and more than 60 different intervention studies, and found that:

  • Motor behaviours, such as banging toys and copying gestures or facial expressions (“mirroring”), play a key role in the learning of language.
  • Children with autism show specific motor impairments, and less “mirroring” brain activity, particularly in relation to strangers in whom they show very little interest. This finding may hold the key to language problems overall.
  • Despite extensive use of sign language training to improve speech and communication skills in non-verbal children with autism, there is very little evidence that it makes a positive impact, potentially due to the impairments in motor behaviours and mirroring.
  • Picture exchange training can lead to improvements in speech. Here, children gradually learn to “ask” for things by exchanging pictures. This may work well because it does not depend on complex motor skills or mirroring.
  • Play-based approaches which employ explicit teaching strategies and are developmentally based are particularly successful.
  • New studies involving a focus on motor skills alongside speech and language intervention are showing promising preliminary results. This is exciting because these interventions utilise our new understanding of the role of motor behaviours in the development of speech and social interaction.

With the support of Autistica, the UK’s leading autism research charity, Dr McCleery’s team have now embarked on new work which builds on these findings to design interventions which specifically target the aspects of development where there are deficits in non-verbal autistic children.

Dr McCleery says: “We feel that the field is approaching a turning point, with potentially dramatic breakthroughs to come in both our understanding of communication difficulties in people with autism, and the potential ways we can intervene to make a real difference for those children who are having difficulties learning to speak.”

Christine Swabey, CEO of Autistica, says: “80% of the parents in our recent consultation wanted interventions straight after diagnosis. Dr McCleery’s work shows how critical it is for all intervention to be evidence-based, and that the best approaches are based on a real understanding of the development of difficulties in autism. We are proud to be supporting the next steps in this vital research which will improve the quality of life for people with autism.”

Alison Hardy, whose son Alfie is six, says: “As a parent of an autistic child, who is non-verbal, I feel quite vulnerable. People are always saying “try this, it worked wonders for us”. But you can’t try everything. We need a proper, scientific evidence base for what works and what does not. Then we can focus our time and our effort, with some confidence that we have a chance of helping our children. The publication of this research is an exciting step in giving us that confidence; it is great that Autistica is supporting this vital work.”

(Source: eurekalert.org)

Filed under autism communication language motor development intervention neuroscience science

Children of Blind Mothers Learn New Modes of Communication

A loving gaze helps firm up the bond between parent and child, building social skills that last a lifetime. But what happens when mom is blind? A new study shows that the children of sightless mothers develop healthy communication skills and can even outstrip the children of parents with normal vision.

Eye contact is one of the most important aspects of communication, according to Atsushi Senju, a developmental cognitive neuroscientist at Birkbeck, University of London. Autistic people don’t naturally make eye contact, however, and they can become anxious when urged to do so. Children for whom face-to-face contact is drastically reduced—babies severely neglected in orphanages or children who are born blind—are more likely to have traits of autism, such as the inability to form attachments, hyperactivity, and cognitive impairment.

To determine whether eye contact is essential for developing normal communication skills, Senju and colleagues chose a less extreme example: babies whose primary caregivers (their mothers) were blind. These children had other forms of loving interaction, such as touching and talking. But the mothers were unable to follow the babies’ gaze or teach the babies to follow theirs, which normally helps children learn the importance of the eyes in communication.

Apparently, the children don’t need the help. Senju and colleagues studied five babies born to blind mothers, checking the children’s proficiency at 6 to 10 months, 12 to 15 months, and 24 to 47 months on several measures of age-appropriate communications skills. At the first two visits, babies watched videos in which a woman shifted her gaze or moved different parts of her face while corresponding changes in the baby’s face were recorded. Babies also followed the gaze of a woman sitting at a table and looking at various objects.

The babies also played with unfamiliar adults in a test that checked for autistic traits, such as the inability to maintain eye contact, not smiling in response to the adult’s smile, and being unable to switch attention from one toy to a new one. At each age, the researchers assessed the children’s visual, motor, and language skills.

When the results were compared to scores of children of “sighted” parents, the five children of blind mothers did just as well on the tests, the researchers report today in the Proceedings of the Royal Society B. Learning to communicate with their blind mothers also seemed to give the babies some advantages. For example, even at the youngest age tested, the babies directed fewer gazes toward their mothers than to adults with normal vision, suggesting that they were already learning that strangers would communicate differently than would their mothers. When they were between 12 and 15 months old, the babies of blind mothers were also more verbal than were other children of the same age. And the youngest babies of blind mothers outscored their peers in developmental tests—especially visual tasks such as remembering the location of a hidden toy or switching their attention from one toy to a new one presented by the experimenter.

Senju likens their skills to those of children who grow up bilingual; the need to shift between modes of communication may boost the development of their social skills, he says. “Our results suggest that the babies aren’t passively copying the expressions of adults, but that they are actively learning and changing the way to best communicate with others.”

"The use of sighted babies of blind mothers is a clever and important idea," says developmental scientist Andrew Meltzoff of the University of Washington’s Institute for Learning and Brain Sciences in Seattle. "The mother’s blindness may teach a child at an early age that certain people turn to look at things and others don’t. Apparently these little babies can learn that not everyone reacts the same way."

Meltzoff adds that there are many ways to pay attention to a child. “Doubtless, the blind mothers use touch, sounds, tugs on the arm, and tender pats on the back. Our babies want communication, love, and attention. The fact that these can come through any route is a remarkable demonstration of the adaptability of the human child.”

Filed under eye contact infants communication social skills autistic traits vision child development psychology neuroscience science

Rare primate’s vocal lip-smacks share features of human speech

The vocal lip-smacks that geladas use in friendly encounters have surprising similarities to human speech, according to a study reported in the Cell Press journal Current Biology on April 8th. The geladas, which live only in the remote mountains of Ethiopia, are the only nonhuman primate known to communicate with such a speech-like, undulating rhythm. Calls of other monkeys and apes are typically one or two syllables and lack those rapid fluctuations in pitch and volume.

This new evidence lends support to the idea that lip-smacking, a behavior that many primates show during amiable interactions, could have been an evolutionary step toward human speech.

"Our finding provides support for the lip-smacking origins of speech because it shows that this evolutionary pathway is at least plausible," said Thore Bergman of the University of Michigan in Ann Arbor. "It demonstrates that nonhuman primates can vocalize while lip-smacking to produce speech-like sounds."

Bergman first began to wonder about the geladas’ sounds when he began his fieldwork in 2006. “I would find myself frequently looking over my shoulder to see who was talking to me, but it was just the geladas,” he recalled. “It was unnerving to have primate vocalizations sound so much like human voices.”

That was something that he had never experienced in the company of other primates. Then Bergman came across a paper in Current Biology last year proposing vocalization while lip-smacking as a possible first step to human speech, and something clicked.

Bergman has now analyzed recordings of the geladas’ vocalizations, known as “wobbles,” and found a rhythm that closely matches human speech. In other words, because they vocalize while lip-smacking, the pattern of sound produced is structurally similar to human speech.

In both lip-smacking and speech, the rhythm corresponds to the opening and closing of parts of the mouth. What’s more, Bergman said, lip-smacking might serve the same purpose as language in many basic human interactions—think of how friends bond through small talk.

"Language is not just a great tool for exchanging information; it has a social function," Bergman said. "Many verbal exchanges appear to serve a function similar to lip-smacking."

Filed under primates geladas communication speech vocalization neuroscience science

Songbirds’ brains coordinate singing with intricate timing

As a bird sings, some neurons in its brain prepare to make the next sounds while others are synchronized with the current notes—a coordination of physical actions and brain activity that is needed to produce complex movements, new research at the University of Chicago shows.

In an article in the current issue of Nature, neuroscientist Daniel Margoliash and colleagues show, for the first time, how the brain is organized to govern skilled performance—a finding that may lead to new ways of understanding human speech production.

The new study shows that birds’ physical movements actually are made up of a multitude of smaller actions. “It is amazing that such small units of movements are encoded, and so precisely, at the level of the forebrain,” said Margoliash, a professor of organismal biology and anatomy and psychology at UChicago.

“This work provides new insight into how the physics of controlling vocal signals are represented in the brain to control vocalizations,” said Howard Nusbaum, a professor of psychology at UChicago and an expert on speech.

By decoding the neural representation of communication, Nusbaum explained, the research may shed light on speech problems such as stuttering or aphasia (a disorder following a stroke). And it offers an unusual window into how the brain and body carry out other kinds of complex movement, from throwing a ball to doing a backflip.

“A big question in muscle control is how the motor system organizes the dynamics of movement,” said Margoliash. “Movements like reaching or grasping are difficult to study because they entail many variables, such as the angles of the shoulder, elbow, wrist and fingers; the forces of many muscles; and how these change over time,” he said.

"With all this complexity, it has been difficult to determine which of the many variables that describe movements are represented in the brain, and which of those are used to control movements," he said.

"It’s difficult to find a natural framework with which to analyze the activity of single neurons. The bird study provided us a perfect opportunity,” Margoliash said. Margoliash is a pioneer in the study of brain function in birds, with studies that include how learning occurs when a bird sleeps and recalls singing a song.

Filed under songbirds brain activity vocalizations communication motor system speech production neuroscience science

How human language could have evolved from birdsong

Linguistics and biology researchers propose a new theory on the deep roots of human speech.

“The sounds uttered by birds offer in several respects the nearest analogy to language,” Charles Darwin wrote in “The Descent of Man” (1871), while contemplating how humans learned to speak. Language, he speculated, might have had its origins in singing, which “might have given rise to words expressive of various complex emotions.”

Now researchers from MIT, along with a scholar from the University of Tokyo, say that Darwin was on the right path. The balance of evidence, they believe, suggests that human language is a grafting of two communication forms found elsewhere in the animal kingdom: first, the elaborate songs of birds, and second, the more utilitarian, information-bearing types of expression seen in a diversity of other animals.

“It’s this adventitious combination that triggered human language,” says Shigeru Miyagawa, a professor of linguistics in MIT’s Department of Linguistics and Philosophy, and co-author of a new paper published in the journal Frontiers in Psychology.

The idea builds upon Miyagawa’s conclusion, detailed in his previous work, that there are two “layers” in all human languages: an “expression” layer, which involves the changeable organization of sentences, and a “lexical” layer, which relates to the core content of a sentence. His conclusion is based on earlier work by linguists including Noam Chomsky, Kenneth Hale and Samuel Jay Keyser.

Based on an analysis of animal communication, and using Miyagawa’s framework, the authors say that birdsong closely resembles the expression layer of human sentences — whereas the communicative waggles of bees, or the short, audible messages of primates, are more like the lexical layer. At some point, between 50,000 and 80,000 years ago, humans may have merged these two types of expression into a uniquely sophisticated form of language.

“There were these two pre-existing systems,” Miyagawa says, “like apples and oranges that just happened to be put together.”

These kinds of adaptations of existing structures are common in natural history, notes Robert Berwick, a co-author of the paper, who is a professor of computational linguistics in MIT’s Laboratory for Information and Decision Systems, in the Department of Electrical Engineering and Computer Science.

“When something new evolves, it is often built out of old parts,” Berwick says. “We see this over and over again in evolution. Old structures can change just a little bit, and acquire radically new functions.”

A new chapter in the songbook

The new paper, “The Emergence of Hierarchical Structure in Human Language,” was co-written by Miyagawa, Berwick and Kazuo Okanoya, a biopsychologist at the University of Tokyo who is an expert on animal communication.

To consider the difference between the expression layer and the lexical layer, take a simple sentence: “Todd saw a condor.” We can easily create variations of this, such as, “When did Todd see a condor?” This rearranging of elements takes place in the expression layer and allows us to add complexity and ask questions. But the lexical layer remains the same, since it involves the same core elements: the subject, “Todd,” the verb, “to see,” and the object, “condor.”
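As a rough illustration (not from the paper, and purely a toy sketch), the distinction between the two layers can be modeled in a few lines of Python: the lexical layer is a fixed set of core elements, while the expression layer is a set of rearrangements over those same elements.

```python
# Toy model of Miyagawa's two-layer idea. The lexical layer holds the
# unchanging core elements of the sentence; each function below is one
# "expression layer" arrangement of that same core.

lexical = {"subject": "Todd", "verb": "see", "object": "condor"}


def declarative(lex):
    # Statement form: "Todd saw a condor."
    return f"{lex['subject']} saw a {lex['object']}."


def question(lex):
    # Question form rearranges the same core: "When did Todd see a condor?"
    return f"When did {lex['subject']} {lex['verb']} a {lex['object']}?"


print(declarative(lexical))  # Todd saw a condor.
print(question(lexical))     # When did Todd see a condor?
```

Both surface forms draw on the identical lexical core; only the expression layer changes, which is the point of the “Todd saw a condor” example above.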

Birdsong lacks a lexical structure. Instead, birds sing learned melodies with what Berwick calls a “holistic” structure; the entire song has one meaning, whether about mating, territory or other things. The Bengalese finch, as the authors note, can loop back to parts of previous melodies, allowing for greater variation and communication of more things; a nightingale may be able to recite from 100 to 200 different melodies.

By contrast, other types of animals have bare-bones modes of expression without the same melodic capacity. Bees communicate visually, using precise waggles to indicate sources of food to their peers; other primates can make a range of sounds, including warnings about predators and other messages.

Humans, according to Miyagawa, Berwick and Okanoya, fruitfully combined these systems. We can communicate essential information, like bees or primates — but like birds, we also have a melodic capacity and an ability to recombine parts of our uttered language. For this reason, our finite vocabularies can generate a seemingly infinite string of words. Indeed, the researchers suggest that humans first had the ability to sing, as Darwin conjectured, and then managed to integrate specific lexical elements into those songs.

“It’s not a very long step to say that what got joined together was the ability to construct these complex patterns, like a song, but with words,” Berwick says.

As they note in the paper, some of the “striking parallels” between language acquisition in birds and humans include the phase of life when each is best at picking up languages, and the part of the brain used for language. Another similarity, Berwick notes, relates to an insight of celebrated MIT professor emeritus of linguistics Morris Halle, who, as Berwick puts it, observed that “all human languages have a finite number of stress patterns, a certain number of beat patterns. Well, in birdsong, there is also this limited number of beat patterns.”

Birds and bees

Norbert Hornstein, a professor of linguistics at the University of Maryland, says the paper has been “very well received” among linguists, and “perhaps will be the standard go-to paper for language-birdsong comparison for the next five years.”

Hornstein adds that he would like to see further comparison of birdsong and sound production in human language, as well as more neuroscientific research, pertaining to both birds and humans, to see how brains are structured for making sounds.

The researchers acknowledge that further empirical studies on the subject would be desirable.

“It’s just a hypothesis,” Berwick says. “But it’s a way to make explicit what Darwin was talking about very vaguely, because we know more about language now.”

Miyagawa, for his part, asserts it is a viable idea in part because it could be subject to more scrutiny, as the communication patterns of other species are examined in further detail. “If this is right, then human language has a precursor in nature, in evolution, that we can actually test today,” he says, adding that bees, birds and other primates could all be sources of further research insight.

MIT-based research in linguistics has largely been characterized by the search for universal aspects of all human languages. With this paper, Miyagawa, Berwick and Okanoya hope to spur others to think of the universality of language in evolutionary terms. It is not just a random cultural construct, they say, but based in part on capacities humans share with other species. At the same time, Miyagawa notes, human language is unique, in that two independent systems in nature merged, in our species, to allow us to generate unbounded linguistic possibilities, albeit within a constrained system.

“Human language is not just freeform, but it is rule-based,” Miyagawa says. “If we are right, human language has a very heavy constraint on what it can and cannot do, based on its antecedents in nature.”

(Source: web.mit.edu)

Filed under brain evolution linguistics communication language birdsong neuroscience science

121 notes

Fear, anger or pain. Why do babies cry?
Spanish researchers have studied adults’ accuracy in the recognition of the emotion causing babies to cry. Eye movement and the dynamic of the cry play a key role in recognition.
It is not easy to know why a newborn cries, especially amongst first-time parents. Although the main reasons are hunger, pain, anger and fear, adults cannot easily recognise which emotion is the cause of the tears.
"Crying is a baby’s principal means of communicating its negative emotions and in the majority of cases the only way they have to express them," as explained to SINC by Mariano Chóliz, researcher at the University of Valencia.
Chóliz participates in a study along with experts from the University of Murcia and the National University of Distance Education (UNED) which describes the differences in the weeping pattern in a sample of 20 babies between 3 and 18 months caused by the three characteristic emotions: fear, anger and pain.
In addition, the team assessed how accurately adults recognise the emotion that causes babies to cry, analysing observers' affective reactions to the sobbing.
According to the results published recently in the ‘Spanish Journal of Psychology’, the main differences manifest in eye activity and the dynamics of the cry.
"When babies cry because of anger or fear, they keep their eyes open but keep them closed when crying in pain," states the researcher.
As for the dynamics of the cry, both the gestures and the intensity of the cry gradually increase when the baby is angry. In the case of pain and fear, by contrast, the cry erupts at maximum intensity from the start.
Adults, however, do not reliably identify which emotion is causing the cry, especially in the case of anger and fear.
Nonetheless, “although the observers cannot recognise the cause properly, when babies cry because they are in pain, this causes a more intense affective reaction than when they cry because of anger or fear,” outlines Chóliz.
For the experts, the fact that pain is the most easily recognisable emotion can have an adaptive explanation, since crying is a warning of a potentially serious threat to health or survival and thus requires the carer to respond urgently.
Anger, fear and pain
When a baby cries, facial muscle activity is characterised by lots of tension in the forehead, eyebrows or lips, opening of the mouth and raised cheeks. The researchers observed different patterns between the three negative emotions.
As Chóliz notes, when angry the majority of babies keep their eyes half-closed, either gazing in no apparent direction or in a fixed, intent manner. Their mouth is open or half-open and the intensity of their cry increases progressively.
In the case of fear, the eyes remain open almost all the time. Furthermore, at times the infants have a penetrating look and move their head backwards. Their cry seems to be explosive after a gradual increase in tension.
Lastly, pain manifests as eyes that remain closed almost constantly; when they do open, it is only for a few moments, with a distant look. In addition, there is a high level of tension in the eye area and the brow stays furrowed. The cry starts suddenly at maximum intensity, immediately after the stimulus.
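The cues above almost read as a decision rule. As a purely illustrative sketch (not the study's method; the feature names are invented here), they could be written as:

```python
# Rule-of-thumb mapping from the reported cues to an emotion.
# Purely illustrative: feature names are invented, and the study
# itself found that adults often misread anger and fear.

def guess_cry_emotion(eyes_mostly_open, intensity_builds_gradually):
    if not eyes_mostly_open:
        return "pain"    # eyes kept closed, cry starts at full force
    if intensity_builds_gradually:
        return "anger"   # eyes open, cry ramps up progressively
    return "fear"        # eyes open, explosive cry after rising tension

print(guess_cry_emotion(False, False))  # pain
print(guess_cry_emotion(True, True))    # anger
print(guess_cry_emotion(True, False))   # fear
```

That the eyes-closed branch is the most clear-cut matches the finding that pain is the emotion adults recognise most easily.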

Filed under infants emotions emotional response cry communication eye activity psychology neuroscience science

409 notes

Why Do Humans Cry? Scientist Says Tears Served as a Means of Communication Before the Evolution of Language
Leading expert in neurology Michael Trimble, British professor at the Institute of Neurology in London, says that there must have been a time in human evolution when tears represented something greater than their simple function of lubricating the eye.
In his new book, Why Humans Like To Cry, Trimble tries to explain the mystery of why humans are the only species in the animal kingdom to shed tears in response to an emotional state, examining the physiology and the evolutionary past of emotional crying.
Trimble explains that biologically, tears are important to protect the eye.  They keep the eyeball moist, flush out irritants and contain certain proteins and substances that keep the eye healthy and fight infections. He explains that in every other animal on planet Earth, tears seem to only serve these biological purposes.
However, in humans, crying or sobbing, bawling or weeping seems to serve another purpose: communicating emotion. Humans cry for many reasons - out of joy, grief, anger, relief and a variety of other emotions. Most frequently, though, our tears are shed out of sadness. Trimble said that it was this specific communicative nature of human crying that piqued his interest.
"Humans cry for many reasons," he told Scientific American. "But crying for emotional reasons and crying in response to aesthetic experiences are unique to us."
"The former is most associated with loss and bereavement, and the art forms that are most associated with tears are music, literature and poetry," he said. "There are very few people who cry looking at paintings, sculptures or lovely buildings. But we also have tears of joy, the associated feelings of which last a shorter time than crying in the other circumstances."

Filed under crying communication evolution emotional response emotion psychology neuroscience science

59 notes

In Alzheimer’s Disease, Maintaining Connection and ‘Saving Face’
I’ve decided that all older men with gray beards must look alike, because each week I am mistaken for someone else. But, if I were to shave my beard - which I have worn for over 40 years - I believe that my friends and colleagues would fail to recognize me. I would be a different person to them because of this small, physical change.
If such a small change affects the way people see me, then the larger mental changes that Alzheimer’s patients experience must truly and deeply change the way their loved ones see them. Dr. Daniel Potts, a neurologist at the University of Alabama, has begun studying the concept of “saving face” and preserving the “person” in people with dementia.
Dr. Potts’ father, Lester Potts, became an acclaimed watercolor artist after his Alzheimer’s diagnosis. He had lost his verbal abilities but could express his feelings through his art. This bolstered his retention of self-worth and dignity. His paintbrush let him bypass the part of his brain that Alzheimer’s blocked, and communicate in a new way.
But before we find out more about art and Alzheimer’s patients, let’s go back to the “face” part of saving face for just a moment.

(Source: The Atlantic)

Filed under alzheimer's disease dementia communication neuroscience psychology science
