Posts tagged vocalization

Pair bonding reinforced in the brain
In addition to their song, songbirds also have an extensive repertoire of calls. While the species-specific song must be learned as a young bird, most calls are, as in the case of all other birds, innate. Researchers at the Max Planck Institute in Seewiesen have now discovered that in zebra finches the song control system in the brain is also active during simple communication calls. This relationship between unlearned calls and an area of the brain responsible for learned vocalisations is important for understanding the evolution of song learning in songbirds.
Almost half of all bird species are songbirds. Only they have the ability to learn the complicated vocal patterns generally described as song. Several studies show that the songs of songbirds serve mainly to attract a partner and to defend a territory. In the temperate zones of the Northern hemisphere, usually only the male birds sing.
However, all birds, both male and female, have calls – including species such as the zebra finch, where the female never sings. With few exceptions, these calls do not have to be learned and are used for communication. Most are tied to a specific purpose, as with alarm calls and contact calls. The songbird’s song is of great interest to neurobiologists because it is controlled by a network of nuclei in the forebrain. Neuroscientists study this network to investigate general rules that determine how the brain controls behaviour.
Using specially designed methods to record song and brain activity, a team of researchers at the Max Planck Institute for Ornithology in Seewiesen has now found the neuronal basis of unlearned call communication. The researchers developed ultra-light microphone transmitters which they attached with rubber bands to the backs of zebra finch couples like rucksacks. They also attached a wireless recording system to the males to measure brain activity.
Thanks to this miniature telemetry technology, the animals could move freely in groups in large aviaries so that the scientists were able to continuously register the animals’ entire behavioural repertoire. In their experiment, the researchers concentrated on so-called “stack” calls. They discovered that these calls mainly promote cohesion between males and females within bonded pairs. “Constant contact with a partner is important, as the zebra finches live in large social groups,” says Lisa Trost, co-author of the study.
Surprisingly, not every call produces an answer in the partner, which initially presented the researchers with a problem during the analysis. They determined that a call from a partner only qualifies as an answer if it is made within two seconds. “We were thus able to create a matrix that clearly showed that almost without exception the two partners exchange calls with one another, which underlines the important social component of this ‘stack’ call,” says Andries Ter Maat, lead author of the study.
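The two-second answer rule described above can be sketched in a few lines of code. Everything here is illustrative – the timestamps, function name, and pairing logic are assumptions for the example, not the study’s actual analysis:

```python
# Sketch of the call-answer pairing rule: a partner's call counts as an
# answer only if it follows the caller's call within a fixed time window.

ANSWER_WINDOW = 2.0  # seconds

def count_answers(caller_times, partner_times, window=ANSWER_WINDOW):
    """Count how many of the caller's calls were answered by a partner
    call occurring within `window` seconds."""
    answered = 0
    for t in caller_times:
        if any(t < p <= t + window for p in partner_times):
            answered += 1
    return answered

# Hypothetical timestamps (in seconds) of "stack" calls from a bonded pair
male_calls = [1.0, 5.2, 9.8, 14.1]
female_calls = [1.7, 6.0, 20.0]

print(count_answers(male_calls, female_calls))  # male calls answered by female: 2
print(count_answers(female_calls, male_calls))  # female calls answered by male: 0
```

Running this rule over all pairs of birds in the aviary would yield the kind of caller-by-answerer matrix the researchers describe, showing which individuals exchange calls with which.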
When the researchers analysed the activity in an area of the brain that is important for the production of song – an area known as nucleus RA – they found a clear correlation between its activity pattern and the occurrence of the “stack” call. “This connection between an innate call and the activity of a brain area important to learned vocalisations suggests that during the evolution of songbirds, the role of the song area in the brain changed from being a simple vocalisation system for innate calls to a specialised neural network for learned songs,” concludes Manfred Gahr, coordinator of the study.

Rare primate’s vocal lip-smacks share features of human speech
The vocal lip-smacks that geladas use in friendly encounters have surprising similarities to human speech, according to a study reported in the Cell Press journal Current Biology on April 8th. The geladas, which live only in the remote mountains of Ethiopia, are the only nonhuman primate known to communicate with such a speech-like, undulating rhythm. Calls of other monkeys and apes are typically one or two syllables and lack those rapid fluctuations in pitch and volume.
This new evidence lends support to the idea that lip-smacking, a behavior that many primates show during amiable interactions, could have been an evolutionary step toward human speech.
"Our finding provides support for the lip-smacking origins of speech because it shows that this evolutionary pathway is at least plausible," said Thore Bergman of the University of Michigan in Ann Arbor. "It demonstrates that nonhuman primates can vocalize while lip-smacking to produce speech-like sounds."
Bergman first began to wonder about the geladas’ sounds when he began his fieldwork in 2006. “I would find myself frequently looking over my shoulder to see who was talking to me, but it was just the geladas,” he recalled. “It was unnerving to have primate vocalizations sound so much like human voices.”
That was something that he had never experienced in the company of other primates. Then Bergman came across a paper in Current Biology last year proposing vocalization while lip-smacking as a possible first step to human speech, and something clicked.
Bergman has now analyzed recordings of the geladas’ vocalizations, known as “wobbles,” to find a rhythm that closely matches human speech. In other words, because they vocalize while lip-smacking, the pattern of sound produced is structurally similar to human speech.
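One simple way such a rhythm could be measured (an illustrative sketch only, not the study’s actual method) is to take the dominant frequency of a recording’s amplitude envelope. The synthetic signal below stands in for a “wobble”: a tone amplitude-modulated at 5 Hz, roughly the syllable rate of human speech:

```python
# Estimate the dominant modulation rhythm of a signal from the spectrum
# of its rectified amplitude envelope. Purely illustrative.
import numpy as np

def dominant_rhythm_hz(signal, sample_rate):
    """Return the strongest frequency (Hz) in the signal's amplitude
    envelope, ignoring the DC component."""
    envelope = np.abs(signal)
    spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

sr = 8000  # samples per second
t = np.arange(0, 2.0, 1.0 / sr)
# A 1 kHz tone whose loudness rises and falls 5 times per second
wobble = (1 + np.sin(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 1000 * t)

print(dominant_rhythm_hz(wobble, sr))  # ~5.0 Hz
```

A rhythm in the low single digits of hertz, like the 5 Hz here, is in the range typical of both human syllable rates and primate lip-smacking.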
In both lip-smacking and speech, the rhythm corresponds to the opening and closing of parts of the mouth. What’s more, Bergman said, lip-smacking might serve the same purpose as language in many basic human interactions—think of how friends bond through small talk.
"Language is not just a great tool for exchanging information; it has a social function," Bergman said. "Many verbal exchanges appear to serve a function similar to lip-smacking."

Innate ability to vocalize: Deaf or not, courting male mice make same sounds
Scientists have long thought mice might be a model for how humans learn to vocalize. But new research led by scientists at Washington State University Vancouver has found that, unlike humans and songbirds, mice do not learn to vocalize.
The results, published in the Journal of Neuroscience, point the way to a more finely focused, genetic tool for teasing out the mysteries of speech and its disorders.
To see if mice learn to vocalize, WSU neurophysiologist Christine Portfors destroyed the ear hair cells in more than a dozen newborn male mice. The cells convert sound waves into electrical signals processed by the brain, making hearing possible.
The deaf mice were then raised with hearing mice in a normal social environment.
Portfors and her fellow researchers, including WSU graduate student Elena Mahrt, used males because they are particularly exuberant vocalizers in the presence of females.
"We can elicit vocalization behavior in males really easily by just putting them with a female,” Portfors said. "They vocalize like crazy.”
And it turned out that it didn’t matter if the mouse was deaf or not. The researchers catalogued essentially the same suite of ultrasonic sounds from both the deaf and hearing mice. “It means that they don’t need to hear to be able to produce their sounds, their vocalizations,” Portfors said. “Basically, they don’t need to hear themselves. They don’t need auditory feedback. They don’t need to learn.”
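A repertoire comparison of this kind can be sketched as follows, using made-up syllable labels and a simple distribution-overlap score (an assumption for illustration, not the researchers’ actual catalogue or code):

```python
# Compare two call repertoires by the overlap of their syllable-type
# frequency distributions (1.0 means identical distributions).
from collections import Counter

def call_distribution(calls):
    """Relative frequency of each syllable type in a list of call labels."""
    counts = Counter(calls)
    total = sum(counts.values())
    return {syllable: n / total for syllable, n in counts.items()}

def repertoire_overlap(calls_a, calls_b):
    """Shared fraction of two call-type distributions."""
    da, db = call_distribution(calls_a), call_distribution(calls_b)
    return sum(min(da.get(s, 0.0), db.get(s, 0.0)) for s in set(da) | set(db))

# Hypothetical syllable labels from hearing and deaf males
hearing = ["upsweep", "downsweep", "flat", "upsweep", "chevron", "upsweep"]
deaf = ["upsweep", "flat", "chevron", "upsweep", "downsweep", "upsweep"]

print(repertoire_overlap(hearing, deaf))  # 1.0: same repertoire despite deafness
```

An overlap near 1.0, as in this toy example, is what "essentially the same suite of ultrasonic sounds" would look like in such a comparison.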
The finding means mice are out as a model to study vocal learning. However, scientists can now focus on the mouse to learn the genetic mechanism behind communication disorders.
"If you don’t have learning as a variable, you can look at the genetic control of these things,” Portfors said. "You can look at the genetic control of the output of the signal. It’s not messed up by an animal that’s been in a particular learning situation.”

Language Protein Differs in Males, Females
Male rat pups have more of a specific brain protein associated with language development than females, according to a study published February 20 in The Journal of Neuroscience. The study also found sex differences in the brain protein in a small group of children. The findings may shed light on sex differences in communication in animals and language acquisition in people.
Sex differences in early language acquisition and development in children are well documented — on average, girls tend to speak earlier and with greater complexity than boys of the same age. However, scientists continue to debate the origin and significance of such differences. Previous studies showed the Foxp2 protein plays an important role in speech and language development in humans and vocal communication in birds and other mammals.
In the current study, J. Michael Bowers, PhD, Margaret McCarthy, PhD, and colleagues at the University of Maryland School of Medicine examined whether sex differences in the expression of the Foxp2 protein in the developing brain might underlie communication differences between the sexes.
The researchers analyzed the levels of Foxp2 protein in the brains of four-day-old female and male rats and compared the ultrasonic distress calls made by the animals when separated from their mothers and siblings. Compared with females, males had more of the protein in brain areas associated with cognition, emotion, and vocalization. They also made more noise than females — producing nearly double the total vocalizations over the five-minute separation period — and were preferentially retrieved and returned to the nest first by the mother.
When the researchers reduced levels of the Foxp2 protein in the male pups and increased it in female pups, they reversed the sex difference in the distress calls, causing males to sound like females and the females like males. This change led the mother to reverse her behavior as well, preferentially retrieving the females over the males.
“This study is one of the first to report a sex difference in the expression of a language-associated protein in humans or animals,” McCarthy said. “The findings raise the possibility that sex differences in brain and behavior are more pervasive and established earlier than previously appreciated.”
The researchers extended their findings to humans in a preliminary study of Foxp2 protein in a small group of children. Unlike the rats, in which Foxp2 protein was elevated in males, they found that in humans, the girls had more of the Foxp2 protein in the cortex — a brain region associated with language — than age-matched boys.
“At first glance, one might conclude that the findings in rats don’t generalize to humans, but the higher levels of Foxp2 expression are found in the more communicative sex in each species,” noted Cheryl Sisk, who studies sex differences at Michigan State University and was not involved with the study.

The question ‘How do songbirds sing?’ is addressed in a study published in BioMed Central’s open access journal BMC Biology. High-field magnetic resonance imaging and micro-computed tomography have been used to construct stunning high-resolution 3D images, as well as a “morphome” data set, of the vocal organ of the zebra finch (Taeniopygia guttata): the syrinx.
Like humans, songbirds learn their vocalizations by imitation. Since their songs are used for finding a mate and retaining territories, birdsong is very important for reproductive success.
The syrinx, located at the point where the trachea splits in two to send air to the lungs, is unique to birds and performs the same function as vocal cords in humans. Birds can have such a complete control over the syrinx, with sub-millisecond precision, that in some cases they are even able to mimic human speech.
Despite great inroads in uncovering the neural control of birdsong, the anatomy of the complex physical structures that generate sound has been less well understood.
The multinational team has generated interactive 3D PDF models of the syringeal skeleton, soft tissues, cartilaginous pads, and the muscles affecting sound production. These models show in detail the delicate balance between strength and lightness of the bones and cartilage required to support and alter the vibrating membranes of the syrinx at superfast speeds.
Dr Coen Elemans, from the University of Southern Denmark, who led this study, explained, “This study provides the basis to analyze the micromechanics, and exact neural and muscular control of the syrinx. For example, we describe a cartilaginous structure which may allow the zebra finch to precisely control its songs by uncoupling sound frequency and volume.” In addition, the researchers found a previously unrecognized Y-shaped structure on the sternum which corresponds to the shape of the syrinx and could help stabilize sound production.

An elephant that speaks Korean
An Asian elephant named Koshik can imitate human speech, speaking words in Korean that can be readily understood by those who know the language. The elephant accomplishes this in a most unusual way: he vocalizes with his trunk in his mouth.
The elephant’s vocabulary consists of exactly five words, researchers report on November 1 in Current Biology, a Cell Press publication. They are “annyong” (“hello”), “anja” (“sit down”), “aniya” (“no”), “nuo” (“lie down”), and “choah” (“good”). Ultimately, Koshik’s language skills may provide important insights into the biology and evolution of complex vocal learning, an ability that is critical for human speech and music, the researchers say.
"Human speech basically has two important aspects, pitch and timbre," says Angela Stoeger of the University of Vienna. "Intriguingly, the elephant Koshik is capable of matching both pitch and timbre patterns: he accurately imitates human formants as well as the voice pitch of his trainers. This is remarkable considering the huge size, the long vocal tract, and other anatomical differences between an elephant and a human."

Baby songbirds learn to sing by imitation, just as human babies do. So researchers at Harvard and Utrecht University, in the Netherlands, have been studying the brains of zebra finches—red-beaked, white-breasted songbirds—for clues to how young birds and human infants learn vocalization on a neuronal level.
While a baby bird mimicking the chirps of his “tutor” may seem far removed from human learning, the researchers at the two universities found that the songs of the birds and human language are both processed in similar areas on the left sides of the two very different brains. The discovery was published last month in the Proceedings of the National Academy of Sciences.