Posts tagged vocal learning

Why don’t apes have musical talent, while humans, parrots, small birds, elephants, whales, and bats do? Matz Larsson, senior physician at the Lung Clinic at Örebro University Hospital, attempts to answer this question in the scientific publication Animal Cognition.
In his article, he argues that the ability to imitate sounds such as music and speech arose because synchronised group movement quite simply makes it easier to perceive sounds from the surroundings.
The hypothesis is that the evolution of vocal learning, and with it musical traits, was shaped by a species’ need to cope with the disturbing sounds generated by its own locomotion. These sounds interfere with hearing only when we move.
“When several people with legs of roughly the same length move together, we tend to unconsciously move in rhythm. When our footsteps occur simultaneously, a brief interval of silence occurs. In the middle of each stride we can hear our surroundings better. It becomes easier to hear a pursuer, and perhaps easier to conduct a conversation as well,” explains Larsson.
A behaviour that has survival value tends to produce dopamine, the “reward molecule”. In dangerous terrain, this could stimulate rhythmic movements and enhanced listening to surrounding sounds in nature. If that kind of synchronised behaviour was rewarding in dangerous environments, it may well also have been rewarding for the brain in relative safety, resulting in activities such as hand-clapping, foot-stamping and yelping around the campfire. From there it is just a short step to dance and rhythm. Dopamine also flows when we listen to music.
USC scientists have discovered a population of neurons in the brains of juvenile songbirds that are necessary for allowing the birds to recognize the vocal sounds they are learning to imitate.

These neurons encode a memory of learned vocal sounds and form a crucial (and hitherto only theorized) part of the neural system that allows songbirds to hear, imitate and learn their species’ songs — just as human infants acquire speech sounds.
The discovery will allow scientists to uncover the exact neural mechanisms that allow songbirds to hear their own self-produced songs, compare them to the memory of the song that they are trying to imitate and then adjust their vocalizations accordingly.
Because this brain-behavior system is thought to be a model for how human infants learn to speak, understanding it could prove crucial to future understanding and treatment of language disorders in children. In both songbirds and humans, feedback of self-produced vocalizations is compared to memorized vocal sounds and progressively refined to achieve a correct imitation.
“Every neurodevelopmental disorder you can think of — including Tourette syndrome, autism and Rett syndrome — entails in some way a breakdown in auditory processing and vocal communication,” said Sarah Bottjer, senior author of an article on the research that appears in the Journal of Neuroscience on Sept. 4. “Understanding mechanisms of vocal learning at a cellular level is a huge step toward being able to someday address the biological issues behind the behavioral issues.”
Bottjer, professor of neurobiology at the USC Dornsife College of Letters, Arts and Sciences, collaborated with lead author Jennifer Achiro, a graduate student at USC, using electrodes to record the activity of individual neurons in songbirds’ brains.
In the basal ganglia — a complex system of neurons in the brain responsible for, among other things, procedural learning — Bottjer and Achiro were able to isolate two different types of neurons in young songbirds: ones that were activated only when the birds heard themselves singing and others that were activated only when the birds heard the songs of adult birds that they were trying to imitate.
The two sets of neurons allow the songbirds to recognize both their current behavior and a goal behavior that they would like to achieve.
“The process of learning speech requires the brain to compare feedback of current vocal behavior to a memory of target vocal sounds,” Achiro said. “The discovery of these two distinct populations of neurons means that this brain region contains separate neural representation of current and goal behaviors. Now, for the first time, we can test how these two neural representations are compared so that correct matches between the two are somehow rewarded.”
The next step for scientists will be to learn how the brain rewards correct matches between feedback of current vocal behavior and the goal memory that depicts memorized vocal sounds as songbirds make progress in bringing their current behavior closer to their goal behavior, Bottjer said.
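As an illustration only (the actual neural computation is exactly what remains to be worked out), the compare-and-reward loop described above can be caricatured in a few lines of Python, with songs represented as strings of syllable labels:

```python
import random

def similarity(current, target):
    """Fraction of syllable slots where the current song matches the target."""
    return sum(c == t for c, t in zip(current, target)) / len(target)

def practice(target, trials=500, seed=0):
    """Toy feedback loop: on each trial the bird tweaks one syllable at
    random, 'hears' the result, and keeps the change only if the match to
    the memorized target did not get worse (the rewarded comparison)."""
    rng = random.Random(seed)
    syllables = sorted(set(target))
    current = [rng.choice(syllables) for _ in target]
    best = similarity(current, target)
    for _ in range(trials):
        i = rng.randrange(len(current))
        old = current[i]
        current[i] = rng.choice(syllables)
        new = similarity(current, target)
        if new >= best:
            best = new          # reward: the babbled change is kept
        else:
            current[i] = old    # no reward: revert to the previous sound
    return "".join(current), best
```

With a target like "ABCABC", this toy learner converges to a perfect match within a few hundred trials; the real question the researchers pose is how the brain implements the comparison and the reward.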
(Source: news.usc.edu)
How Birds and Babies Learn to Talk
Few things are harder to study than human language. The brains of living humans can only be studied indirectly, and language, unlike vision, has no analogue in the animal world. Vision scientists can study sight in monkeys using techniques like single-neuron recording. But monkeys don’t talk.
However, in an article published in Nature, a group of researchers, including myself, detail a discovery in birdsong that may help lead to a revised understanding of an important aspect of human language development. Almost five years ago, I sent a piece of fan mail to Ofer Tchernichovski, who had just published an article showing that, in just three or four generations, songbirds raised in isolation often developed songs typical of their species. He invited me to visit his lab, a cramped space stuffed with several hundred birds residing in souped-up climate-controlled refrigerators. Dina Lipkind, at the time Tchernichovski’s post-doctoral student, explained a method she had developed for teaching zebra finches two songs. (Ordinarily, a zebra finch learns only one song in its lifetime.) She had discovered that by switching the song of a tutor bird at precisely the right moment, a juvenile bird could learn a second, new song after it had mastered the first one.
Thinking about bilingualism and some puzzles I had encountered in my own lab, I suggested that Lipkind’s method could be useful in casting light on the question of how a creature—any creature—learns to put linguistic elements together. We mapped out an experiment that day: birds would learn one “grammar” in which every phrase followed the form of ABCABC, and then we would switch things up, giving them a new target, ACBACB (the As, Bs, and Cs were certain stereotyped chirps and peeps).
The results were thrilling: most of the birds could accomplish the task. But it was clearly difficult—it took several weeks for them to learn the new grammar—and it was challenging in a particular way. While the birds showed no sign of needing to relearn individual sounds, the connections between individual syllables, known as “transitions,” proved incredibly difficult. The birds proceeded slowly and systematically, incrementally working out each transition (e.g., from C to B, and B to A). They could not freely move syllables around, and did not engage in trial and error, either. Instead, they undertook a systematic struggle to learn particular connections between specific, individual syllables. The moment they mastered the third transition of the sequence, they were able to produce the entire grammar. Never, to my knowledge, had the process of learning any sort of grammar been so precisely articulated.
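A small worked example shows why the switch was hard: the two grammars share every syllable but not a single transition. A sketch in Python (the syllable labels are the placeholders from the experiment, not actual finch sounds):

```python
def transitions(motif):
    """Ordered syllable pairs in a repeating motif, wrapping around:
    ABCABC repeats ABC, whose transitions are A->B, B->C, C->A."""
    return {(motif[i], motif[(i + 1) % len(motif)]) for i in range(len(motif))}

first = transitions("ABC")   # the grammar the birds mastered first
second = transitions("ACB")  # the new target grammar

print(sorted(first))         # the individual syllables are identical...
print(sorted(second))
print(first & second)        # ...but no transition carries over
```

Every pairwise connection has to be relearned from scratch, which matches the birds’ slow, transition-by-transition progress.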
We wrote up the results, but Nature declined to publish them. Then Dina and Ofer speculated that our findings might be more convincing if they were true for not only zebra finches (hardly the Einsteins of the bird world) but for other species as well. Ofer contacted a Japanese researcher, Kazuo Okanoya, who he thought might be able to gather data for Bengalese finches, which have a more complex grammar than zebra finches. Amazingly, the Bengalese finches followed almost exactly the same learning pattern as the zebra finches.
Then we decided to test our ideas about the incrementality of vocal learning in human infants, enlisting the help of a graduate student I had been working with at N.Y.U., Doug Bemis. Bemis and Lipkind analyzed an old, publicly available set of human-babbling data, drawn from the CHILDES database, in a new way. The literature said that in the later part of the first year of life, babies undergo a change from “reduplicated” babbling—repeating a syllable, like bababa—to “variegated” babbling—often switching between syllables, like babadaga. Our birdsong results led us to wonder whether such a change might be more piecemeal than is commonly presumed, and our examination of the data proved that, in fact, the change did not happen all at once. It was gradual, with new transitions worked out one by one; human babies were stymied in the same ways that the birds were. Nobody had ever really explained why babbling took so many months; our birdsong data has finally yielded a first clue.
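The babbling analysis can be sketched as a simple bigram count over syllable transcripts. The session strings below are invented toy data, not the CHILDES corpus, but they show the pattern: new transitions surfacing one pair at a time rather than all at once:

```python
def new_transitions_per_session(sessions):
    """For each recording session (a string of syllable labels), list the
    syllable-to-syllable transitions heard for the first time."""
    seen, firsts = set(), []
    for s in sessions:
        bigrams = {(s[i], s[i + 1]) for i in range(len(s) - 1)}
        firsts.append(sorted(bigrams - seen))
        seen |= bigrams
    return firsts

# Toy sessions: reduplicated babbling early on, variegated later.
sessions = ["bababa", "babada", "badaga", "gadaba"]
for week, novel in enumerate(new_transitions_per_session(sessions), start=1):
    print(week, novel)
```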
Today, almost five years after Lipkind and Tchernichovski began developing the methods that are at the paper’s core, the work is finally being published by Nature.
What we don’t yet know is whether the similarity between birds and babies stems from a fundamental similarity between species at the biological level. When two species do something in similar ways, it can be a matter of “homology,” a genuine lineage at the genetic level, or “analogy,” which is independent reinvention. It will likely be years before we know for sure, but there is reason to believe that our results are not purely an accident of independent invention. Some of the important genes in human vocal learning (including FOXP2, the gene thus far most decisively tied to human language) are also involved in avian vocal learning, as a new book, “Birdsong, Speech, and Language,” discusses at length.
Language will never be as easy to dissect as birdsong, but knowledge about one can inform knowledge about the other. Our brains didn’t evolve to be easily understood, but the fact that humans share so many genes with so many other species gives scientists a fighting chance.

Innate ability to vocalize: Deaf or not, courting male mice make same sounds
Scientists have long thought mice might be a model for how humans learn to vocalize. But new research led by scientists at Washington State University Vancouver has found that, unlike humans and songbirds, mice do not learn to vocalize.
The results, published in the Journal of Neuroscience, point the way to a more finely focused, genetic tool for teasing out the mysteries of speech and its disorders.
To see if mice learn to vocalize, WSU neurophysiologist Christine Portfors destroyed the ear hair cells in more than a dozen newborn male mice. The cells convert sound waves into electrical signals processed by the brain, making hearing possible.
The deaf mice were then raised with hearing mice in a normal social environment.
Portfors and her fellow researchers, including WSU graduate student Elena Mahrt, used males because they are particularly exuberant vocalizers in the presence of females.
“We can elicit vocalization behavior in males really easily by just putting them with a female,” Portfors said. “They vocalize like crazy.”
And it turned out that it didn’t matter if the mouse was deaf or not. The researchers catalogued essentially the same suite of ultrasonic sounds from both the deaf and hearing mice. “It means that they don’t need to hear to be able to produce their sounds, their vocalizations,” Portfors said. “Basically, they don’t need to hear themselves. They don’t need auditory feedback. They don’t need to learn.”
The finding means mice are out as a model to study vocal learning. However, scientists can now focus on the mouse to learn the genetic mechanism behind communication disorders.
“If you don’t have learning as a variable, you can look at the genetic control of these things,” Portfors said. “You can look at the genetic control of the output of the signal. It’s not messed up by an animal that’s been in a particular learning situation.”
Roots of language in human and bird biology
The genes activated for human speech are similar to the ones used by singing songbirds, new experiments suggest.
These results, which are not yet published, show that gene products produced for speech in the cortical and basal ganglia regions of the human brain correspond to similar molecules in the vocal communication areas of the brains of zebra finches and budgerigars. But these molecules aren’t found in the brains of doves and quails — vocal birds that do not learn their sounds.
"The results suggest that similar behavior and neural connectivity for a convergent complex trait like speech and song are associated with many similar genetic changes," said Duke neurobiologist Erich Jarvis, a Howard Hughes Medical Institute investigator.
Jarvis studies the molecular pathways that songbirds use while learning to sing. In past experiments, he and his collaborators found that songbirds have a connection between the front part of the brain and brainstem neurons that control the muscles birds use to produce song. They’ve seen this circuit in a more primitive form, related to ultrasonic mating calls, in mice. Humans also have this motor learning pathway for speech.
From this and other work, Jarvis developed the motor theory for the origin of vocal learning, which describes how ancient brain systems used to control movement and motor learning evolved into brain systems for learning and producing song and spoken language.
Gustavo Arriaga, Eric P. Zhou, Erich D. Jarvis. Of Mice, Birds, and Men: The Mouse Ultrasonic Song System Has Some Features Similar to Humans and Song-Learning Birds. PLoS ONE
Gustavo Arriaga, Erich D. Jarvis. Mouse vocal communication system: Are ultrasounds learned or innate? Brain and Language
Doing the math for how songbirds learn to sing
Scientists studying how songbirds stay on key have developed a statistical explanation for why some things are harder for the brain to learn than others.
“We’ve built the first mathematical model that uses a bird’s previous sensorimotor experience to predict its ability to learn,” says Emory biologist Samuel Sober. “We hope it will help us understand the math of learning in other species, including humans.”
Sober conducted the research with physiologist Michael Brainard of the University of California, San Francisco.
Their results, showing that adult birds correct small errors in their songs more rapidly and robustly than large errors, were published in the Proceedings of the National Academy of Sciences (PNAS).
Sober’s lab uses Bengalese finches as a model for researching the mechanisms of how the brain learns to correct vocal mistakes.
The researchers wanted to quantify the relationship between the size of a vocal error and the probability of the brain making a sensorimotor correction. The experiments were conducted on adult Bengalese finches outfitted with lightweight, miniature headphones.
As a bird sang into a microphone, the researchers used sound-processing equipment to trick it into thinking it was making vocal mistakes, shifting the pitch of its song in real time so that it heard an altered version of itself.
“When we made small pitch shifts, the birds learned really well and corrected their errors rapidly,” Sober says. “As we made the pitch shifts bigger, the birds learned less well, until at a certain pitch, they stopped learning.”
The researchers used the data to develop a statistical model for the size of a vocal error and whether a bird learns, including the cut-off point for learning from sensorimotor mistakes. They are now developing additional experiments to test and refine the model.
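The published model draws on the statistics of a bird’s own past pitch variability; as a rough sketch only (the functional form, sigma, and lapse parameter here are invented for illustration, not taken from the paper), one could model the fraction of an imposed error that gets corrected as the probability that an error of that size is self-generated:

```python
import math

def correction_fraction(shift, sigma, lapse=0.02):
    """Hypothetical model: the fraction of a pitch shift the bird corrects
    is the probability that an error of that size arose from its own motor
    variability (Gaussian of width sigma) rather than from some other
    source (a small constant 'lapse'). Implausibly large errors are
    discounted, so learning tapers off and effectively stops."""
    likelihood_self = math.exp(-shift ** 2 / (2 * sigma ** 2))
    return likelihood_self / (likelihood_self + lapse)

# Fraction corrected shrinks as the imposed pitch shift grows (sigma = 1):
for semitones in (0.5, 1.0, 2.0, 3.0, 4.0):
    print(semitones, round(correction_fraction(semitones, sigma=1.0), 2))
```

Under these assumed parameters, small shifts are corrected almost fully while large shifts are mostly ignored, reproducing the qualitative cut-off the experiments found.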
“We hope that our mathematical framework for how songbirds learn to sing could help in the development of human behavioral therapies for vocal rehabilitation, as well as increase our general understanding of how the brain learns,” Sober says.

An elephant that speaks Korean
An Asian elephant named Koshik can imitate human speech, speaking words in Korean that can be readily understood by those who know the language. The elephant accomplishes this in a most unusual way: he vocalizes with his trunk in his mouth.
The elephant’s vocabulary consists of exactly five words, researchers report on November 1 in Current Biology, a Cell Press publication: “annyong” (“hello”), “anja” (“sit down”), “aniya” (“no”), “nuo” (“lie down”), and “choah” (“good”). Ultimately, Koshik’s language skills may provide important insights into the biology and evolution of complex vocal learning, an ability that is critical for human speech and music, the researchers say.
"Human speech basically has two important aspects, pitch and timbre," says Angela Stoeger of the University of Vienna. "Intriguingly, the elephant Koshik is capable of matching both pitch and timbre patterns: he accurately imitates human formants as well as the voice pitch of his trainers. This is remarkable considering the huge size, the long vocal tract, and other anatomical differences between an elephant and a human."
Singing Mice Show Signs of Learning
Guys who imitate Luciano Pavarotti or Justin Bieber to get the girls aren’t alone. Male mice may do a similar trick, matching the pitch of other males’ ultrasonic serenades. The mice also have certain brain features, somewhat similar to humans and song-learning birds, which they may use to change their sounds, according to a new study.
"We are claiming that mice have limited versions of the brain and behavior traits for vocal learning that are found in humans for learning speech and in birds for learning song," said Duke neurobiologist Erich Jarvis, who oversaw the study. The results appear Oct. 10 in PLOS ONE and are further described in a review article in Brain and Language.
The discovery contradicts scientists’ 60-year-old assumption that mice do not have vocal learning traits at all. “If we’re not wrong, these findings will be a big boost to scientists studying diseases like autism and anxiety disorders,” said Jarvis, who is a Howard Hughes Medical Institute investigator. “The researchers who use mouse models of the vocal communication effects of these diseases will finally know the brain system that controls the mice’s vocalizations.”