Neuroscience

Articles and news from the latest research reports.

Posts tagged vocalizations

Infant Cooing, Babbling Linked to Hearing Ability

Infants’ vocalizations throughout the first year follow a set of predictable steps from crying and cooing to forming syllables and first words. However, previous research had not addressed how the amount of vocalizations may differ between hearing and deaf infants. Now, University of Missouri research shows that infant vocalizations are primarily motivated by infants’ ability to hear their own babbling. Additionally, infants with profound hearing loss who received cochlear implants to help correct their hearing soon reached the vocalization levels of their hearing peers, putting them on track for language development.

“Hearing is a critical aspect of infants’ motivation to make early sounds,” said Mary Fagan, an assistant professor in the Department of Communication Science and Disorders in the MU School of Health Professions. “This study shows babies are interested in speech-like sounds and that they increase their babbling when they can hear.”

Fagan studied the vocalizations of 27 hearing infants and 16 infants with profound hearing loss who were candidates for cochlear implants, which are small electronic devices embedded into the bone behind the ear that replace some functions of the damaged inner ear. She found that infants with profound hearing loss vocalized significantly less than hearing infants. However, when the infants with profound hearing loss received cochlear implants, the infants’ vocalizations increased to the same levels as their hearing peers within four months of receiving the implants.
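
The group comparison behind this finding can be sketched with made-up numbers. Everything below — the counts, the session length, and the `rate` helper — is hypothetical illustration, not data or code from Fagan's study:

```python
# Illustrative sketch (hypothetical data): the kind of before/after group
# comparison described above, using invented counts of speech-like
# vocalizations per 30-minute session.
from statistics import mean

hearing           = [142, 155, 138, 160, 149]
deaf_pre_implant  = [61, 58, 70, 55, 64]
deaf_post_implant = [135, 150, 141, 158, 146]  # ~4 months after implantation

def rate(counts):
    """Mean vocalizations per session for one group."""
    return mean(counts)

print(f"hearing:      {rate(hearing):.1f}")
print(f"pre-implant:  {rate(deaf_pre_implant):.1f}")
print(f"post-implant: {rate(deaf_post_implant):.1f}")
```

With numbers like these, the pre-implant group sits well below the hearing group, while the post-implant group is statistically indistinguishable from it — the pattern the study reports.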

“After the infants received their cochlear implants, the significant difference in overall vocalization quantity was no longer evident,” Fagan said. “These findings support the importance of early hearing screenings and early cochlear implantation.”

Fagan found that non-speech-like sounds, such as crying, laughing and raspberry sounds, were not affected by infants’ hearing ability. She says this finding highlights that babies are more interested in speech-like sounds, since they increase their production of those sounds, such as babbling, when they can hear.

“Babies learn so much through sound in the first year of their lives,” Fagan said. “We know learning from others is important to infants’ development, but hearing allows infants to explore their own vocalizations and learn through their own capacity to produce sounds.”

In future research, Fagan hopes to study whether infants explore the sounds of objects such as musical toys to the same degree they explore vocalization.

Fagan’s research, “Frequency of vocalization before and after cochlear implantation: Dynamic effect of auditory feedback on infant behavior,” was published in the Journal of Experimental Child Psychology.

Filed under hearing cochlear implant vocalizations language development psychology neuroscience science

Shout now! ‒ How Nerve Cells Initiate Voluntary Calls

University of Tübingen neuroscientists show that monkeys can decide to call out or keep silent

“Should I say something or not?” Human beings are not alone in pondering this dilemma – animals also face decisions when they communicate by voice. University of Tübingen neurobiologists Dr. Steffen Hage and Professor Andreas Nieder have now demonstrated that nerve cells in the brain signal the targeted initiation of calls – forming the basis of voluntary vocal expression. Their results are published in “Nature Communications.”

When we speak, we use the sounds we make for a specific purpose – we intentionally say what we think, or consciously withhold information. Animals, however, usually make sounds according to what they feel at that moment. Even our closest relations among the primates make sounds as a reflex based on their mood. Now, Tübingen neuroscientists have shown that rhesus monkeys are able to call (or be silent) on command. They can instrumentalize the sounds they make in a targeted way, an important behavioral ability which we also use to put language to a purpose.

To find out how nerve cells in the brain drive the production of controlled vocal sounds, the researchers taught rhesus monkeys to call out quickly when a spot appeared on a computer screen. While the monkeys performed this task, measurements taken in their prefrontal cortex revealed astonishing reactions in the cells there. The nerve cells became active whenever the monkey saw the spot of light that was the instruction to call out. But if the monkey simply called out spontaneously, these nerve cells were not activated. The cells therefore did not signal just any vocalization – only calls that the monkey actively decided to make.
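
The core contrast in such an experiment can be illustrated with a toy calculation: a cell's firing rate in a window around cue-triggered versus spontaneous call onsets. The spike times, window size, and `rate_in_window` helper below are all invented for illustration and are not the authors' analysis:

```python
# Illustrative sketch (hypothetical data): firing rate of one prefrontal
# neuron around two kinds of call onset - one instructed by the visual
# cue, one produced spontaneously.
def rate_in_window(spike_times, onset, pre=0.5, post=0.5):
    """Spikes per second in a window around a call onset (times in seconds)."""
    n = sum(1 for t in spike_times if onset - pre <= t <= onset + post)
    return n / (pre + post)

spikes = [1.1, 1.2, 1.25, 1.3, 1.4, 5.05, 9.0]  # made-up spike train

cued_call_onset = 1.3         # call instructed by the light spot
spontaneous_call_onset = 9.1  # call made without any cue

print(rate_in_window(spikes, cued_call_onset))         # elevated rate
print(rate_in_window(spikes, spontaneous_call_onset))  # near baseline
```

A real analysis would average such rates over many trials and test the cue-triggered versus spontaneous difference statistically; the point here is only the shape of the comparison.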

The results published in “Nature Communications” provide valuable insights into the neurobiological foundations of vocalization. “We want to understand the physiological mechanisms in the brain which lead to the voluntary production of calls,” says Dr. Steffen Hage of the Institute for Neurobiology, “because it played a key role in the evolution of the human ability to use speech.” The study offers important indicators of the function of a part of the brain which in humans has developed into one of the central locations for controlling speech. “Disorders in this part of the human brain lead to severe speech disorders or even complete loss of speech in the patient,” Professor Andreas Nieder explains. The results – giving insights into how the production of sound is initiated – may help us better understand speech disorders.

(Source: uni-tuebingen.de)

Filed under speech production vocalizations primates nerve cells Broca's area neuroscience science

Primate calls, like human speech, can help infants form categories

Human infants’ responses to the vocalizations of non-human primates shed light on the developmental origin of a crucial link between human language and core cognitive capacities, a new study reports.

Previous studies have shown that even in infants too young to speak, listening to human speech supports core cognitive processes, including the formation of object categories.

Alissa Ferry, lead author and currently a postdoctoral fellow in the Language, Cognition and Development Lab at the Scuola Internazionale Superiore di Studi Avanzati in Trieste, Italy, together with Northwestern University colleagues, documented that this link is initially broad enough to include the vocalizations of non-human primates.

"We found that for 3- and 4-month-old infants, non-human primate vocalizations promoted object categorization, mirroring exactly the effects of human speech, but that by six months, non-human primate vocalizations no longer had this effect — the link to cognition had been tuned specifically to human language," Ferry said.

In humans, language is the primary conduit for conveying our thoughts. The new findings document that for young infants, listening to the vocalizations of humans and non-human primates supports the fundamental cognitive process of categorization. From this broad beginning, the infant mind identifies which signals are part of their language and begins to systematically link these signals to meaning.

Furthermore, the researchers found that infants’ response to non-human primate vocalizations at three and four months was not just due to the sounds’ acoustic complexity, as infants who heard backward human speech segments failed to form object categories at any age.

Susan Hespos, co-author and associate professor of psychology at Northwestern said, “For me, the most stunning aspect of these findings is that an unfamiliar sound like a lemur call confers precisely the same effect as human language for 3- and 4-month-old infants. More broadly, this finding implies that the origins of the link between language and categorization cannot be derived from learning alone.”

"These results reveal that the link between language and object categories, evident as early as three months, derives from a broader template that initially encompasses vocalizations of human and non-human primates and is rapidly tuned specifically to human vocalizations," said Sandra Waxman, co-author and Louis W. Menk Professor of Psychology at Northwestern.

Waxman said these new results open the door to new research questions.

"Is this link sufficiently broad to include vocalizations beyond those of our closest genealogical cousins," asks Waxman, "or is it restricted to primates, whose vocalizations may be perceptually just close enough to our own to serve as early candidates for the platform on which human language is launched?"

(Image: Corbis)

Filed under primates vocalizations language categorization psychology neuroscience science

Bats Can Recognize Each Other’s Voices

If bats ever used a cell phone, they could forgo the version with caller ID: The mammals can identify each other by their voices, a new study says.

Bats aren’t the only mammals to use voice recognition—people do it, too. Even in the days before caller ID, a simple “Hi, it’s me,” from a close friend or loved one was usually enough to figure out who’s on the other end. Recognizing a person by voice, however, requires previous knowledge: We can’t identify a stranger on the phone by voice alone because we have never met them before.

People can, however, discriminate between a familiar voice and an unfamiliar one, and can even distinguish between two speakers they have never met by voice alone.

Hanna Kastein and colleagues at the University of Veterinary Medicine in Hannover, Germany, wanted to know whether bats could perform these same tasks.

“Bats are totally interesting mammals to study voice perception since they are dependent on their vocalizations for orientation and communication due to their nocturnal lifestyle. In addition, they are socially living animals that frequently communicate acoustically with other members of their species,” Kastein said.

Besides their social lifestyles, bats and people share a number of physical characteristics. Both produce sounds using a combination of the larynx, vocal cords, and nasal cavities. These structures work together with an animal’s physical makeup to produce an individual’s unique voice.

“In stressful situations, voices become higher pitched, or ‘squeaky,’ in bats as in humans. Also, each individual bat has a slightly different morphology, and thus its voice sounds different from any other individual, just as voices in humans differ individually,” Kastein said.

You Had Me at Hello

Kastein and colleagues wanted to know whether bats could use vocal calls to identify individuals with which they shared a roost, and whether they could use these same calls to distinguish between two different individuals.

The researchers worked with the greater false vampire bat (Megaderma lyra) because the species has a rich array of calls that it uses in several contexts.

The team observed two groups of bats kept in separate artificial roosts for two months. They hypothesized that bats that had the most body contact while roosting would form the closest relationships. Kastein and colleagues then recorded various vocal calls from both groups of bats.

When Kastein played the recording of a vocal call over a loudspeaker, bats in both roosts universally turned their heads toward the speaker regardless of whether the call was from a bat with which they had close body contact, a bat from the same roost, or a bat from the other roost.

Given that the artificial roosts had much lower rates of vocal calls, due to the lack of stimuli, the researchers thought that this response could be due to the novelty of hearing any type of vocalization.

Discriminating Bat

So the team ran a second set of experiments in which a bat listened to the call of its “friend” until the call no longer elicited any behavioral response, such as a head turn. This meant the listening bat had become habituated to the call, according to the study, published recently in the journal Animal Cognition.

Then, the scientists alternated playing a vocalization of the bat friend with that of an unfamiliar bat. The listening bats were significantly more likely to turn their heads toward the call of their friend—indicating both that they recognized their friend and that they could distinguish between individual vocalizations.
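
The logic of this habituation-dishabituation test can be sketched in a few lines. The habituation criterion, the trial data, and the `habituated` helper below are all hypothetical, invented to illustrate the procedure rather than reproduce the study's methods:

```python
# Illustrative sketch (hypothetical data): habituation-dishabituation
# playback logic. 1 = head turn toward the speaker, 0 = no response.
def habituated(responses, criterion=0):
    """Habituation reached once the last three trials show no response."""
    return len(responses) >= 3 and sum(responses[-3:]) <= criterion

habituation_phase = [1, 1, 1, 0, 1, 0, 0, 0]  # repeated playback of one caller
assert habituated(habituation_phase)

# Test phase: alternate the habituated ("friend") call with a novel caller.
test_familiar   = [1, 1, 1, 0]  # responses to the friend's call
test_unfamiliar = [0, 1, 0, 0]  # responses to the unfamiliar caller

print("familiar:  ", sum(test_familiar) / len(test_familiar))
print("unfamiliar:", sum(test_unfamiliar) / len(test_unfamiliar))
```

In the study's result, response rates in the test phase were higher for the familiar caller, the pattern mimicked by these made-up numbers.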

“In our study, we found that the … false vampire bat is able to discriminate between different voices, including both known and unknown individuals,” Kastein noted.

“However, to what extent bats are able to label an unknown bat as unknown, we cannot say.” She suspects that in real life, recognizing other bats by their voices is aided by smell and, to a lesser extent, vision.

Filed under bats voice recognition voice perception vocalizations cognition psychology neuroscience science

Hearing What’s Important: Penn Researchers Pinpoint Brain Mechanisms That Make the Auditory System Sensitive to Behaviorally Relevant Sounds

How do we hear?  More specifically, how does the auditory center of the brain discern important sounds – such as communication from members of the same species – from relatively irrelevant background noise? The answer depends on the regulation of sound by specific neurons in the auditory cortex of the brain, but the precise mechanisms of those neurons have remained unclear. Now, a new study from the Perelman School of Medicine at the University of Pennsylvania has isolated how neurons in the rat’s primary auditory cortex (A1) preferentially respond to natural vocalizations from other rats over intentionally modified vocalizations (background sounds). A computational model developed by the study authors, which successfully predicted neuronal responses to other new sounds, explained the basis for this preference. The research is published in the Journal of Neurophysiology.

Rats communicate with each other mostly through ultrasonic vocalizations (USVs) beyond the range of human hearing. Although the existence of these USV conversations has been known for decades, “the acoustic richness of them has only been discovered in the last few years,” said senior study author Maria N. Geffen, PhD, assistant professor of Otorhinolaryngology: Head and Neck Surgery at Penn. That acoustical complexity raises questions as to how the animal brain recognizes and responds to the USVs. “We set out to characterize the responses of neurons to USVs and to come up with a model that would explain the mechanism that makes these neurons preferentially responsive to these relevant sounds.”

Geffen and her colleagues obtained recordings of USVs from two rats kept together in a cage, then played the recordings to a separate group of male rats, while their neuronal responses were acquired and recorded. The researchers also used USV recordings that were modified in several ways, such as having background sounds filtered out and being played backwards and at different speeds to mimic unimportant background noise. “We found that neurons in the auditory cortex respond strongly and selectively to the original ultrasonic vocalizations and not the transformed versions we created,” says Geffen.

Using the data collected on the responses of A1 neurons to various USVs, the researchers developed a computational model that could predict the activity of an individual neuron based on the pitch and duration of the USV. Geffen observes that “the details of their responses could be predicted with high accuracy.” It was possible to determine which aspects of the acoustic input best drove individual neurons. Remarkably, it turned out that the acoustic parameters that worked best in driving the neuronal responses corresponded to the statistics of the natural vocalizations rats produce.
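
A minimal version of such an encoding model can be sketched as a least-squares fit from call features to firing rate. This is an assumed toy stand-in, not the paper's model (encoding models in this literature, such as spectrotemporal receptive fields, are considerably richer), and all the data below are simulated:

```python
# Illustrative sketch (simulated data): fit a linear model predicting a
# neuron's firing rate from a call's pitch (kHz) and duration (s), then
# predict the response to a new call.
import numpy as np

rng = np.random.default_rng(0)

# One row per USV: [pitch, duration], drawn from plausible-looking ranges
features = rng.uniform([30.0, 0.02], [70.0, 0.30], size=(100, 2))
true_w = np.array([0.4, 20.0])  # made-up tuning of this hypothetical neuron
rates = features @ true_w + rng.normal(0.0, 1.0, 100)  # spikes/s + noise

# Least-squares fit with an intercept column
X = np.column_stack([features, np.ones(len(features))])
w, *_ = np.linalg.lstsq(X, rates, rcond=None)

new_call = np.array([55.0, 0.12, 1.0])  # 55 kHz, 120 ms
print(f"predicted rate: {new_call @ w:.1f} spikes/s")
```

The study's point maps onto the fitted weights: the feature combinations that drive the model's neuron hardest correspond to the statistics of the natural calls it was fit on.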

The work makes clear for the first time, says Geffen, “the mechanisms of how the auditory system picks out behaviorally relevant sounds, such as same species communication signals, and processes them more effectively than less relevant sounds. This information is fundamental in understanding how sound perception helps animals survive. We conclude that neurons in the auditory cortex are specialized for processing and efficiently responding to natural and behaviorally relevant sounds.”

(Image: National Institute on Deafness and Other Communication)

Filed under auditory cortex auditory system neurons vocalizations ultrasonic vocalizations neuroscience science

Songbirds’ brains coordinate singing with intricate timing

As a bird sings, some neurons in its brain prepare to make the next sounds while others are synchronized with the current notes—a coordination of physical actions and brain activity that is needed to produce complex movements, new research at the University of Chicago shows.

In an article in the current issue of Nature, neuroscientist Daniel Margoliash and colleagues show, for the first time, how the brain is organized to govern skilled performance—a finding that may lead to new ways of understanding human speech production.

The new study shows that birds’ physical movements actually are made up of a multitude of smaller actions. “It is amazing that such small units of movements are encoded, and so precisely, at the level of the forebrain,” said Margoliash, a professor of organismal biology and anatomy and psychology at UChicago.

“This work provides new insight into how the physics of controlling vocal signals are represented in the brain to control vocalizations,” said Howard Nusbaum, a professor of psychology at UChicago and an expert on speech.

By decoding the neural representation of communication, Nusbaum explained, the research may shed light on speech problems such as stuttering or aphasia (a disorder following a stroke). And it offers an unusual window into how the brain and body carry out other kinds of complex movement, from throwing a ball to doing a backflip.

“A big question in muscle control is how the motor system organizes the dynamics of movement,” said Margoliash. “Movements like reaching or grasping are difficult to study because they entail many variables, such as the angles of the shoulder, elbow, wrist and fingers; the forces of many muscles; and how these change over time,” he said.

“With all this complexity, it has been difficult to determine which of the many variables that describe movements are represented in the brain, and which of those are used to control movements,” he said.

“It’s difficult to find a natural framework with which to analyze the activity of single neurons. The bird study provided us a perfect opportunity,” Margoliash said. Margoliash is a pioneer in the study of brain function in birds, with studies that include how learning occurs when a bird sleeps and recalls singing a song.
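
One simple way to ask whether a neuron "prepares the next sound" is to find the time shift at which its spikes best line up with syllable onsets. The spike times, onsets, tolerance, and `alignment` helper below are all hypothetical, invented only to illustrate the idea:

```python
# Illustrative sketch (hypothetical data): find the lag at which a neuron's
# spikes best align with syllable onsets. A positive best lag means the
# spikes must be shifted forward in time to meet the syllables - i.e. the
# neuron fires before the sound it is associated with.
def alignment(spikes, onsets, lag, tol=0.002):
    """Count spikes that land within tol seconds of an onset when shifted by lag."""
    return sum(1 for s in spikes for o in onsets if abs((s + lag) - o) <= tol)

syllable_onsets = [0.10, 0.35, 0.60, 0.85]  # seconds into the song
spikes = [0.06, 0.31, 0.56, 0.81]           # each ~40 ms before a syllable

lags = [l / 1000 for l in range(-100, 101, 5)]  # -100 ms .. +100 ms in 5 ms steps
best = max(lags, key=lambda l: alignment(spikes, syllable_onsets, l))
print(f"premotor lead: {best * 1000:.0f} ms")
```

Real analyses average such alignment over many song renditions and many cells; here the spikes were constructed to lead each syllable by 40 ms, so the search recovers that lead.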

Songbirds’ brains coordinate singing with intricate timing

As a bird sings, some neurons in its brain prepare to make the next sounds while others are synchronized with the current notes—a coordination of physical actions and brain activity that is needed to produce complex movements, new research at the University of Chicago shows.

In an article in the current issue of Nature, neuroscientist Daniel Margoliash and colleagues show, for the first time, how the brain is organized to govern skilled performance—a finding that may lead to new ways of understanding human speech production.

The new study shows that birds’ physical movements actually are made up of a multitude of smaller actions. “It is amazing that such small units of movements are encoded, and so precisely, at the level of the forebrain,” said Margoliash, a professor of organismal biology and anatomy and psychology at UChicago.

“This work provides new insight into how the physics of controlling vocal signals are represented in the brain to control vocalizations,” said Howard Nusbaum, a professor of psychology at UChicago and an expert on speech.

By decoding the neural representation of communication, Nusbaum explained, the research may shed light on speech problems such as stuttering or aphasia (a disorder following a stroke). And it offers an unusual window into how the brain and body carry out other kinds of complex movement, from throwing a ball to doing a backflip.

“A big question in muscle control is how the motor system organizes the dynamics of movement,” said Margoliash. “Movements like reaching or grasping are difficult to study because they entail many variables, such as the angles of the shoulder, elbow, wrist and fingers; the forces of many muscles; and how these change over time,” he said.

"With all this complexity, it has been difficult to determine which of the many variables that describe movements are represented in the brain, and which of those are used to control movements," he said.

"It’s difficult to find a natural framework with which to analyze the activity of single neurons. The bird study provided us a perfect opportunity,” Margoliash said. Margoliash is a pioneer in the study of brain function in birds, with studies that include how learning occurs when a bird sleeps and recalls singing a song.

Filed under songbirds brain activity vocalizations communication motor system speech production neuroscience science


Roots of language in human and bird biology
The genes activated for human speech are similar to the ones used by singing songbirds, new experiments suggest.
These results, not yet published, show that the gene products expressed for speech in the cortical and basal ganglia regions of the human brain correspond to similar molecules in the vocal communication areas of zebra finch and budgerigar brains. These molecules are not found, however, in the brains of doves and quails, vocal birds that do not learn their sounds.
"The results suggest that similar behavior and neural connectivity for a convergent complex trait like speech and song are associated with many similar genetic changes," said Duke neurobiologist Erich Jarvis, a Howard Hughes Medical Institute investigator.
Jarvis studies the molecular pathways that songbirds use while learning to sing. In past experiments, he and his collaborators found that songbirds have a connection between the front part of the brain and neurons in the brainstem that control the muscles birds use to produce song. They have seen this circuit in a more primitive form, related to ultrasonic mating calls, in mice. Humans also have this motor learning pathway for speech.
From this and other work, Jarvis developed the motor theory for the origin of vocal learning, which describes how ancient brain systems used to control movement and motor learning evolved into brain systems for learning and producing song and spoken language.
Gustavo Arriaga, Eric P. Zhou, Erich D. Jarvis. Of Mice, Birds, and Men: The Mouse Ultrasonic Song System Has Some Features Similar to Humans and Song-Learning Birds. PLoS ONE
Gustavo Arriaga, Erich D. Jarvis. Mouse vocal communication system: Are ultrasounds learned or innate? Brain and Language
(Image: iStock)


Filed under language language production speech vocalizations songbirds vocal learning neuroscience science



How Low Can You Go? Physical Production Mechanism of Elephant Infrasonic Vocalizations
Elephants can communicate using sounds below the range of human hearing (“infrasounds” below 20 hertz). It is commonly speculated that these vocalizations are produced in the larynx, either by neurally controlled muscle twitching (as in cat purring) or by flow-induced self-sustained vibrations of the vocal folds (as in human speech and song). We used direct high-speed video observations of an excised elephant larynx to demonstrate flow-induced self-sustained vocal fold vibration in the absence of any neural signals, thus excluding the need for any “purring” mechanism. The observed physical principles of voice production apply to a wide variety of mammals, extending across a remarkably large range of fundamental frequencies and body sizes, spanning more than five orders of magnitude.

Read more here
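As a rough numerical illustration of what "infrasonic" means here, the sketch below synthesizes a rumble with an assumed 15 Hz fundamental (an invented example value, not a measurement from the paper) plus two weaker harmonics, and confirms via FFT that the dominant frequency sits below the roughly 20 Hz floor of human hearing.

```python
import numpy as np

# Illustrative sketch (not from the paper): a crude stand-in for an elephant
# rumble, built from a 15 Hz fundamental and two harmonics, as a periodic
# vocal fold vibration would produce.

fs = 1000                           # sample rate in Hz
t = np.arange(0, 4.0, 1.0 / fs)     # 4 seconds of signal
f0 = 15.0                           # assumed infrasonic fundamental

rumble = (np.sin(2 * np.pi * f0 * t)
          + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
          + 0.25 * np.sin(2 * np.pi * 3 * f0 * t))

# Locate the strongest spectral peak and compare it to the ~20 Hz
# lower limit of human hearing.
spectrum = np.abs(np.fft.rfft(rumble))
freqs = np.fft.rfftfreq(rumble.size, d=1.0 / fs)
peak_hz = freqs[np.argmax(spectrum)]
print(peak_hz, peak_hz < 20.0)      # 15.0 True: the fundamental is infrasonic
```

The paper's point is about the production mechanism, not the spectrum itself: the same flow-induced self-sustained oscillation that yields a ~100 Hz fundamental in human speech yields one below 20 Hz in the much larger elephant larynx.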


Filed under science neuroscience animals mammals vocalizations larynx infrasounds vocals voice production
