Posts tagged auditory system

The brain’s got rhythm: Extracting temporal patterns from visual input
To understand how the brain recognizes speech, appreciates music and performs other higher-level functions, it is necessary to understand how neural systems process temporal information. Recently, scientists at Beijing Normal University studied a simple but powerful network model by which a neural system can extract long-period (several seconds in duration) external rhythms from visual input. Moreover, the study’s findings suggest that a large neural network with a scale-free topology – that is, a network in which the probability distribution of the number of connections between its nodes follows a power law – is analogous to a repertoire where neural loops and chains form the mechanism by which exogenous rhythms are learned. Importantly, their model suggests that the brain does not necessarily require an internal clock to acquire and memorize these rhythms.
Prof. Si Wu and Prof. Gang Hu discussed the paper that they and their co-authors recently published in Proceedings of the National Academy of Sciences. “The challenge for generating slow oscillation – that is, on the order of seconds – in a neural system is that the dynamics of single neurons and neuronal synapses are too short,” Wu tells Medical Xpress. “In other words, for an unstructured network, a strong input will typically generate a strong transient response, and hence the system is unable to retain slow oscillation.” To solve this problem, the scientists came up with the idea of using the propagation of activity along a long loop of neurons to hold the rhythm information. “Neurons in the loop need to have low-connectivity degrees to avoid inducing synchronous firing of the network,” Hu adds.
Hu also comments on constructing a network model with scale-free structure. “We knew that a scale-free network had the structure we wanted – namely, it consists of a large number of low-degree neurons which can form different sizes of loops and chains, as well as a few hub neurons which can trigger synchronous firing of the network. Furthermore,” he continues, “we didn’t want hub neurons to be easily elicited; otherwise, the network will always get into epileptic firings.” To solve this problem, the researchers required that the neuronal interactions have the proper form to easily activate low-degree neurons while also making it hard to activate hub neurons. Wu points out that biologically plausible electrical synapses and scaled chemical synapses naturally hold this property.
Wu says that the researchers did not develop innovative techniques in this study. “Our main contribution was to propose a simple and yet effective mechanism for a neural system encoding temporal information,” he explains, noting that this mechanism consists of five key points:
1. Hub neurons, through their massive connections to others, induce synchronous firing of the network
2. Loops of low-degree neurons hold rhythm information, with the loop size deciding the rhythm
3. Proper electrical or scaled chemical neuronal synapses ensure that activating a hub neuron is difficult in comparison with a low-degree neuron – and also avoid epileptic network firing, in which periods of rapid spiking are followed by quiescent, silent periods
4. A large-size scale-free network is like a reservoir, which contains a large number and various sizes of loops and chains formed by low-degree neurons, and hence can encode a broad range of rhythmic information
5. When an external rhythmic input is presented, the network selects a loop from its reservoir, with the loop size matching the input rhythm – and this matching operation can be achieved by a synaptic plasticity rule
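The central idea in points 1–5 – that a closed loop of neurons can hold a slow rhythm without any internal clock – can be illustrated with a minimal sketch. This is not the authors' model; it simply assumes discrete time steps, one relay per step, and a fixed per-synapse delay, so the oscillation period equals the loop length times that delay.

```python
# Toy illustration (not the published model): activity circulating around a
# closed loop of neurons yields an oscillation whose period is the loop
# length times the per-synapse propagation delay. Loops of thousands of
# neurons with millisecond delays thus hold rhythms of several seconds.

def loop_oscillation_period(loop_size, synapse_delay_s):
    """Period of the rhythm held by a loop of `loop_size` neurons,
    each relaying activity to the next after `synapse_delay_s` seconds."""
    return loop_size * synapse_delay_s

def simulate_loop(loop_size, n_steps):
    """Propagate a single packet of activity around the loop; return the
    time steps at which neuron 0 fires (one firing per full lap)."""
    active = 0                        # index of the currently firing neuron
    firings_of_neuron0 = [0]
    for t in range(1, n_steps):
        active = (active + 1) % loop_size
        if active == 0:
            firings_of_neuron0.append(t)
    return firings_of_neuron0

# A loop of 2000 neurons with 1 ms synaptic delay holds a 2-second rhythm.
period = loop_oscillation_period(2000, 0.001)   # -> 2.0 seconds
laps = simulate_loop(5, 16)                     # neuron 0 fires every 5 steps
```

Because the period is set purely by loop size, a reservoir containing loops of many different lengths can encode a broad range of rhythms with the same simple mechanism.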
The team’s findings imply that in terms of neural information processing, a neural system can use loops and chains of connected neurons to hold the memory trace of input information and that the latter might serve as the substrate to process temporal events. “These implications for temporal information processing in neural systems have two aspects,” Wu points out. “Firstly, there’s been a long-standing debate on whether the brain has a global clock that counts time and coordinates temporal events. Our study suggests that this is not necessary: By using intrinsic network dynamics, the neural system can process temporal information in a distributed manner.”
Secondly, Wu continues, the brain may not use very complicated strategies to process temporal information; rather, by fully utilizing its enormous number of neurons, it can rely on quite simple ones. “Our study suggests that a large size scale-free network has various lengths of loops and chains to hold different rhythms of inputs, making information encoding very simple. This is not economically efficient, but it simplifies computation, which could be crucial for animals responding quickly in a naturally competitive environment.”
In the presence of an external rhythmic input, Wu says that the neural system responds and holds the residual activity as the memory trace of the input for a sufficiently long time. If this input is repetitively presented, neuron pairs which fire together become connected through the biological synaptic plasticity rule, and thereby a loop matching the input rhythm is established.
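The selection step described above – neuron pairs that fire together becoming connected – can be sketched with a generic Hebbian rule. This is an illustration of the “fire together, wire together” principle, not the authors' exact plasticity model; the window and learning-rate values are arbitrary.

```python
# Hedged illustration of the loop-selection step: synapses between neuron
# pairs that fire within a short window of each other are strengthened on
# every repetition of the rhythmic input (a generic Hebbian rule, not the
# paper's exact plasticity model).

def hebbian_update(weights, spike_times, window=0.01, lr=0.1):
    """Strengthen weights[(i, j)] whenever neurons i and j fire within
    `window` seconds of each other. `spike_times[i]` is neuron i's last
    spike time; `weights` is a dict keyed by (i, j) pairs."""
    neurons = list(spike_times)
    for i in neurons:
        for j in neurons:
            if i != j and abs(spike_times[i] - spike_times[j]) <= window:
                weights[(i, j)] = weights.get((i, j), 0.0) + lr
    return weights

# Two neurons driven together by a repeated rhythmic input end up strongly
# coupled; an uncorrelated third neuron does not join the loop.
w = {}
for repetition in range(5):
    w = hebbian_update(w, {0: 1.000, 1: 1.002, 2: 1.500})
# w[(0, 1)] grows with each repetition; no (0, 2) connection ever forms.
```

Repetition of the input thus carves out a loop whose length matches the input rhythm, which is exactly the matching operation the model relies on.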
Hu tells Medical Xpress that the network topology is not required to be perfectly scale-free, but rather that the network consists of a few neurons having many connections and a large number of neurons with few connections. “For the convenience of analysis, we considered a scale-free network in which the distribution of neuronal connections satisfies a power law. However, in practice, we don’t need such a strong condition. Rather, what we really need is a large number of low-degree neurons forming loops and chains, and a few hub neurons triggering synchronous firing. In other words, scale-free topology is a sufficient, but not a necessary, condition for our model to work.” Although the researchers focused on the visual system and have not applied their model to the auditory system, Hu suspects that it can be applied to the latter, where temporal processing is more critical.
Moving forward, the scientists’ next step is to build large networks having a similar structure but with more realistic neurons and synapses. “Based on this model,” Wu concludes, “we can explore how temporal information encoded in the way proposed in our model is involved in higher brain functions. Moreover, other dynamical systems which generate slow oscillation and need to hold temporal information by network dynamics might benefit from our study.”
How and when the auditory system registers complex auditory-visual synchrony
Imagine the brain’s delight when experiencing the sounds of Beethoven’s “Moonlight Sonata” while simultaneously taking in a light show produced by a visualizer.
A new Northwestern University study did much more than that.
To understand how the brain responds to highly complex auditory-visual stimuli like music and moving images, the study tracked parts of the auditory system involved in the perceptual processing of “Moonlight Sonata” while it was synchronized with the light show made by the iTunes Jelly visualizer.
The study shows how and when the auditory system encodes auditory-visual synchrony between complex and changing sounds and images.
Much of related research looks at how the brain processes simple sounds and images. Locating a woodpecker in a tree, for example, is made easier when your brain combines the auditory (pecking) and visual (movement of the bird) streams and judges that they are synchronous. If they are, the brain decides that the two sensory inputs probably came from a single source.
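The woodpecker example amounts to a simple synchrony judgment: bind two streams when their event onsets line up. The following is a toy heuristic for that judgment (illustrative only, not the study's analysis), with the tolerance value chosen arbitrarily.

```python
# Toy synchrony judgment (illustrative heuristic, not the study's method):
# two event streams are attributed to a single source when each auditory
# onset has a visual onset within a small time tolerance.

def streams_synchronous(audio_onsets, visual_onsets, tolerance_s=0.1):
    """True if every auditory onset pairs with a visual onset within
    `tolerance_s` seconds; onsets are matched in temporal order."""
    if len(audio_onsets) != len(visual_onsets):
        return False
    return all(abs(a - v) <= tolerance_s
               for a, v in zip(sorted(audio_onsets), sorted(visual_onsets)))

# Pecking sounds line up with the bird's movements -> one source.
print(streams_synchronous([0.0, 0.5, 1.0], [0.02, 0.51, 1.03]))  # True
# A desynchronized pair -> separate sources.
print(streams_synchronous([0.0, 0.5, 1.0], [0.3, 0.8, 1.3]))     # False
```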
While that research is important, Julia Mossbridge, lead author of the study and research associate in psychology at Northwestern, said it also is critical to expand investigations to highly complex stimuli like music and movies.
“These kinds of things are closer to what the brain actually has to manage to process in every moment of the day,” she said. “Further, it’s important to determine how and when sensory systems choose to combine stimuli across their boundaries.
“If someone’s brain is mis-wired, sensory information could combine when it’s not appropriate,” she said. “For example, when that person is listening to a teacher talk while looking out a window at kids playing, and the auditory and visual streams are integrated instead of separated, this could result in confusion and misunderstanding about which sensory inputs go with what experience.”
It was already known that the left auditory cortex is specialized to process sounds with precise, complex and rapid timing; this gift for auditory timing may be one reason that in most people, the left auditory cortex is used to process speech, for which timing is critical. The results of this study show that this specialization for timing applies not just to sounds, but to the timing of complex and dynamic sounds and images.
Previous research indicates that there are multi-sensory areas in the brain that link sounds and images when they change in similar ways, but much of this research is focused particularly on speech signals (e.g., lips moving as vowels and consonants are heard). Consequently, it hasn’t been clear what areas of the brain process more general auditory-visual synchrony or how this processing differs when sounds and images should not be combined.
“It appears that the brain is exploiting the left auditory cortex’s gift at processing auditory timing, and is using similar mechanisms to encode auditory-visual synchrony, but only in certain situations; seemingly only when combining the sounds and images is appropriate,” Mossbridge said.
Study suggests musical training could possibly sharpen language processing

People who are better able to move to a beat show more consistent brain responses to speech than those with less rhythm, according to a study published in the September 18 issue of The Journal of Neuroscience. The findings suggest that musical training could possibly sharpen the brain’s response to language.
Scientists have long known that moving to a steady beat requires synchronization between the parts of the brain responsible for hearing and movement. In the current study, Professor Nina Kraus, PhD, and colleagues at Northwestern University examined the relationship between the ability to keep a beat and the brain’s response to sound.
More than 100 teenagers from the Chicago area participated in the Kraus Lab study, where they were instructed to listen and tap their finger along to a metronome. The teens’ tapping accuracy was computed based on how closely their taps aligned in time with the “tic-toc” of the metronome. In a second test, the researchers used a technique called electroencephalography (EEG) to record brainwaves from a major brain hub for sound processing as the teens listened to the synthesized speech sound “da” repeated periodically over a 30-minute period. The researchers then calculated how similarly the nerve cells in this region responded each time the “da” sound was repeated.
“Across this population of adolescents, the more accurate they were at tapping along to the beat, the more consistent their brains’ response to the ‘da’ syllable was,” Kraus said. Because previous studies show a link between reading ability and beat-keeping ability as well as reading ability and the consistency of the brain’s response to sound, Kraus explained that these new findings show that hearing is a common basis for these associations.
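The two measures being related here can be sketched in a few lines: tapping accuracy as the mean absolute tap-to-metronome asynchrony, and response consistency as the correlation between repeated EEG responses to “da.” These are illustrative formulas, not the Kraus Lab's exact analysis pipeline.

```python
# Rough sketch of the study's two measures (illustrative formulas, not the
# lab's exact pipeline): lower asynchrony = more accurate tapping; a
# correlation nearer 1 = a more consistent neural response.

from statistics import mean, stdev

def tapping_accuracy(tap_times, metronome_times):
    """Mean absolute asynchrony (s) between taps and metronome clicks."""
    return mean(abs(t - m) for t, m in zip(tap_times, metronome_times))

def pearson_r(x, y):
    """Sample Pearson correlation between two equal-length sequences."""
    mx, my, sx, sy = mean(x), mean(y), stdev(x), stdev(y)
    n = len(x)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / ((n - 1) * sx * sy)

def response_consistency(trial1, trial2):
    """Correlation between two repetitions of the brain's response."""
    return pearson_r(trial1, trial2)

acc = tapping_accuracy([0.01, 0.52, 0.99], [0.0, 0.5, 1.0])
consistency = response_consistency([0.1, 0.9, 0.2, 0.8],
                                   [0.12, 0.88, 0.21, 0.79])
```

Across subjects, the study's finding corresponds to these two numbers being correlated: smaller asynchronies going with consistency values closer to 1.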
“Rhythm is inherently a part of music and language,” Kraus said. “It may be that musical training, with an emphasis on rhythmic skills, exercises the auditory system, leading to strong sound-to-meaning associations that are so essential in learning to read.”
John Iversen, PhD, who studies how the brain processes music at the University of California, San Diego, and was not involved with this study, noted that the findings raise the possibility that musical training may have important impacts on the brain. “This study adds another piece to the puzzle in the emerging story suggesting that musical rhythmic abilities are correlated with improved performance in non-music areas, particularly language,” he said.
Kraus’ group is now working on a multi-year study to evaluate the effects of musical training on beat synchronization, response consistency, and reading skills in a group of children engaging in musical training.
(Source: alphagalileo.org)

Brain Wiring Quiets the Voice Inside Your Head
Researchers find nerve circuits connecting motion and hearing
During a normal conversation, your brain is constantly adjusting the volume to soften the sound of your own voice and boost the voices of others in the room. This ability to distinguish between the sounds generated from your own movements and those coming from the outside world is important not only for catching up on water cooler gossip, but also for learning how to speak or play a musical instrument.
Now, researchers have developed the first diagram of the brain circuitry that enables this complex interplay between the motor system and the auditory system to occur.
The research, which appears Sept. 4 in The Journal of Neuroscience, could lend insight into schizophrenia and mood disorders that arise when this circuitry goes awry and individuals hear voices other people do not hear.
"Our finding is important because it provides the blueprint for understanding how the brain communicates with itself, and how that communication can break down to cause disease," said Richard Mooney, Ph.D., senior author of the study and professor of neurobiology at Duke University School of Medicine. "Normally, motor regions would warn auditory regions that they are making a command to speak, so be prepared for a sound. But in psychosis, you can no longer distinguish between the activity in your motor system and somebody else’s, and you think the sounds coming from within your own brain are external."
Researchers have long surmised that the neuronal circuitry conveying movement — to voice an opinion or hit a piano key — also feeds into the wiring that senses sound. But the nature of the nerve cells that provided that input, and how they functionally interacted to help the brain anticipate the impending sound, was not known.
In this study, Mooney used a technology created by Fan Wang, Ph.D., associate professor of cell biology at Duke, to trace all of the inputs into the auditory cortex — the sound-interpreting region of the brain. Though the researchers found that a number of different areas of the brain fed into the auditory cortex, they were most interested in one region called the secondary motor cortex, or M2, because it is responsible for sending motor signals directly into the brain stem and the spinal cord.
"That suggests these neurons are providing a copy of the motor command directly to the auditory system," said David M. Schneider, Ph.D., co-lead author of the study and a postdoctoral fellow in Mooney’s lab. "In other words, they send a signal that says ‘move,’ but they also send a signal to the auditory system saying ‘I am going to move.’"
Having discovered this connection, the researchers then explored what type of influence this interaction was having on auditory processing or hearing. They took slices of brain tissue from mice and specifically manipulated the neurons that led from the M2 region to the auditory cortex. The researchers found that stimulating those neurons actually dampened the activity of the auditory cortex.
"It jibed nicely with our expectations," said Anders Nelson, co-lead author of the study and a graduate student in Mooney’s lab. "It is the brain’s way of muting or suppressing the sounds that come from our own actions."
Finally, the researchers tested this circuitry in live animals, artificially turning on the motor neurons in anesthetized mice and then looking to see how the auditory cortex responded. Mice usually sing to each other through a kind of song called ultrasonic vocalizations, which are too high-pitched for a human to hear. The researchers played back these ultrasonic vocalizations to the mice after they had activated the motor cortex and found that the neurons became much less responsive to the sounds.
"It appears that the functional role that these neurons play on hearing is they make sounds we generate seem quieter," said Mooney. "The question we now want to know is if this is the mechanism that is being used when an animal is actually moving. That is the missing link, and the subject of our ongoing experiments."
Once the researchers have pinned down the basics of the circuitry, they could begin to investigate whether altering this circuitry could induce auditory hallucinations or perhaps even take them away in models of schizophrenia.
Single tone alerts brain to complete sound pattern
The processing of sound in the brain is more advanced than previously thought. When we hear a tone, our brain temporarily strengthens that tone but also any tones separated from it by one or more octaves. A research team from Utrecht and Nijmegen published an article on the subject in the journal PNAS on 2 September.
We hear with our brain. The cochlea picks up sound vibrations but the signals produced as a result are processed by the brain, using known patterns. If, for example, you briefly hear a weak tone, your hearing focuses on that tone and suppresses any frequencies around it. This makes it easier to notice any relevant sounds in your surroundings. The present research has shown that this ‘auditory attention filter’ is much more complex than believed until now: frequencies that have an octave relationship with the target tone are also heard better.
John van Opstal, professor of Biophysics at Radboud University: ‘This test proves that the brain prepares for a more extensive pattern of tones, even if the person just hears a single test tone or if he has a tone in mind. These extra tones in the pattern were not sounded during the experiment, but the brain complements the information received from the cochlea. This is scientifically interesting. Audiology, for example, at present places great emphasis on the cochlea.’
Octave relationship
The subjects undergoing the experiment did not have an easy time. For an hour they listened to unstructured noise containing very soft tones that they had to detect. Every few seconds they were presented with a tone of 1000 Hz, the cue. Then during one of two time intervals, a very quiet, short second tone was sounded. The subject had to indicate in which of the two intervals they had heard the second tone. It became apparent that tones having an octave relationship with the cue were all heard better, and those around the cue were heard less well. An octave is a well-known term in music, indicating the distance between two tones, the frequencies of which have a 2-to-1 relationship.
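The detection pattern hinges on the octave relation defined here: two frequencies are octave-related when their ratio is a power of two. A small helper (illustrative only) classifies probe tones against the 1000 Hz cue used in the experiment:

```python
# Illustrative helper: two frequencies are octave-related when their ratio
# is 2**k for some integer k (including k = 0, i.e. the same tone).

import math

def is_octave_related(f1_hz, f2_hz, tolerance=1e-6):
    """True if f1 and f2 differ by a whole number of octaves."""
    octaves = math.log2(f1_hz / f2_hz)
    return abs(octaves - round(octaves)) < tolerance

cue = 1000.0
print(is_octave_related(2000.0, cue))  # True: one octave above the cue
print(is_octave_related(250.0, cue))   # True: two octaves below the cue
print(is_octave_related(1100.0, cue))  # False: near the cue, no octave relation
```

In the experiment's terms, probes for which this returns True were heard better, while non-octave probes near the cue were suppressed by the attention filter.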
Voice
Van Opstal: ‘We wanted to gather data on the auditory attention filter around the target tone. When we made the range larger than other researchers had done previously, more peaks suddenly appeared. This was a complete surprise to us. One possible explanation could be that the hearing system has evolved in order to hear sounds made by members of an animal’s own species (voices in the case of humans) in noisy surroundings. Vocalisations always consist of harmonic complexes of several simultaneous tones having an octave relationship with each other.’
Hearing aid
The researchers, who work at Utrecht University, the UMC Utrecht Brain Center and Radboud University Nijmegen, can easily think up applications for this fundamental research. If, for example, someone no longer hears high tones because of damage to the cochlear hair cells, the hearing aid can be adjusted in such a way that it converts those tones so they sound one or more octaves lower. Since the brain itself ‘fills in’ tones with an octave relationship, that person’s perception should then become more normal. It is also important for commercial sound producers to know how tones are perceived. That is why Philips Research is involved in this research in their department ‘Brain, Body and Behavior’.
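The hearing-aid idea above can be sketched very simply: drop an inaudible high tone by whole octaves until it lands in the listener's audible range, relying on the brain to “fill in” the octave-related percept. This is an illustration of the principle, not a real hearing-aid algorithm.

```python
# Sketch of the octave-transposition idea (principle only, not a real
# hearing-aid algorithm): halve the frequency - i.e. drop by one octave -
# until the tone falls within the listener's audible range.

def transpose_into_range(freq_hz, max_audible_hz):
    """Return (audible frequency, number of octaves shifted down)."""
    octaves_down = 0
    while freq_hz > max_audible_hz:
        freq_hz /= 2.0
        octaves_down += 1
    return freq_hz, octaves_down

# A 6 kHz tone for a listener who hears nothing above 2 kHz is presented
# two octaves lower, at 1.5 kHz.
freq, shift = transpose_into_range(6000.0, 2000.0)  # -> (1500.0, 2)
```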

UI study shows fruit fly is ideal model to study hearing loss in people
If your attendance at too many rock concerts has impaired your hearing, listen up.
University of Iowa researchers say that the common fruit fly, Drosophila melanogaster, is an ideal model to study hearing loss in humans caused by loud noise. The reason: The molecular underpinnings to its hearing are roughly the same as with people.
As a result, scientists may choose to use the fruit fly to quicken the pace of research into the cause of noise-induced hearing loss and potential treatment for the condition, according to a paper published this week in the online Early Edition of the journal Proceedings of the National Academy of Sciences.
“As far as we know, this is the first time anyone has used an insect system as a model for NIHL (noise-induced hearing loss),” says Daniel Eberl, UI biology professor and corresponding author on the study.
Hearing loss caused by loud noise encountered in an occupational or recreational setting is an expensive and growing health problem, as young people use ear buds to listen to loud music and especially as the aging Baby Boomer generation enters retirement. Despite this trend, “the molecular and physiological models involved in the problem or the recovery are not fully understood,” Eberl notes.
Enter the fruit fly as an unlikely proxy for researchers to learn more about how loud noises can damage the human ear. Eberl and Kevin Christie, lead author on the paper and a post-doctoral researcher in biology, say they were motivated by the prospect of finding a model that may hasten the day when medical researchers can fully understand the factors involved in noise-induced hearing loss and how to alleviate the problem. The study arose from a pilot project conducted by UI undergraduate student Wes Smith, in Eberl’s lab.
“The fruit fly model is superior to other models in genetic flexibility, cost, and ease of testing,” Christie says.
The fly uses its antenna as its ear, which resonates in response to courtship songs generated by wing vibration. The researchers exposed a test group of flies to a loud, 120-decibel tone lying in the center of the range of sounds a fruit fly can hear. This over-stimulated their auditory system, similar to exposure at a rock concert or to a jack hammer. Later, the flies’ hearing was tested by playing a series of song pulses at a naturalistic volume, and measuring the physiological response by inserting tiny electrodes into their antennae. The fruit flies receiving the loud tone were found to have their hearing impaired relative to the control group.
When the flies were tested again a week later, those exposed to noise had recovered normal hearing levels. In addition, when the structure of the flies’ ears was examined in detail, the researchers discovered that nerve cells of the noise-rattled flies showed signs that they had been exposed to stress, including altered shapes of the mitochondria, which are responsible for generating most of a cell’s energy supply. Flies with a mutation making them susceptible to stress not only showed more severe reductions in hearing ability and more prominent changes in mitochondria shape, they still had deficits in hearing 7 days later, when normal flies had recovered.
The effect on the molecular underpinnings of the fruit fly’s ear is the same as that experienced by humans, making the tests generally applicable to people, the researchers note.
“We found that fruit flies exhibit acoustic trauma effects resembling those found in vertebrates, including inducing metabolic stress in sensory cells,” Eberl says. “Our report is the first to report noise trauma in Drosophila and is a foundation for studying molecular and genetic conditions resulting from NIHL.”
“We hope eventually to use the system to look at how genetic pathways change in response to NIHL. Also, we would like to learn how the modification of genetic pathways might reduce the effects of noise trauma,” Christie adds.

Brain picks out salient sounds from background noise by tracking frequency and time
New research reveals how our brains are able to pick out important sounds from the noisy world around us. The findings, published online today in the journal ‘eLife’, could lead to new diagnostic tests for hearing disorders.
Our ears can effortlessly pick out the sounds we need to hear from a noisy environment - hearing our mobile phone ringtone in the middle of the Notting Hill Carnival, for example - but how our brains process this information (the so-called ‘cocktail party problem’) has been a longstanding research question in hearing science.
Researchers have previously investigated this using simple sounds such as two tones of different pitches, but now researchers at UCL and Newcastle University have used complicated sounds that are more representative of those we hear in real life. The team used ‘machine-like beeps’ that overlap in both frequency and time to recreate a busy sound environment and obtain new insights into how the brain solves this problem.
In the study, groups of volunteers were asked to identify target sounds from within this noisy background in a series of experiments.
Sundeep Teki, a PhD student from the Wellcome Trust Centre for Neuroimaging at UCL and joint first author of the study, said: “Participants were able to detect complex target sounds from the background noise, even when the target sounds were delivered at a faster rate or there was a loud disruptive noise between them.”
Dr Maria Chait, a senior lecturer at UCL Ear Institute and joint first author on the study, adds: “Previous models based on simple tones suggest that people differentiate sounds based on differences in frequency, or pitch. Our findings show that time is also an important factor, with sounds grouped as belonging to one object by virtue of being correlated in time.”
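The grouping-by-time idea Chait describes can be illustrated with a toy model (not the study's analysis): frequency channels whose amplitude envelopes are strongly correlated over time are assigned to the same auditory object, even if they are far apart in frequency.

```python
# Toy model of temporal-coherence grouping (illustrative, not the study's
# analysis): channels whose envelopes co-vary over time belong to one
# auditory object; an uncorrelated channel is treated as background.

from statistics import mean, stdev

def envelope_correlation(env_a, env_b):
    """Sample Pearson correlation between two amplitude envelopes."""
    ma, mb, sa, sb = mean(env_a), mean(env_b), stdev(env_a), stdev(env_b)
    n = len(env_a)
    return sum((a - ma) * (b - mb)
               for a, b in zip(env_a, env_b)) / ((n - 1) * sa * sb)

def same_object(env_a, env_b, threshold=0.8):
    """Group two channels into one object if their envelopes co-vary."""
    return envelope_correlation(env_a, env_b) >= threshold

target_low  = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]  # pulses in step
target_high = [0.1, 0.9, 0.1, 0.9, 0.1, 0.9]  # same rhythm, other frequency
background  = [1.0, 0.2, 0.7, 0.1, 0.9, 0.3]  # uncorrelated noise channel

print(same_object(target_low, target_high))   # True: correlated in time
print(same_object(target_low, background))    # False: not correlated
```

Frequency alone would not separate the target from the background here; it is the correlation in time that does the grouping, which is the study's key point.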
Professor Tim Griffiths, Professor of Cognitive Neurology at Newcastle University and lead researcher on the study, said: “Many hearing disorders are characterised by the loss of ability to detect speech in noisy environments. Disorders like this that are caused by problems with how the brain interprets sound information, rather than physical damage to the ear and hearing machinery, remain poorly understood.
"These findings inform us about a fundamental brain mechanism for detecting sound patterns and identify a process that can go wrong in hearing disorders. We now have an opportunity to create better tests for these types of hearing problems."
Whether you’re reading the paper or thinking through your schedule for the day, chances are that you’re hearing yourself speak even if you’re not saying words out loud. This internal speech — the monologue you “hear” inside your head — is a ubiquitous but largely unexamined phenomenon. A new study looks at a possible brain mechanism that could explain how we hear this inner voice in the absence of actual sound.
In two experiments, researcher Mark Scott of the University of British Columbia found evidence that a brain signal called corollary discharge — a signal that helps us distinguish the sensory experiences we produce ourselves from those produced by external stimuli — plays an important role in our experiences of internal speech.
The findings from the two experiments are published in Psychological Science, a journal of the Association for Psychological Science.
Corollary discharge is a kind of predictive signal generated by the brain that helps to explain, for example, why other people can tickle us but we can’t tickle ourselves. The signal predicts our own movements and effectively cancels out the tickle sensation.
And the same mechanism plays a role in how our auditory system processes speech. When we speak, an internal copy of the sound of our voice is generated in parallel with the external sound we hear.
“We spend a lot of time speaking and that can swamp our auditory system, making it difficult for us to hear other sounds when we are speaking,” Scott explains. “By attenuating the impact our own voice has on our hearing — using the ‘corollary discharge’ prediction — our hearing can remain sensitive to other sounds.”
Scott speculated that the internal copy of our voice produced by corollary discharge can be generated even when there isn’t any external sound, meaning that the sound we hear when we talk inside our heads is actually the internal prediction of the sound of our own voice.
If corollary discharge does in fact underlie our experiences of inner speech, he hypothesized, then the sensory information coming from the outside world should be cancelled out by the internal copy produced by our brains if the two sets of information match, just like when we try to tickle ourselves.
And this is precisely what the data showed. The impact of an external sound was significantly reduced when participants said a syllable in their heads that matched the external sound. Their performance was not significantly affected, however, when the syllable they said in their head didn’t match the one they heard.
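The matched-versus-mismatched pattern in the data can be captured by a toy model of corollary discharge (illustrative only, not the study's model): a matched internal prediction cancels much of the external input, while a mismatched one leaves it untouched. The gain and attenuation values are arbitrary.

```python
# Toy corollary-discharge model (illustrative, not the study's model): the
# response to an external syllable is attenuated when the internally
# "spoken" syllable matches it, and left at full strength otherwise.

def auditory_response(external_syllable, internal_syllable,
                      gain=1.0, attenuation=0.6):
    """Response strength to an external sound, given the syllable the
    listener is simultaneously saying in their head."""
    if internal_syllable == external_syllable:
        # Matched prediction cancels most of the input, as with tickling.
        return gain * (1.0 - attenuation)
    return gain  # Mismatch: no cancellation, full impact.

print(auditory_response("ba", "ba"))  # matched condition: reduced impact
print(auditory_response("ba", "da"))  # mismatched condition: full impact
```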
These findings provide evidence that internal speech makes use of a system that is primarily involved in processing external speech, and may help shed light on certain pathological conditions.
“This work is important because this theory of internal speech is closely related to theories of the auditory hallucinations associated with schizophrenia,” Scott concludes.
Scientists discover the origin of a giant synapse
Humans and most mammals can determine the spatial origin of sounds with remarkable acuity. We use this ability all the time—crossing the street; locating an invisible ringing cell phone in a cluttered bedroom. To accomplish this small daily miracle, the brain has developed a circuit that’s rapid enough to detect the tiny lag that occurs between the moment the auditory information reaches one of our ears, and the moment it reaches the other. The mastermind of this circuit is the “Calyx of Held,” the largest known synapse in the brain. EPFL scientists have revealed the role that a certain protein plays in initiating the growth of these giant synapses.
The discovery, published in Nature Neuroscience, could also help shed light on a number of neuropsychiatric disorders.
Enormous synapses enable faster communication
Ordinarily, neurons have thousands of contact points – known as synapses – with neighboring neurons. Within a given time frame, a neuron has to receive several signals from its neighbors in order to be able to fire its own signal in response. Because of this, information passes from neuron to neuron in a relatively random manner.
In the auditory part of the brain, this is not the case. Synapses often grow to extremely large sizes, and these behemoths are known as “Calyx of Held” synapses. Because they have hundreds of contact points, they are capable of transmitting a signal singlehandedly to a neighboring neuron. “It’s almost like peer-to-peer communication between neurons,” explains EPFL professor Ralf Schneggenburger, who led the study. The result is that information is processed extremely quickly, in a few fractions of a millisecond, instead of the slower pace of more than 10 milliseconds that occurs in most other neuronal circuits.
Identifying the protein
To isolate the protein responsible for controlling the growth of this gigantic synapse, the scientists had to perform painstaking research. Using methods for analyzing gene expression in mice, they identified several members of the “BMP” family of proteins from among more than 20,000 possible candidates.
To verify that they had truly identified the right protein, the researchers disabled BMP protein receptors in the auditory part of a mouse brain. “The resulting electrophysiological signal of the Calyx of Held was significantly altered,” explains Le Xiao, first author on the study. “This would suggest a large anatomical difference.”
The scientists then reconstructed the synapses in three dimensions from slices observed under an electron microscope. Instead of a single, massive Calyx of Held encompassing nearly half the neuron, the 3D image clearly showed several smaller synapses. “This shows that the process involving the BMP protein not only causes that one synapse to grow, but also performs a selection, by eliminating the others,” says Schneggenburger.
Synaptic connectivity, the key to many psychiatric puzzles
The impact of this study will go well beyond increasing our understanding of the auditory system. The results suggest that the BMP protein plays an important role in developing connectivity in the brain. Schneggenburger and his colleagues are currently investigating its role elsewhere in the brain. “Some neuropsychiatric disorders, such as schizophrenia and autism, are characterized by the abnormal development of synaptic connectivity in certain key parts of the brain,” explains Schneggenburger. By identifying and explaining the role of various proteins in this process, the scientists hope to be able to shed more light on these poorly understood disorders.

How do we hear? More specifically, how does the auditory center of the brain discern important sounds – such as communication from members of the same species – from relatively irrelevant background noise? The answer depends on the regulation of sound by specific neurons in the auditory cortex of the brain, but the precise mechanisms of those neurons have remained unclear. Now, a new study from the Perelman School of Medicine at the University of Pennsylvania has isolated how neurons in the rat’s primary auditory cortex (A1) preferentially respond to natural vocalizations from other rats over intentionally modified vocalizations (background sounds). A computational model developed by the study authors, which successfully predicted neuronal responses to other new sounds, explained the basis for this preference. The research is published in the Journal of Neurophysiology.
Rats communicate with each other mostly through ultrasonic vocalizations (USVs) beyond the range of human hearing. Although the existence of these USV conversations has been known for decades, “the acoustic richness of them has only been discovered in the last few years,” said senior study author Maria N. Geffen, PhD, assistant professor of Otorhinolaryngology: Head and Neck Surgery at Penn. That acoustical complexity raises questions as to how the animal brain recognizes and responds to the USVs. “We set out to characterize the responses of neurons to USVs and to come up with a model that would explain the mechanism that makes these neurons preferentially responsive to these relevant sounds.”
Geffen and her colleagues obtained recordings of USVs from two rats kept together in a cage, then played the recordings to a separate group of male rats while recording their neuronal responses. The researchers also used USV recordings modified in several ways – filtering out background sounds, playing them backwards, and playing them at different speeds – to mimic unimportant background noise. “We found that neurons in the auditory cortex respond strongly and selectively to the original ultrasonic vocalizations and not the transformed versions we created,” says Geffen.
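Transformations like these are straightforward to produce digitally. The sketch below reverses a waveform and changes its playback speed by resampling; the sampling rate, call frequency, and speed factor are hypothetical stand-ins, not the study's actual parameters.

```python
import numpy as np

fs = 250_000  # sampling rate high enough for ultrasonic calls (hypothetical)
t = np.arange(0, 0.05, 1 / fs)
usv = np.sin(2 * np.pi * 60_000 * t)  # stand-in for a recorded 60 kHz call

# Played backwards: simply reverse the samples.
reversed_usv = usv[::-1]

# Different speed: resample onto a stretched/compressed time base.
speed = 2.0                         # 2x faster (hypothetical factor)
n_out = int(len(usv) / speed)
fast_usv = np.interp(np.linspace(0, len(usv) - 1, n_out),
                     np.arange(len(usv)), usv)
```

Reversal preserves a call's overall spectral content while destroying its temporal structure, which is why it is a useful control for temporal selectivity.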
Using the data collected on the responses of A1 neurons to various USVs, the researchers developed a computational model that could predict the activity of an individual neuron based on the pitch and duration of the USV. Geffen observes that “the details of their responses could be predicted with high accuracy.” It was possible to determine which aspects of the acoustic input best drove individual neurons. Remarkably, it turned out that the acoustic parameters that worked best in driving the neuronal responses corresponded to the statistics of the natural vocalizations rats produce.
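The study's actual model is not reproduced here, but the idea of predicting a neuron's response from a call's pitch and duration can be sketched with a minimal Gaussian tuning curve. The preferred values, bandwidths, and peak rate below are all invented for illustration.

```python
import numpy as np

# Hypothetical tuning parameters for one A1 neuron (illustrative only):
pref_pitch_khz, pitch_bw = 55.0, 8.0   # preferred pitch and bandwidth
pref_dur_ms, dur_bw = 40.0, 15.0       # preferred duration and bandwidth

def predicted_rate(pitch_khz, dur_ms, max_rate=30.0):
    """Predicted firing rate (spikes/s) for a call of given pitch and duration.

    Response falls off as the call deviates from the neuron's preferred
    pitch and duration -- a toy stand-in for the paper's fitted model.
    """
    gain = np.exp(-0.5 * ((pitch_khz - pref_pitch_khz) / pitch_bw) ** 2
                  - 0.5 * ((dur_ms - pref_dur_ms) / dur_bw) ** 2)
    return max_rate * gain

print(predicted_rate(55.0, 40.0))  # strongest response at the preferred call
print(predicted_rate(20.0, 40.0))  # much weaker response far off the preferred pitch
```

Under such a model, a population of neurons whose preferred pitches and durations match the statistics of natural USVs would respond more strongly to real calls than to their transformed versions, consistent with the finding described above.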
The work makes clear for the first time, says Geffen, “the mechanisms of how the auditory system picks out behaviorally relevant sounds, such as same species communication signals, and processes them more effectively than less relevant sounds. This information is fundamental in understanding how sound perception helps animals survive. We conclude that neurons in the auditory cortex are specialized for processing and efficiently responding to natural and behaviorally relevant sounds.”
(Image: National Institute on Deafness and Other Communication)