Posts tagged auditory cortex

You Took the Words Right Out of My Brain
Our brain activity is more similar to that of speakers we are listening to when we can predict what they are going to say, a team of neuroscientists has found. The study, which appears in the Journal of Neuroscience, provides fresh evidence on the brain’s role in communication.
“Our findings show that the brains of both speakers and listeners take language predictability into account, resulting in more similar brain activity patterns between the two,” says Suzanne Dikker, the study’s lead author and a post-doctoral researcher at New York University’s Department of Psychology and at Utrecht University. “Crucially, this happens even before a sentence is spoken and heard.”
“A lot of what we’ve learned about language and the brain has been from controlled laboratory tests that tend to look at language in the abstract—you get a string of words or you hear one word at a time,” adds Jason Zevin, an associate professor of psychology and linguistics at the University of Southern California and one of the study’s co-authors. “They’re not so much about communication, but about the structure of language. The current experiment is really about how we use language to express common ground or share our understanding of an event with someone else.”
The study’s other authors were Lauren Silbert, a recent PhD graduate from Princeton University, and Uri Hasson, an assistant professor in Princeton’s Department of Psychology.
Traditionally, it was thought that our brains always process the world around us from the “bottom up”—when we hear someone speak, our auditory cortex first processes the sounds, and then other areas in the brain put those sounds together into words and then sentences and larger discourse units. From here, we derive meaning and an understanding of the content of what is said to us.
However, in recent years, many neuroscientists have shifted to a “top-down” view of the brain, which they now see as a “prediction machine”: We are constantly anticipating events in the world around us so that we can respond to them quickly and accurately. We can, for example, predict upcoming words and sounds from context, and our brain takes advantage of this: when we hear “Grass is…” we can easily predict “green.”
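This kind of context-based prediction can be mimicked, very crudely, in a few lines of code. The toy corpus and the counting scheme below are invented for illustration; actual neural prediction is of course far richer than word counts:

```python
from collections import Counter, defaultdict

# Tiny invented corpus -- just enough to make "green" the most likely
# continuation of "grass is", mirroring the example in the text.
corpus = ("grass is green . grass is green . grass is wet . "
          "the sky is blue . the sea is blue .").split()

# Count trigram continuations: a two-word context maps to a tally of
# the words that followed it.
following = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    following[(a, b)][c] += 1

# The "prediction" is available before the next word is ever heard.
print(following[("grass", "is")].most_common())  # [('green', 2), ('wet', 1)]
```

Given the context “grass is”, the model ranks “green” highest, just as a listener’s brain is thought to pre-activate the expected word.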
What’s less understood is how this predictability might affect the speaker’s brain, or even the interaction between speakers and listeners.
In the Journal of Neuroscience study, the researchers collected brain responses from a speaker while she described images that she had viewed. The images varied in how predictable a description of them would be. For instance, one image showed a penguin hugging a star (an image for which a speaker’s description is easy to predict). Another image, however, depicted a guitar stirring a bicycle tire submerged in a boiling pot of water—a picture much less likely to yield a predictable description: Is it “a guitar cooking a tire,” “a guitar boiling a wheel,” or “a guitar stirring a bike”?
Then, another group of subjects listened to those descriptions while viewing the same images. During this period, the researchers monitored the subjects’ brain activity.
When comparing the speaker’s brain responses directly to the listeners’ brain responses, they found that activity patterns in brain areas where spoken words are processed were more similar between the listeners and the speaker when the listeners could predict what the speaker was going to say.
When listeners can predict what a speaker is going to say, the authors suggest, their brains take advantage of this by sending a signal to their auditory cortex that it can expect sound patterns corresponding to predicted words (e.g., “green” while hearing “grass is…”). Interestingly, they add, the speaker’s brain is showing a similar effect as she is planning what she will say: brain activity in her auditory language areas is affected by how predictable her utterance will be for her listeners.
“In addition to facilitating rapid and accurate processing of the world around us, the predictive power of our brains might play an important role in human communication,” notes Dikker, who conducted some of the research as a post-doctoral fellow at Weill Cornell Medical College’s Sackler Institute for Developmental Psychobiology. “During conversation, we adapt our speech rate and word choices to each other—for example, when explaining science to a child as opposed to a fellow scientist—and these processes are governed by our brains, which correspondingly align to each other.”
Brain Anatomy Differences Between Deaf, Hearing Depend on First Language Learned
In the first known study of its kind, researchers have shown that the language we learn as children affects brain structure, as does hearing status. The findings are reported in The Journal of Neuroscience.
While research has shown that deaf and hearing people differ in brain anatomy, those studies have been limited to deaf individuals who use American Sign Language (ASL) from birth. But 95 percent of the deaf population in America are born to hearing parents and use English or another spoken language as their first language, usually acquired through lip-reading. Since language and audition are housed in nearby locations in the brain, understanding which differences are attributable to hearing and which to language is critical to understanding the mechanisms by which experience shapes the brain.
“What we’ve learned to date about differences in brain anatomy in hearing and deaf populations hasn’t taken into account the diverse language experiences among people who are deaf,” says senior author Guinevere Eden, DPhil, director for the Center for the Study of Learning at Georgetown University Medical Center (GUMC).
Eden and her colleagues report on a new structural brain imaging study showing that, in addition to deafness, early language experience – English versus ASL – impacts brain structure. Half of the hearing and half of the deaf adult participants in the study had learned ASL as children from their deaf parents, while the other half had grown up using English with their hearing parents.
“We found that our deaf and hearing participants, irrespective of language experience, differed in the volume of brain white matter in their auditory cortex. But, we also found differences in left hemisphere language areas, and these differences were specific to those whose native language was ASL,” Eden explains.
The research team, which includes Daniel S. Koo, PhD, and Carol J. LaSasso, PhD, of Gallaudet University in Washington, says the findings should inform studies of brain differences in deaf and hearing people going forward.
“Prior research studies comparing brain structure in individuals who are deaf and hearing attempted to control for language experience by only focusing on those who grew up using sign language,” explains Olumide Olulade, PhD, the study’s lead author and post-doctoral fellow at GUMC. “However, restricting the investigation to a small minority of the deaf population means the results can’t be applied to all deaf people.”
(Image: iStockphoto)

Image caption: When adult mice were kept in the dark for about a week, neural networks in the auditory cortex, where sound is processed, strengthened their connections from the thalamus, the brain’s switchboard for sensory information. As a result, the mice developed sharper hearing. This enhanced image shows fibers (green) that link the thalamus to neurons (red) in the auditory cortex. Cell nuclei are blue. Image by Emily Petrus and Amal Isaiah
A Short Stay in Darkness May Heal Hearing Woes
Call it the Ray Charles Effect: a young child who is blind develops a keen ability to hear things others cannot. Researchers have known this can happen in the brains of the very young, which are malleable enough to re-wire some circuits that process sensory information. Now researchers at the University of Maryland and Johns Hopkins University have overturned conventional wisdom, showing the brains of adult mice can also be re-wired, compensating for a temporary vision loss by improving their hearing.
The findings, published Feb. 5 in the peer-reviewed journal Neuron, may lead to treatments for people with hearing loss or tinnitus, said Patrick Kanold, an associate professor of biology at UMD who partnered with Hey-Kyoung Lee, an associate professor of neuroscience at JHU, to lead the study.
"There is some level of interconnectedness of the senses in the brain that we are revealing here," Kanold said.
"We can perhaps use this to benefit our efforts to recover a lost sense," said Lee. "By temporarily preventing vision, we may be able to engage the adult brain to change the circuit to better process sound."
Kanold explained that there is an early “critical period” for hearing, similar to the better-known critical period for vision. The auditory system in the brain of a very young child quickly learns its way around its sound environment, becoming most sensitive to the sounds it encounters most often. But once that critical period has passed, the auditory system no longer reorganizes in response to changes in the individual’s soundscape.
"This is why we can’t hear certain tones in Chinese if we didn’t learn Chinese as children," Kanold said. "This is also why children get screened for hearing deficits and visual deficits early. You cannot fix it after the critical period."
Kanold, an expert on how the brain processes sound, and Lee, an expert on the same processes in vision, thought the adult brain might be flexible if it were forced to work across the senses rather than within one sense. They used a simple, reversible technique to simulate blindness: they placed adult mice with normal vision and hearing in complete darkness for six to eight days.
After the adult mice were returned to a normal light-dark cycle, their vision was unchanged. But they heard much better than before.
The researchers played a series of one-note tones and tested the responses of individual neurons in the auditory cortex, a part of the brain devoted exclusively to hearing. Specifically, they tested neurons in a middle layer of the auditory cortex that receives signals from the thalamus, a part of the brain that acts as a switchboard for sensory information. The neurons in this layer of the auditory cortex, called the thalamocortical recipient layer, were generally not thought to be malleable in adults.
But the team found that for the mice that experienced simulated blindness these neurons did, in fact, change. In the mice placed in darkness, the tested neurons fired faster and more powerfully when the tones were played, were more sensitive to quiet sounds, and could discriminate sounds better. These mice also developed more synapses, or neural connections, between the thalamus and the auditory cortex.
The fact that the changes occurred in the cortex, an advanced sensory processing center structured about the same way in most mammals, suggests that flexibility across the senses is a fundamental trait of mammals’ brains, Kanold said.
"This makes me hopeful that we would see it in higher animals too," including humans, he said. "We don’t know how many days a human would have to be in the dark to get this effect, and whether they would be willing to do that. But there might be a way to use multi-sensory training to correct some sensory processing problems in humans."
The mice that experienced simulated blindness eventually reverted to normal hearing after a few weeks in a normal light-dark cycle. In the next phase of their five-year study, Kanold and Lee plan to look for ways to make the sensory improvements permanent, and to look beyond individual neurons to study broader changes in the way the brain processes sounds.
Finnish and Danish researchers have developed a new method that performs decoding, or brain-reading, during continuous listening to real music. Based on recorded brain responses, the method predicts how certain features related to tone color and rhythm of the music change over time, and recognizes which piece of music is being listened to. The method also allows pinpointing the areas in the brain that are most crucial for the processing of music. The study was published in the journal NeuroImage.

Using functional magnetic resonance imaging (fMRI), the research team at the Finnish Centre of Excellence in Interdisciplinary Music Research at the Universities of Jyväskylä and Helsinki, and the Center for Functionally Integrative Neuroscience at Aarhus University, Denmark, recorded the brain responses of participants while they were listening to a 16-minute excerpt of the album Abbey Road by the Beatles. Following this, they used computational algorithms to extract a collection of musical features from the recording. Subsequently, they employed machine-learning methods to train a computer model that predicts how the features of the music change over time. Finally, they developed a classifier that predicted which part of the music the participant was listening to at each point in time.
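A rough sketch of such a decoding pipeline is shown below, with simulated data standing in for real fMRI recordings. The dimensions, the ridge-regression decoder, and the correlation-based matching step are illustrative assumptions, not the authors’ actual methods:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Simulated stand-in for real data: 200 fMRI time points, 50 voxels,
# and 3 musical features (e.g. brightness, pulse clarity, key clarity).
n_trs, n_voxels, n_feats = 200, 50, 3
features = rng.standard_normal((n_trs, n_feats))     # features over time
mixing = rng.standard_normal((n_feats, n_voxels))    # unknown voxel weights
bold = features @ mixing + 0.5 * rng.standard_normal((n_trs, n_voxels))

# Train a decoder on the first half of the "listening session",
# then predict the musical features of the unseen second half.
train, test = slice(0, 100), slice(100, 200)
decoder = Ridge(alpha=1.0).fit(bold[train], features[train])
predicted = decoder.predict(bold[test])

# How well is each feature tracked? (correlation of true vs. decoded)
for i in range(n_feats):
    r = np.corrcoef(features[test, i], predicted[:, i])[0, 1]
    print(f"feature {i}: r = {r:.2f}")

# "Which part of the piece?": match each decoded time point to the most
# similar true feature vector; chance level is 1/100 here.
dists = ((predicted[:, None, :] - features[test][None, :, :]) ** 2).sum(-1)
accuracy = float((dists.argmin(axis=1) == np.arange(100)).mean())
print(f"time-point identification accuracy: {accuracy:.2f} (chance 0.01)")
```

In the real study the features were extracted from the audio itself, and the classifier had to cope with far noisier, participant-specific responses, which is one reason prediction accuracy varied between listeners.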
The researchers found that most of the musical features included in the study could be reliably predicted from the brain data. They also found that the piece being listened to could be identified significantly better than chance. However, fairly large differences in prediction accuracy were found between participants. An interesting finding was that areas outside of the auditory cortex, including motor, limbic, and frontal areas, had to be included in the models to obtain reliable predictions, thus providing evidence for the important role of these areas in the processing of musical features.
"We believe that decoding provides a method that complements other existing methods to obtain more reliable information about the complex processing of music in the brain", says Professor Petri Toiviainen from the University of Jyväskylä. "Our results provide additional evidence for the important involvement of emotional and motor areas in music processing."
(Source: jyu.fi)
Just a few years of early musical training benefits the brain later in life
Older adults who took music lessons as children but haven’t actively played an instrument in decades have a faster brain response to a speech sound than do individuals who never played an instrument, according to a study appearing November 6 in the Journal of Neuroscience. The finding suggests early musical training has a lasting, positive effect on how the brain processes sound.
As people grow older, they often experience changes in the brain that compromise hearing. For instance, the brains of older adults show a slower response to the fast-changing sounds that are important for interpreting speech. However, previous studies show such age-related declines are not inevitable: recent studies of musicians suggest lifelong musical training may offset these and other cognitive declines.
In the current study, Nina Kraus, PhD, and others at Northwestern University explored whether limited musical training early in life is associated with changes in the way the brain responds to sound decades later. They found that the more years study participants spent playing instruments as youth, the faster their brains responded to a speech sound.
"This study suggests the importance of music education for children today and for healthy aging decades from now," Kraus said. "The fact that musical training in childhood affected the timing of the response to speech in older adults in our study is especially telling because neural timing is the first to go in the aging adult," she added.
For the study, 44 healthy adults, ages 55-76, listened to a synthesized speech syllable (“da”) while researchers measured electrical activity in the auditory brainstem. This region of the brain processes sound and is a hub for cognitive, sensory, and reward information. The researchers discovered that, despite none of the study participants having played an instrument in nearly 40 years, the participants who completed 4-14 years of music training early in life had the fastest response to the speech sound (on the order of a millisecond faster than those without music training).
"Being a millisecond faster may not seem like much, but the brain is very sensitive to timing and a millisecond compounded over millions of neurons can make a real difference in the lives of older adults," explained Michael Kilgard, PhD, who studies how the brain processes sound at the University of Texas at Dallas and was not involved in this study. "These findings confirm that the investments that we make in our brains early in life continue to pay dividends years later," he added.
(Image: Shutterstock)
A Trace of Memory: Researchers Watch Neurons in the Brain During Learning and Memory Recall
A team of neurobiologists led by Simon Rumpel at the Research Institute of Molecular Pathology (IMP) in Vienna succeeded in tracking single neurons in the brains of mice over extended periods of time. Advanced imaging techniques allowed them to observe structural changes during memory formation and recall. The results of their observations are published this week in PNAS Early Edition.
Most of our behavior – and thus our personality – is shaped by previous experience. To store the memory of these experiences and to be able to retrieve the information at will is therefore considered one of the most basic and important functions of the brain. The current model in neuroscience posits that memory is stored as long-lasting anatomical changes in synapses, the specialized structures by which nerve cells connect and signal to each other.
At the Research Institute of Molecular Pathology (IMP) in Vienna, Simon Rumpel and Kaja Moczulska used mice to study the effects of learning and memorizing on the architecture of synapses. They employed an advanced microscopic technique called in vivo two-photon imaging that allows the analysis of structures as small as a thousandth of a millimetre in the living brain.
Using this technology, the neurobiologists tracked individual neurons over the course of several weeks and analysed them repeatedly. They focussed their attention on dendritic spines that decorate the neuronal processes and correspond to excitatory synapses. The analyses were combined with behavioral experiments in which the animals underwent classical auditory conditioning. The results showed that the learning experience triggered the formation of new synaptic connections in the auditory cortex. Several of these new structures persisted over time, suggesting a long-lasting trace of memory and confirming an important prediction of the current model.
Apart from the changes during memory formation, the IMP-scientists were interested in the act of remembering. Earlier studies had shown that memory recall is associated with molecular processes similar to the initial formation of memory. These similarities have been suggested to reflect remodelling of memory traces during recall.
To test this hypothesis, the researchers exposed previously trained mice to the auditory cue a week after conditioning while tracking dendritic spines in the auditory cortex. The results showed that although some molecular processes indeed resembled those during memory formation, the anatomical structure of the synapses did not change. These findings suggest that memory retrieval does not lead to a modification of the memory trace per se. Instead, the molecular processes triggered by memory formation and recall could reflect the stabilization of previously altered or recently retrieved synaptic connections.
The primary goal of elucidating the processes during memory formation and recall is to increase our basic knowledge. Insights gained from these studies might however help us to understand diseases of the nervous system that affect memory. They may also, in the future, provide the basis for treatments that offer relief to traumatized patients.
How and when the auditory system registers complex auditory-visual synchrony
Imagine the brain’s delight when experiencing the sounds of Beethoven’s “Moonlight Sonata” while simultaneously taking in a light show produced by a visualizer.
A new Northwestern University study did much more than that.
To understand how the brain responds to highly complex auditory-visual stimuli like music and moving images, the study tracked parts of the auditory system involved in the perceptual processing of “Moonlight Sonata” while it was synchronized with the light show made by the iTunes Jelly visualizer.
The study shows how and when the auditory system encodes auditory-visual synchrony between complex and changing sounds and images.
Much of the related research looks at how the brain processes simple sounds and images. Locating a woodpecker in a tree, for example, is made easier when your brain combines the auditory (pecking) and visual (movement of the bird) streams and judges that they are synchronous. If they are, the brain decides that the two sensory inputs probably came from a single source.
While that research is important, Julia Mossbridge, lead author of the study and research associate in psychology at Northwestern, said it also is critical to expand investigations to highly complex stimuli like music and movies.
“These kinds of things are closer to what the brain actually has to manage to process in every moment of the day,” she said. “Further, it’s important to determine how and when sensory systems choose to combine stimuli across their boundaries.
“If someone’s brain is mis-wired, sensory information could combine when it’s not appropriate,” she said. “For example, when that person is listening to a teacher talk while looking out a window at kids playing, and the auditory and visual streams are integrated instead of separated, this could result in confusion and misunderstanding about which sensory inputs go with what experience.”
It was already known that the left auditory cortex is specialized to process sounds with precise, complex and rapid timing; this gift for auditory timing may be one reason that in most people, the left auditory cortex is used to process speech, for which timing is critical. The results of this study show that this specialization for timing applies not just to sounds, but to the timing of complex and dynamic sounds and images.
Previous research indicates that there are multi-sensory areas in the brain that link sounds and images when they change in similar ways, but much of this research is focused particularly on speech signals (e.g., lips moving as vowels and consonants are heard). Consequently, it hasn’t been clear what areas of the brain process more general auditory-visual synchrony or how this processing differs when sounds and images should not be combined.
“It appears that the brain is exploiting the left auditory cortex’s gift at processing auditory timing, and is using similar mechanisms to encode auditory-visual synchrony, but only in certain situations; seemingly only when combining the sounds and images is appropriate,” Mossbridge said.
Several studies have shown that expecting a reward or punishment can affect brain activity in areas responsible for processing different senses, including sight or touch. For example, research shows that these brain regions light up on brain scans when humans are expecting a treat. However, researchers know less about what happens when the reward is actually received—or an expected reward is denied. Insight on these scenarios can help researchers better understand how we learn in general.

To get a better grasp on how the brain behaves when people who are expecting a reward actually receive it, or conversely, are denied it, Tina Weis of Carl-von-Ossietzky University and her colleagues monitored the auditory cortex—the part of the brain that processes and interprets sounds—while volunteers solved a task in which they had a chance of winning 50 Euro cents on each round, signaled by a specific sound. Their findings show that auditory cortex activity picked up both when participants were expecting a reward and received it, and when their expectation of receiving no reward was correct.
The article is entitled “Feedback that Confirms Reward Expectation Triggers Auditory Cortex Activity.” It appears in the Articles in Press section of the Journal of Neurophysiology, published by the American Physiological Society.
Methodology
The researchers worked with 105 healthy adult volunteers with normal hearing. While each volunteer received a functional MRI (fMRI)—a brain scan that measures brain activity during tasks—the researchers had them solve a task with sounds in which they had the chance of winning money at the end of each round. At the beginning of a round, participants heard a sound and had to learn whether it signaled that they could win a 50 Euro cents reward or not. They then saw a number on a screen and had to press a button to indicate whether the number was greater or smaller than 5. If the preceding sound had indicated a possible reward and they solved the number task quickly and correctly, an image of a 50 Euro cents coin appeared on the screen. The researchers monitored brain activity in the subjects’ auditory cortex throughout the task, paying special attention to what happened when they received the reward, or not, at the end of the round.
Results
The study authors found that when the volunteers were expecting and finally received a reward, their auditory cortex was activated. Similarly, there was an increase in brain activity in this area when the subjects weren’t expecting a reward and didn’t get one. There was no additional activity when they were expecting a reward and didn’t get one.
Importance of the Findings
These findings add to accumulating evidence that the auditory cortex performs a role beyond just processing sound. Rather, this area of the brain appears to be activated during other activities that require learning and thought, such as confirming expectations of receiving a reward.
"Our findings thus support the view of a highly cognitive role of the auditory cortex," the study authors say.
(Source: eurekalert.org)
University of Utah Engineers Show Brain Depends on Vision to Hear
University of Utah bioengineers discovered our understanding of language may depend more heavily on vision than previously thought: under the right conditions, what you see can override what you hear. These findings suggest artificial hearing devices and speech-recognition software could benefit from a camera, not just a microphone.
“For the first time, we were able to link the auditory signal in the brain to what a person said they heard when what they actually heard was something different. We found vision is influencing the hearing part of the brain to change your perception of reality – and you can’t turn off the illusion,” says the new study’s first author, Elliot Smith, a bioengineering and neuroscience graduate student at the University of Utah. “People think there is this tight coupling between physical phenomena in the world around us and what we experience subjectively, and that is not the case.”
The brain considers both sight and sound when processing speech. However, if the two differ slightly, visual cues dominate sound. This phenomenon is named the McGurk effect for Scottish cognitive psychologist Harry McGurk, who pioneered studies on the link between hearing and vision in speech perception in the 1970s. The McGurk effect has been observed for decades, but its origin in the brain has remained elusive.
In the new study, which appears today in the journal PLOS ONE, the University of Utah team pinpointed the source of the McGurk effect by recording and analyzing brain signals in the temporal cortex, the region of the brain that typically processes sound.
Working with University of Utah bioengineer Bradley Greger and neurosurgeon Paul House, Smith recorded electrical signals from the brain surfaces of four severely epileptic adults (two male, two female) from Utah and Idaho, all volunteers who were already undergoing surgery to treat their epilepsy. House placed three button-sized electrodes on the left, right or both brain hemispheres of each test subject, depending on where each patient’s seizures were thought to originate.
These four test subjects were then asked to watch and listen to videos focused on a person’s mouth as it said the syllables “ba,” “va,” “ga” and “tha.” Depending on which of three different videos was being watched, the patients had one of three possible experiences as they watched the syllables being mouthed:
— The motion of the mouth matched the sound. For example, the video showed “ba” and the audio sound also was “ba,” so the patients saw and heard “ba.”
— The motion of the mouth obviously did not match the corresponding sound, like a badly dubbed movie. For example, the video showed “ga” but the audio was “tha,” so the patients perceived this disconnect and correctly heard “tha.”
— The motion of the mouth only was mismatched slightly with the corresponding sound. For example, the video showed “ba” but the audio was “va,” and patients heard “ba” even though the sound really was “va.” This demonstrates the McGurk effect – vision overriding hearing.
By measuring the electrical signals in the brain while each video was being watched, Smith and Greger could pinpoint whether auditory or visual brain signals were being used to identify the syllable in each video. When the syllable being mouthed matched the sound or didn’t match at all, brain activity corresponded to the sound being heard. However, when the McGurk effect video was viewed, the activity pattern changed to resemble what the person saw, not what they heard. Statistical analyses confirmed the effect in all test subjects.
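The logic of that comparison can be illustrated with a toy template-matching classifier. Everything below is simulated (hypothetical response “signatures” and noise levels); it only shows the general idea of deciding which syllable a measured signal most resembles:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical average response signatures for two heard syllables.
t = np.linspace(0, 1, 200)
template_ba = np.sin(2 * np.pi * 5 * t)   # stand-in "ba" signature
template_va = np.sin(2 * np.pi * 8 * t)   # stand-in "va" signature

# A simulated McGurk trial: the audio is "va", but the measured response
# pattern resembles the *seen* syllable "ba", plus measurement noise.
trial = template_ba + 0.8 * rng.standard_normal(t.size)

# Classify the trial by correlating it with each template.
corr_ba = np.corrcoef(trial, template_ba)[0, 1]
corr_va = np.corrcoef(trial, template_va)[0, 1]
perceived = "ba" if corr_ba > corr_va else "va"
print(f"classified as '{perceived}' (r_ba={corr_ba:.2f}, r_va={corr_va:.2f})")
```

Here the classifier sides with the visual syllable even though the audio was “va”, mirroring how the recorded activity tracked what the patients saw rather than what they heard.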
“We’ve shown neural signals in the brain that should be driven by sound are being overridden by visual cues that say, ‘Hear this!’” says Greger. “Your brain is essentially ignoring the physics of sound in the ear and following what’s happening through your vision.”
Greger was senior author of the study as an assistant professor of bioengineering at the University of Utah. He recently took a faculty position at Arizona State University.
The new findings could help researchers understand what drives language processing in humans, especially in a developing infant brain trying to connect sounds and lip movement to learn language. These findings also may help researchers sort out how language processing goes wrong when visual and auditory inputs are not integrated correctly, such as in dyslexia, Greger says.

This is your brain on Vivaldi and Beatles
Listening to music activates large networks in the brain, but different kinds of music are processed differently. A team of researchers from Finland, Denmark and the UK has developed a new method for studying music processing in the brain during a realistic listening situation. Using a combination of brain imaging and computer modeling, they found areas in the auditory, motor, and limbic regions to be activated during free listening to music. They were furthermore able to pinpoint differences in the processing of vocal versus instrumental music. The new method helps us better understand the complex dynamics of brain networks and the processing of lyrics in music. The study was published in the journal NeuroImage.
Using functional magnetic resonance imaging (fMRI), the research team, led by Dr. Vinoo Alluri from the University of Jyväskylä, Finland, recorded the brain responses of individuals while they were listening to music from different genres, including pieces by Antonio Vivaldi, Miles Davis, Booker T. & the M.G.’s, The Shadows, Astor Piazzolla, and The Beatles. Following this, they analyzed the musical content of the pieces using sophisticated computer algorithms to extract musical features related to timbre, rhythm and tonality. Using a novel cross-validation method, they subsequently located activated brain areas that were common across the different musical stimuli.
The study revealed that several brain areas belonging to the auditory, limbic, and motor regions were activated by all musical pieces. Notably, areas in the medial orbitofrontal region and the anterior cingulate cortex, which are relevant for self-referential appraisal and aesthetic judgments, were activated during listening. A further interesting finding was that vocal and instrumental music were processed differently. In particular, the presence of lyrics was found to shift the processing of musical features towards the right auditory cortex, suggesting a left-hemispheric dominance in the processing of the lyrics. This result is in line with previous research, but has now for the first time been observed during continuous listening to music.
"The new method provides a powerful means to predict brain responses to music, speech, and soundscapes across a variety of contexts", says Dr. Vinoo Alluri.