Neuroscience

Articles and news from the latest research reports.

Posts tagged auditory cortex

80 notes

Zooming in for a safe flight

Bats do not use sight to navigate when flying. Instead, they emit ultrasound pulses and measure the echoes reflected from their surroundings. They have an extremely flexible internal navigation system that enables them to do this. A new study published in Nature Communications shows that when a bat flies close to an object, the number of active neurons in the part of a bat’s brain responsible for processing acoustic information about spatial positioning increases. This information helps these masters of flight to react rapidly and avoid obstacles.

As nocturnal animals, bats are perfectly adapted to a life without light. They emit echolocation sounds and use the delay between the reflected echoes to measure distance to obstacles or prey. In their brains, they have a spatial map representing different echo delays. A study carried out by researchers at Technische Universität München (TUM) has shown for the first time that this map dynamically adapts to external factors.
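The echo-delay ranging described above is simple to sketch. The following toy calculation (not from the study; the speed of sound is an assumed constant for illustration) shows how a round-trip echo delay maps to distance:

```python
# Sketch: estimating distance from echo delay, as in bat echolocation.
# The speed of sound (~343 m/s in air at 20 °C) is an assumed constant.

SPEED_OF_SOUND = 343.0  # m/s

def distance_from_echo_delay(delay_s: float) -> float:
    """Distance to a reflector given the round-trip echo delay in seconds.

    The pulse travels out and back, so the one-way distance is half
    the total path covered during the delay.
    """
    return SPEED_OF_SOUND * delay_s / 2.0

# A 10 ms echo delay corresponds to a reflector roughly 1.7 m away.
print(distance_from_echo_delay(0.010))
```

Each millisecond of delay thus corresponds to about 17 cm of distance, which is why a delay map in the brain doubles as a distance map.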

Closer objects appear larger

When a bat flies in too close to an object, the number of activated neurons in its brain increases. As a result, the object appears disproportionately larger on the bat’s brain map than objects at a safe distance, as if it were magnified. “The map is similar to the navigation systems used in cars in that it shows bats the terrain in which they are moving,” explains study director Dr. Uwe Firzlaff at the TUM Chair of Zoology. “The major difference, however, is that the bats’ inbuilt system warns them of an impending collision by enhancing neuronal signals for objects that are in close proximity.”

Bats constantly adapt their flight maneuvers to their surroundings to avoid collisions with buildings, trees or other animals. The ability to determine lateral distance to other objects also plays a key role here, which is why bats process more spatial information than just echo delays. “Bats evaluate their own motion and map it against the lateral distance to objects,” elaborates the researcher.

Brain processes complex spatial information

In addition to the echo reflection time, bats process the reflection angle of echoes. They also compare the sound volume of their calls with those of the reflected sound waves and measure the wave spectrum of the echo. “Our research has led us to conclude that bats display much more spatial information on their acoustic maps than just echo reflection.”

The results show that the nerve cells support the bats’ rapid responses to external stimuli by enlarging the active area in the brain that displays important information. “We may have just uncovered one of the fundamental mechanisms that enables vertebrates to adapt flexibly to continuously changing environments,” concludes Firzlaff.

Filed under bats auditory cortex echolocation echo-acoustic flow biosonar neuroscience science

152 notes

Stop and Listen: Study Shows How Movement Affects Hearing

When we want to listen carefully to someone, the first thing we do is stop talking. The second thing we do is stop moving altogether. This strategy helps us hear better by preventing unwanted sounds generated by our own movements.

This interplay between movement and hearing also has a counterpart deep in the brain. Indeed, indirect evidence has long suggested that the brain’s motor cortex, which controls movement, somehow influences the auditory cortex, which gives rise to our conscious perception of sound.

A new Duke study, appearing online August 27 in Nature, combines cutting-edge methods in electrophysiology, optogenetics and behavioral analysis to reveal exactly how the motor cortex, seemingly in anticipation of movement, can tweak the volume control in the auditory cortex.

The new lab methods allowed the group to “get beyond a century’s worth of very powerful but largely correlative observations, and develop a new, and really a harder, causality-driven view of how the brain works,” said the study’s senior author Richard Mooney Ph.D., a professor of neurobiology at Duke University School of Medicine, and a member of the Duke Institute for Brain Sciences.

The findings contribute to the basic knowledge of how communication between the brain’s motor and auditory cortexes might affect hearing during speech or musical performance. Disruptions to the same circuitry may give rise to auditory hallucinations in people with schizophrenia.

In 2013, researchers led by Mooney first characterized the connections between motor and auditory areas in mouse brain slices as well as in anesthetized mice. The new study answers the critical question of how those connections operate in an awake, moving mouse.

"This is a major step forward in that we’ve now interrogated the system in an animal that’s freely behaving," said David Schneider, a postdoctoral associate in Mooney’s lab.

Mooney suspects that the motor cortex learns how to mute responses in the auditory cortex to sounds that are expected to arise from one’s own movements while heightening sensitivity to other, unexpected sounds. The group is testing this idea.

"Our first step will be to start making more realistic situations where the animal needs to ignore the sounds that its movements are making in order to detect things that are happening in the world," Schneider said.

In the latest study, the team recorded the electrical activity of individual neurons in the brain’s auditory cortex. Whenever the mice moved — walking, grooming, or making high-pitched squeaks — the responses of neurons in their auditory cortex to tones played to the animals were dampened, compared to when the mice were at rest.
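One common way to quantify this kind of dampening is a modulation index comparing firing rates across the two conditions. This is a hypothetical sketch (the index and the firing rates are illustrative, not reported values from the study):

```python
# Hypothetical sketch: quantifying movement-related suppression of
# tone-evoked responses with a simple modulation index. The firing
# rates below are invented for illustration, not data from the study.

def modulation_index(rate_moving: float, rate_rest: float) -> float:
    """(moving - rest) / (moving + rest); negative values mean suppression."""
    total = rate_moving + rate_rest
    return (rate_moving - rate_rest) / total if total else 0.0

# Example: a neuron firing 12 spikes/s to a tone at rest but only
# 4 spikes/s to the same tone during movement is suppressed (index -0.5).
print(modulation_index(rate_moving=4.0, rate_rest=12.0))
```

An index near -1 means the tone response nearly disappears during movement; 0 means no change.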

To find out whether movement was directly influencing the auditory cortex, researchers conducted a series of experiments in awake animals using optogenetics, a powerful method that uses light to control the activity of select populations of neurons that have been genetically sensitized to light. Like the game of telephone, sounds that enter the ear pass through six or more relays in the brain before reaching the auditory cortex.

"Optogenetics can be used to activate a specific relay in the network, in this case the penultimate node that relays signals to the auditory cortex," Mooney said.

About half of the suppression during movement was found to originate within the auditory cortex itself. “That says a lot of modulation is going on in the auditory cortex, and not just at earlier relays in the auditory system,” Mooney said.

More specifically, the team found that movement stimulates inhibitory neurons that in turn suppress the response of the auditory cortex to tones.

The researchers then wondered what turns on the inhibitory neurons. The suspects were many. “The auditory cortex is like this giant switching station where all these different inputs come through and say, ‘Okay, I want to have access to these interneurons,’” Mooney said. “The question we wanted to answer is who gets access to them during movement?”

The team knew from previous experiments that neuronal projections from the secondary motor cortex (M2) modulate the auditory cortex. But to isolate M2’s relative contribution — something not possible with traditional electrophysiology — the researchers again used optogenetics, this time to switch M2’s inputs to the inhibitory neurons on and off.

Turning on M2 inputs reproduced a sense of movement in the auditory cortex, even in mice that were resting, the group found. “We were sending a ‘Hey I’m moving’ signal to the auditory cortex,” Schneider said. Then the effect of playing a tone on the auditory cortex was much the same as if the animal had actually been moving — a result that confirmed the importance of M2 in modulating the auditory cortex. On the other hand, turning off M2 simulated rest in the auditory cortex, even when the animals were still moving.

"I couldn’t contain my excitement when we first saw that result," said Anders Nelson, a neurobiology graduate student in Mooney’s group.

Filed under auditory cortex hearing motor cortex optogenetics neuroscience science

207 notes

New Mapping Approach Lets Scientists Zoom In And Out As The Brain Processes Sound

Researchers at Johns Hopkins have mapped the sound-processing part of the mouse brain in a way that keeps both the proverbial forest and the trees in view. Their imaging technique allows zooming in and out on views of brain activity within mice, and it enabled the team to watch brain cells light up as mice “called” to each other. The results, which represent a step toward better understanding how our own brains process language, appear online July 31 in the journal Neuron.

In the past, researchers often studied sound processing in various animal brains by poking tiny electrodes into the auditory cortex, the part of the brain that processes sound. They then played tones and observed the response of nearby neurons, laboriously repeating the process over a gridlike pattern to figure out where the active neurons were. The neurons seemed to be laid out in neatly organized bands, each responding to a different tone. More recently, a technique called two-photon microscopy has allowed researchers to focus in on minute slices of the live mouse brain, observing activity in unprecedented detail. This newer approach has suggested that the well-manicured arrangement of bands might be an illusion. But, says David Yue, M.D., Ph.D., a professor of biomedical engineering and neuroscience at the Johns Hopkins University School of Medicine, “You could lose your way within the zoomed-in views afforded by two-photon microscopy and not know exactly where you are in the brain.” Yue led the study along with Eric Young, Ph.D., also a professor of biomedical engineering and a researcher in Johns Hopkins’ Institute for Basic Biomedical Sciences.

To get the bigger picture, John Issa, a graduate student in Yue’s lab, used a mouse genetically engineered to produce a molecule that glows green in the presence of calcium. Since calcium levels rise in neurons when they become active, neurons in the mouse’s auditory cortex glow green when activated by various sounds. Issa used a two-photon microscope to peer into the brains of live mice as they listened to sounds and saw which neurons lit up in response, piecing together a global map of a given mouse’s auditory cortex. “With these mice, we were able to both monitor the activity of individual populations of neurons and zoom out to see how those populations fit into a larger organizational picture,” he says.
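The readout behind such calcium imaging is conventionally expressed as ΔF/F: the change in fluorescence relative to a baseline. Here is a minimal sketch of that calculation (the trace values and baseline window are invented for illustration):

```python
# Sketch of the standard dF/F calculation used to read out neural activity
# from a calcium indicator's fluorescence trace; values are made up.

def delta_f_over_f(trace, baseline_n=3):
    """Normalize a fluorescence trace to its baseline.

    F0 is the mean of the first `baseline_n` samples (assumed to be
    pre-stimulus); each sample becomes (F - F0) / F0.
    """
    f0 = sum(trace[:baseline_n]) / baseline_n
    return [(f - f0) / f0 for f in trace]

# Fluorescence rises when the neuron responds to a sound: the jump from
# a baseline of 100 to 150 yields a peak dF/F of 0.5 (a 50% increase).
trace = [100.0, 100.0, 100.0, 150.0, 130.0]
print(delta_f_over_f(trace))
```

Neurons with large ΔF/F transients time-locked to a sound are the ones counted as “lighting up” on the map.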

With these advances, Issa and the rest of the research team were able to see the tidy tone bands identified in earlier electrode studies. In addition, the new imaging platform quickly revealed more sophisticated properties of the auditory cortex, particularly as mice listened to the chirps they use to communicate with each other. “Understanding how sound representation is organized in the brain is ultimately very important for better treating hearing deficits,” Yue says. “We hope that mouse experiments like this can provide a basis for figuring out how our own brains process language and, eventually, how to help people with cochlear implants and similar interventions hear better.”

Yue notes that the same approach could also be used to understand other parts of the brain as they react to outside stimuli, such as the visual cortex and the parts of the brain responsible for processing stimuli from limbs.

Filed under sound processing brain activity auditory cortex hearing neuroscience science

241 notes

Noise-Induced Hearing Loss Alters Brain Responses to Speech

Prolonged exposure to loud noise alters how the brain processes speech, potentially increasing the difficulty in distinguishing speech sounds, according to neuroscientists at The University of Texas at Dallas.

In a paper published this week in Ear and Hearing, researchers demonstrated for the first time how noise-induced hearing loss affects the brain’s recognition of speech sounds.

Noise-induced hearing loss (NIHL) reaches all corners of the population, affecting an estimated 15 percent of Americans between the ages of 20 and 69, according to the National Institute of Deafness and Other Communication Disorders (NIDCD).

Exposure to intensely loud sounds leads to permanent damage of the hair cells, which act as sound receivers in the ear. Once damaged, the hair cells do not grow back, leading to NIHL.

“As we have made machines and electronic devices more powerful, the potential to cause permanent damage has grown tremendously,” said Dr. Michael Kilgard, co-author and Margaret Fonde Jonsson Professor in the School of Behavioral and Brain Sciences. “Even the smaller MP3 players can reach volume levels that are highly damaging to the ear in a matter of minutes.”

Before the study, scientists had not clearly understood the direct effects of NIHL on how the brain responds to speech.

To simulate two types of noise trauma that clinical populations face, UT Dallas scientists exposed rats to moderate or intense levels of noise for an hour. One group heard a high-frequency noise at 115 decibels inducing moderate hearing loss, and a second group heard a low-frequency noise at 124 decibels causing severe hearing loss.

For comparison, the American Speech-Language-Hearing Association lists the maximum output of an MP3 player or the sound of a chain saw at about 110 decibels and the siren on an emergency vehicle at 120 decibels. Regular exposure to sounds greater than 100 decibels for more than a minute at a time may lead to permanent hearing loss, according to the NIDCD.
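Because the decibel scale is logarithmic, the gaps between these figures are larger than they look. A quick back-of-the-envelope conversion (illustrative only, using the standard 10·log10 intensity relation):

```python
# Sketch: sound intensity scales as 10^(dB/10), so seemingly small decibel
# differences hide large intensity ratios. Values follow the article's examples.

def intensity_ratio(db_a: float, db_b: float) -> float:
    """How many times more intense a sound at db_a is than one at db_b."""
    return 10 ** ((db_a - db_b) / 10.0)

# The 124 dB exposure is roughly 8x more intense than the 115 dB one,
# and a 120 dB siren is 100x more intense than a 100 dB sound.
print(intensity_ratio(124, 115))
print(intensity_ratio(120, 100))
```

So the two exposure groups, only 9 dB apart on paper, differed by almost an order of magnitude in acoustic intensity.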

Researchers observed how the two types of hearing loss affected speech sound processing in the rats by recording the neuronal response in the auditory cortex a month after the noise exposure. The auditory cortex, one of the main areas that processes sounds in the brain, is organized on a scale, like a piano. Neurons at one end of the cortex respond to low-frequency sounds, while other neurons at the opposite end react to higher frequencies.
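The piano analogy can be made concrete: in a tonotopic map, cortical position tracks the logarithm of a neuron’s preferred frequency, so each octave occupies equal distance. A minimal sketch (the frequency range is an assumption chosen for illustration, not the rat’s actual hearing range):

```python
# Sketch of a tonotopic ("piano-like") layout: cortical position varies with
# the logarithm of preferred frequency. The endpoints are assumed values.
import math

F_LOW, F_HIGH = 1_000.0, 64_000.0  # Hz, assumed ends of the map

def tonotopic_position(freq_hz: float) -> float:
    """Map a preferred frequency to a 0..1 position along the cortex.

    Equal octave steps occupy equal cortical distance, like piano keys.
    """
    return math.log2(freq_hz / F_LOW) / math.log2(F_HIGH / F_LOW)

# Each doubling of frequency moves the same distance along the map.
for f in (1_000, 8_000, 64_000):
    print(f, tonotopic_position(f))
```

On such a map, the study’s finding that low frequencies claimed a larger area after moderate hearing loss amounts to a distortion of this otherwise even octave spacing.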

In the group with severe hearing loss, less than one-third of the tested auditory cortex sites that normally respond to sound reacted to stimulation. In the sites that did respond, there were unusual patterns of activity. The neurons reacted slower, the sounds had to be louder and the neurons responded to frequency ranges narrower than normal. Additionally, the rats could not tell the speech sounds apart in a behavioral task they could successfully complete before the hearing loss.

In the group with moderate hearing loss, the area of the cortex responding to sounds didn’t change, but the neurons’ reaction did. A larger area of the auditory cortex responded to low-frequency sounds. Neurons reacting to high frequencies needed more intense sound stimulation and responded slower than those in normal hearing animals. Despite these changes, the rats were still able to discriminate the speech sounds in a behavioral task.

“Although the ear is critical to hearing, it is just the first step of many processing stages needed to hold a conversation,” Kilgard said. “We are beginning to understand how hearing damage alters the brain and makes it hard to process speech, especially in noisy environments.”

Filed under hearing loss auditory cortex hair cells speech sounds neuroscience science

328 notes

Recent study sheds new light on second language learning in adulthood

A recent study shows that assimilation of L2 vowels to L1 phonemes governs language learning in adulthood; researchers urge development of novel methods of second language teaching.

The behavioral and neural evidence was gathered by researchers at Aalto University in Finland and at the University of Salento in Italy. The study was the first to identify the neural mechanisms underlying the learning of L2 (second language) sounds in adulthood. Overall, this and earlier studies support the hypothesis that students in a foreign language classroom particularly benefit from learning environments where they receive a focused amount of high-quality input from native L2 teachers, use the L2 pervasively to achieve functional and communicative goals, and receive intensive training (including the use of multimedia systems) in the perception and production of L2 sounds in order to reactivate the neuroplasticity of the auditory cortex.

Learning the sounds of a second language (L2) in adulthood means assimilating them to the phonemes of the native language (L1).

In the study, two samples of Italian students, attending first-year and fifth-year classes of an English Language curriculum, were invited to the behavioral and electroencephalography (EEG) lab. Dr. Brattico, senior author of the study from Aalto University, explains: “The discrimination skills were measured by crossing two methodologies: on the one hand, perception tests in which the students listened to pairs of English sounds that I synthesized and had to judge how similar or different they were, and on the other hand, EEG recordings with a 64-electrode cap, while the students were presented with the same pairs of sounds and watched a silenced movie.”

The EEG recordings were used to extract the auditory event-related potential, namely the succession of neural events necessary for the processing and representation of sound, originating from the auditory cortex.
“When we hear linguistic sounds that are part of our native tongue, in a few milliseconds the brain is able to decipher the acoustic signal, extract the peculiar characteristics of each sound and produce a mental representation of it: thus we are able to discern one sound from another and assemble first the syllables, then the words and so on”, adds the first author, Professor Grimaldi, University of Salento.
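An event-related potential of this kind is recovered by averaging many stimulus-locked EEG epochs: activity time-locked to the sound survives the average, while unrelated background activity cancels out. A minimal sketch of that averaging step (the function name, array shapes and window lengths here are illustrative assumptions, not the study's actual pipeline):

```python
import numpy as np

def erp_average(eeg, event_samples, pre=100, post=400):
    """Average baseline-corrected EEG epochs around stimulus onsets.

    eeg: array of shape (n_channels, n_samples)
    event_samples: sample indices of stimulus onsets
    pre/post: samples to keep before/after each onset
    """
    epochs = []
    for s in event_samples:
        if s - pre < 0 or s + post > eeg.shape[1]:
            continue  # skip events too close to the recording edges
        epoch = eeg[:, s - pre : s + post].copy()
        # subtract the mean of the pre-stimulus interval (baseline correction)
        epoch -= epoch[:, :pre].mean(axis=1, keepdims=True)
        epochs.append(epoch)
    # averaging cancels activity not time-locked to the stimulus
    return np.mean(epochs, axis=0)
```

Averaging over many trials is what makes the tiny cortical response, on the order of microvolts, visible against the much larger ongoing EEG.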

“We compared the neural responses of the auditory cortex of the two groups of university students with one another and with a control group with a low level of education (third year of junior secondary school)”, explains Grimaldi. “We started with this hypothesis: if during their academic studies the students had developed new perceptual abilities, we would have found different neural responses for the three groups”. The results did not confirm the hypothesis, but instead showed that, neurally, the L2 sounds were assimilated to L1 phonemes in all the groups.

Grimaldi adds: “Let us consider, for example, what happens when we watch a movie or listen to a song in a language that we do not know: we are able to perceive acoustic differences, but we cannot ‘extract’ the words from the acoustic stream and access their meaning. This is what happened for our groups of students”. Previous behavioral studies that observed L2 learners with different native languages in an educational context (German, Finnish, Japanese, Turkish and other English-learning students) never produced results favorable for the teachers. “This study specifies, confirms and extends such results, proving by means of neurophysiological data that the quantity and quality of the stimuli received by university students are not enough to form long-term traces of L2 sounds in the auditory cortex”, confirms Brattico.

The results were published online in Frontiers in Human Neuroscience.

(Source: web.aalto.fi)

Filed under auditory cortex language acquisition second language learning vowel perception neuroscience science

555 notes

Neuroscientists study our love for deep bass sounds

Have you ever wondered why bass-range instruments tend to lay down musical rhythms, while instruments with a higher pitch often handle the melody?

According to new research from Laurel Trainor and colleagues at the McMaster Institute for Music and The Mind, this is no accident, but rather a result of the physiology of hearing.

In other words, when the bass is loud and rock solid, we have an easier time following along to the rhythm of a song.


Filed under auditory cortex pitch melody temporal perception EEG neuroscience science

80 notes

Game Technology Teaches Mice and Men to Hear Better in Noisy Environments

The ability to hear soft speech in a noisy environment is difficult for many and nearly impossible for the 48 million in the United States living with hearing loss. Researchers from the Massachusetts Eye and Ear, Harvard Medical School and Harvard University programmed a new type of game that trained both mice and humans to enhance their ability to discriminate soft sounds in noisy backgrounds. Their findings will be published in PNAS Online Early Edition the week of June 9-13, 2014.


In the experiment, adult humans and mice with normal hearing were trained on a rudimentary ‘audiogame’ inspired by sensory foraging behavior that required them to discriminate changes in the loudness of a tone presented in a moderate level of background noise. Their findings suggest new therapeutic options for clinical populations that receive little benefit from conventional sensory rehabilitation strategies.

“Like the children’s game ‘hot and cold’, our game provided instantaneous auditory feedback that allowed our human and mouse subjects to home in on the location of a hidden target,” said senior author Daniel Polley, Ph.D., director of the Mass. Eye and Ear’s Amelia Peabody Neural Plasticity Unit of the Eaton-Peabody Laboratories and assistant professor of otology and laryngology at Harvard Medical School. “Over the course of training, both species learned adaptive search strategies that allowed them to more efficiently convert noisy, dynamic audio cues into actionable information for finding the target. To our surprise, human subjects who mastered this simple game over the course of 30 minutes of daily training for one month exhibited a generalized improvement in their ability to understand speech in noisy background conditions. Comparable improvements in the processing of speech in high levels of background noise were not observed for control subjects who heard the sounds of the game but did not actually play the game.”
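The ‘hot and cold’ mechanic Dr. Polley describes amounts to a closed feedback loop: an auditory cue continuously tracks the player's distance to a hidden target, and the player steers by whichever move makes the cue ‘hotter’. A toy sketch of that loop (the names, ranges and distance-to-pitch mapping below are invented for illustration; the actual game used changes in tone loudness embedded in background noise):

```python
def feedback_tone(pos, target, max_dist, f_lo=200.0, f_hi=2000.0):
    """Map distance-to-target onto a cue frequency: closer -> higher pitch."""
    d = min(abs(pos - target), max_dist)
    return f_lo + (f_hi - f_lo) * (1.0 - d / max_dist)

def play(target, start=0.0, step=1.0, max_dist=10.0, iters=50):
    """Greedy search: step in whichever direction raises the feedback pitch."""
    pos = start
    for _ in range(iters):
        here = feedback_tone(pos, target, max_dist)
        if feedback_tone(pos + step, target, max_dist) > here:
            pos += step
        elif feedback_tone(pos - step, target, max_dist) > here:
            pos -= step
        else:
            break  # neither direction improves: target within one step
    return pos
```

The point of the design is that success requires actively sampling the cue and acting on it, which is why, as the study found, passive listeners who heard the same sounds gained no benefit.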

The researchers recorded the electrical activity of neurons in auditory regions of the mouse cerebral cortex to gain some insight into how training might have boosted the ability of the brain to separate signal from noise. They found that training substantially altered the way the brain encoded sound.

In trained mice, many neurons became highly sensitive to faint sounds that signaled the location of the target in the game. Moreover, neurons displayed increased resistance to noise suppression; they retained an ability to encode faint sounds even under conditions of elevated background noise.

“Again, changes of this ilk were not observed in control mice that watched (and listened) to their counterparts play the game. Active participation in the training was required; passive listening was not enough,” Dr. Polley said.

These findings illustrate the utility of brain training exercises that are inspired by careful neuroscience research. “When combined with conventional assistive devices such as hearing aids or cochlear implants, ‘audiogames’ of the type we describe here may be able to provide the hearing impaired with an improved ability to reconnect to the auditory world. Of particular interest is the finding that brain training improved speech processing in noisy backgrounds – a listening environment where conventional hearing aids offer limited benefit,” concluded Dr. Jonathon Whitton, lead author on the paper. Dr. Whitton is a principal investigator at the Amelia Peabody Neural Plasticity Unit and affiliated with the Program in Speech Hearing Bioscience and Technology, Harvard–Massachusetts Institute of Technology Division of Health Sciences and Technology.

(Source: masseyeandear.org)

Filed under hearing hearing loss auditory cortex foraging noise suppression neuroscience science

436 notes

Brain circuit problem likely sets stage for the “voices” that are symptom of schizophrenia

St. Jude Children’s Research Hospital scientists have identified problems in a connection between brain structures that may predispose individuals to hearing the “voices” that are a common symptom of schizophrenia. The work appears in the June 6 issue of the journal Science.


Researchers linked the problem to a gene deletion. This leads to changes in brain chemistry that reduce the flow of information between two brain structures involved in processing auditory information.

The research marks the first time that a specific circuit in the brain has been linked to the auditory hallucinations, delusions and other psychotic symptoms of schizophrenia. The disease is a chronic, devastating brain disorder that affects about 1 percent of Americans and causes them to struggle with a variety of problems, including thinking, learning and memory.

The disrupted circuit identified in this study solves the mystery of how current antipsychotic drugs ease symptoms and provides a new focus for efforts to develop medications that quiet “voices” but cause fewer side effects.

“We think that reducing the flow of information between these two brain structures that play a central role in processing auditory information sets the stage for stress or other factors to come along and trigger the ‘voices’ that are the most common psychotic symptom of schizophrenia,” said the study’s corresponding author Stanislav Zakharenko, M.D., Ph.D., an associate member of the St. Jude Department of Developmental Neurobiology. “These findings also integrate several competing models regarding changes in the brain that lead to this complex disorder.”

The work was done in a mouse model of the human genetic disorder 22q11 deletion syndrome. The syndrome occurs when part of chromosome 22 is deleted and individuals are left with one rather than the usual two copies of about 25 genes. About 30 percent of individuals with the deletion syndrome develop schizophrenia, making it one of the strongest risk factors for the disorder. DNA is the blueprint for life. Human DNA is organized into 23 pairs of chromosomes that are found in nearly every cell.

Earlier work from Zakharenko’s laboratory linked one of the lost genes, Dgcr8, to brain changes in mice with the deletion syndrome that affect a structure important for learning and memory. They found evidence that the same mechanism was at work in patients with schizophrenia. Dgcr8 carries instructions for making small molecules called microRNAs that help regulate production of different proteins.

For this study, researchers used state-of-the-art tools to link the loss of Dgcr8 to changes that affect a different brain structure, the auditory thalamus. For decades antipsychotic drugs have been known to work by binding to a protein named the D2 dopamine receptor (Drd2). The binding blocks activity of the chemical messenger dopamine. Until now, however, how that quieted the “voices” of schizophrenia was unclear.

Working in mice with and without the 22q11 deletion, researchers showed that the strength of the nerve impulse from neurons in the auditory thalamus was reduced in mice with the deletion compared to normal mice. Electrical activity in other brain regions was not different.

Investigators showed that Drd2 levels were elevated in the auditory thalamus of mice with the deletion, but not in other brain regions. When researchers checked Drd2 levels in tissue from the same structure collected from 26 individuals with and without schizophrenia, they found that protein levels were higher in patients with the disease.

As further evidence of Drd2’s role in disrupting signals from the auditory thalamus, researchers tested neurons from different brain regions of mutant and normal mice in the laboratory by adding the antipsychotic drugs haloperidol and clozapine, which work by targeting Drd2. Before treatment, nerve impulses in the mutant neurons were reduced compared to those of normal mice. The antipsychotics almost universally enhanced the nerve impulses in neurons from mutant mice, but only in neurons from the auditory thalamus.

When researchers looked more closely at the missing 22q11 genes, they found that mice lacking Dgcr8 responded to a loud noise much as schizophrenia patients do. Treatment with haloperidol restored the normal startle response in the mice, just as the drug does in patients.

Studying schizophrenia and other brain disorders advances understanding of normal brain development and the missteps that lead to various catastrophic diseases, including pediatric brain tumors and other problems.

(Source: stjude.org)

Filed under schizophrenia auditory cortex auditory hallucinations 22q11 deletion syndrome genetics neuroscience science

127 notes

Rhythmic bursts of electrical activity from cells in ear teach brain how to hear

A precise rhythm of electrical impulses transmitted from cells in the inner ear coaches the brain how to hear, according to a new study led by researchers at the University of Pittsburgh School of Medicine. They report the first evidence of this developmental process today in the online version of Neuron.


The ear generates spontaneous electrical activity to trigger a response in the brain before hearing actually begins, said senior investigator Karl Kandler, Ph.D., professor of otolaryngology and neurobiology, Pitt School of Medicine. These patterned bursts start at inner hair cells in the cochlea, which is part of the inner ear, and travel along the auditory nerve to the brain.

"It’s long been speculated that these impulses are intended to ‘wire’ the brain auditory centers," he said. "Until now, however, no one has been able to provide experimental evidence to support this concept."

To map neural connectivity, Dr. Kandler’s team prepared sections of mouse brain containing the auditory pathways in a chemical that is inert until UV light hits it. They then pulsed laser light at a neuron, activating the chemical, which excites the nerve cell to generate an electrical impulse. By tracking the spread of the impulse to adjacent cells, they could map the network one neuron at a time.

All mice are born unable to hear, a sense that develops around two weeks after birth. But even before hearing starts, the ear produces rhythmic bursts of electrical activity, which cause a broad reaction in the brain’s auditory processing centers. As the beat goes on, the brain organizes itself, pruning unneeded connections and strengthening others. To investigate whether the beat is indeed important for this reorganization, the team used genetically engineered mice that lack a key receptor on the inner hair cells, which causes the cells to change their beat.

"In normal mice, the wiring diagram of the brain gets sharper and more efficient over time and they begin to hear," Dr. Kandler said. "But this doesn’t happen when the inner ear beats in a different rhythm, which means the brain isn’t getting the instructions it needs to wire itself correctly. We have evidence that these mice can detect sound, but they have problems perceiving the pitch of sounds."

In humans, such subtle hearing deficits are associated with Central Auditory Processing Disorders (CAPD), difficulty processing the meaning of sound. About 2 to 3 percent of children are affected by CAPD, and these children often have speech and language disorders or delays, and learning disabilities such as dyslexia. In contrast to hearing impairments caused by ear deficits, the causes underlying CAPD have remained obscure.

"Our findings suggest that an abnormal rhythm of electrical impulses early in life may be an important contributing factor in the development of CAPD. More research is needed to find out whether this also holds true for humans, but our results point to a new direction that is worth following up," Dr. Kandler said.

(Source: eurekalert.org)

Filed under nerve cells hair cells inner ear auditory cortex hearing neuroscience science

65 notes

Tracking the Source of “Selective Attention” Problems in Brain-Injured Vets

An estimated 15-20 percent of U.S. troops returning from Iraq and Afghanistan suffer from some form of traumatic brain injury (TBI) sustained during their deployment, with most injuries caused by blast waves from exploded military ordnance. The obvious cognitive symptoms of minor TBI — including learning and memory problems — can dissipate within just a few days. But blast-exposed veterans may continue to have problems performing simple auditory tasks that require them to focus attention on one sound source and ignore others, an ability known as “selective auditory attention.”

According to a new study by a team of Boston University (BU) neuroscientists, such apparent “hearing” problems actually may be caused by diffuse injury to the brain’s prefrontal lobe — work that will be described at the 167th meeting of the Acoustical Society of America, to be held May 5-9, 2014 in Providence, Rhode Island.

"This kind of injury can make it impossible to converse in everyday social settings, and thus is a truly devastating problem that can contribute to social isolation and depression," explains computational neuroscientist Scott Bressler, a graduate student in BU’s Auditory Neuroscience Laboratory, led by biomedical engineering professor Barbara Shinn-Cunningham.

For the study, Bressler, Shinn-Cunningham and their colleagues — in collaboration with traumatic brain injury and post-traumatic stress disorder expert Yelena Bogdanova of VA Healthcare Boston — presented a selective auditory attention task to 10 vets with mild TBI and to 17 control subjects without brain injuries. Notably, on average, veterans had hearing within a normal range.

In the task, three different melody streams, each composed of two notes, were simultaneously presented to the subjects from three different perceived directions (this variation in directionality was achieved by varying the relative timing of the signals reaching the left and right ears). The subjects were then asked to identify the “shape” of the melodies (i.e., “going up,” “going down,” or “zig-zagging”) while their brain activity was measured by electrodes on the scalp.
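That timing trick exploits the interaural time difference: the same waveform reaches the two ears with a sub-millisecond offset, and the brain reads the offset as a lateral position. A sketch of the manipulation, assuming a simple whole-sample delay (the function name and values are illustrative, not the study's actual stimulus code):

```python
import numpy as np

def apply_itd(mono, fs, itd_us):
    """Return a (2, n) stereo signal with an interaural time difference.

    mono: 1-D signal, fs: sample rate in Hz, itd_us: delay in microseconds.
    Positive itd_us delays the left ear, so the source is perceived
    toward the right; negative values mirror this.
    """
    delay = int(round(abs(itd_us) * 1e-6 * fs))  # delay in whole samples
    delayed = np.concatenate([np.zeros(delay), mono])
    padded = np.concatenate([mono, np.zeros(delay)])  # same length, undelayed
    if itd_us >= 0:
        left, right = delayed, padded
    else:
        left, right = padded, delayed
    return np.stack([left, right])
```

Real stimulus software interpolates to achieve sub-sample delays; a whole-sample delay at 44.1 kHz still gives roughly 23-microsecond resolution, which is enough for illustration.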

"Whenever a new sound begins, the auditory cortex responds, encoding the sound onset," Bressler explains. "Attentional focus, however, changes the strength of this response: when a listener is attending to a particular sound source, the neural activity in response to that sound is greater." This change of the neural response occurs because the brain’s "executive control" regions, located in the brain’s prefrontal cortex, send signals to the auditory sensory regions of the brain, modulating their response.

The researchers found that blast-exposed veterans with TBI performed worse on the task — that is, they had difficulty controlling auditory attention — “and in all of the TBI veterans who performed well enough for us to measure their neural activity, 6 out of our 10 initial subjects, the brain response showed weak or no attention-related modulation of auditory responses,” Bressler says.

"Our hope is that some of our findings can be used to develop methods to assess and quantify TBI, identifying specific factors that contribute to difficulties communicating in everyday settings," he says. "By identifying these factors on an individual basis, we may be able to define rehabilitation approaches and coping strategies tailored to the individual."

Some TBI patients also go on to develop chronic traumatic encephalopathy (CTE) — a debilitating progressive degenerative disease with symptoms that include dementia, memory loss and depression — which can now only be definitively diagnosed after death. “With any luck,” Bressler adds, “neurobehavioral research like ours may help identify patients at risk of developing CTE long before their symptoms manifest.”

(Source: newswise.com)

Filed under TBI brain injury selective attention auditory cortex brain activity hearing neuroscience science
