Neuroscience

Articles and news from the latest research reports.

Posts tagged hearing

127 notes

Rhythmic bursts of electrical activity from cells in ear teach brain how to hear

A precise rhythm of electrical impulses transmitted from cells in the inner ear coaches the brain how to hear, according to a new study led by researchers at the University of Pittsburgh School of Medicine. They report the first evidence of this developmental process today in the online version of Neuron.

The ear generates spontaneous electrical activity to trigger a response in the brain before hearing actually begins, said senior investigator Karl Kandler, Ph.D., professor of otolaryngology and neurobiology, Pitt School of Medicine. These patterned bursts start at inner hair cells in the cochlea, which is part of the inner ear, and travel along the auditory nerve to the brain.

"It’s long been speculated that these impulses are intended to ‘wire’ the brain auditory centers," he said. "Until now, however, no one has been able to provide experimental evidence to support this concept."

To map neural connectivity, Dr. Kandler’s team prepared sections of a mouse brain containing the auditory pathways in a chemical that is inert until UV light hits it. Then, they pulsed laser light at a neuron, making the chemical active, which excites the nerve cells to generate an electrical impulse. They then tracked the spread of the impulse to adjacent cells, allowing them to map the network a neuron at a time.

All mice are born unable to hear, a sense that develops around two weeks after birth. But even before hearing starts, the ear produces rhythmic bursts of electrical activity that cause a broad reaction in the brain’s auditory processing centers. As the beat goes on, the brain organizes itself, pruning unneeded connections and strengthening others. To investigate whether the beat is indeed important for this reorganization, the team used genetically engineered mice that lack a key receptor on the inner hair cells, a deficit that changes the rhythm of their bursts.

"In normal mice, the wiring diagram of the brain gets sharper and more efficient over time and they begin to hear," Dr. Kandler said. "But this doesn’t happen when the inner ear beats in a different rhythm, which means the brain isn’t getting the instructions it needs to wire itself correctly. We have evidence that these mice can detect sound, but they have problems perceiving the pitch of sounds."

In humans, such subtle hearing deficits are associated with central auditory processing disorder (CAPD), a difficulty processing the meaning of sound. About 2 to 3 percent of children are affected by CAPD, and these children often have speech and language disorders or delays, and learning disabilities such as dyslexia. Unlike hearing impairments caused by deficits in the ear itself, the causes underlying CAPD have remained obscure.

"Our findings suggest that an abnormal rhythm of electrical impulses early in life may be an important contributing factor in the development of CAPD. More research is needed to find out whether this also holds true for humans, but our results point to a new direction that is worth following up," Dr. Kandler said.

(Source: eurekalert.org)

Filed under nerve cells hair cells inner ear auditory cortex hearing neuroscience science

78 notes

Infants Benefit from Implants with More Frequency Sounds

A new study from a UT Dallas researcher demonstrates the importance of considering developmental differences when creating programs for cochlear implants in infants.

Dr. Andrea Warner-Czyz, assistant professor in the School of Behavioral and Brain Sciences, recently published the research in the Journal of the Acoustical Society of America.

“This is the first study to show that infants process degraded speech that simulates a cochlear implant differently than older children and adults, which begs for new signal processing strategies to optimize the sound delivered to the cochlear implant for these young infants,” Warner-Czyz said.

Cochlear implants, which are surgically placed in the inner ear, provide the ability to hear for some people with severe to profound hearing loss. Because of technological and biological limitations, people with cochlear implants hear differently than those with normal hearing.

Think of a piano, which typically has 88 keys with each representing a note. The technology in a cochlear implant can’t play every key, but instead breaks them into groups, or channels. For example, a cochlear implant with 22 channels would put four notes into each group. If any keys within a group are played, all four notes are activated. Although the general frequency can be heard, the fine detail of the individual notes is lost.
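
The grouping in the piano analogy can be sketched in a few lines of Python. This is only a toy illustration: the 88 keys, 22 channels, and four keys per channel come from the article’s analogy, not from any real implant’s frequency map.

```python
# Toy model of the article's piano analogy: 88 keys grouped into
# 22 channels of 4 keys each. Any key in a group activates the
# whole channel, so pitch detail within a group is lost.

def key_frequency(n):
    """Frequency in Hz of piano key n (1 to 88), with key 49 as A440."""
    return 440.0 * 2 ** ((n - 49) / 12)

def channel_for_key(n, keys_per_channel=4):
    """Zero-based channel (0 to 21) that key n falls into."""
    return (n - 1) // keys_per_channel

# A4 (key 49) and C5 (key 52) are clearly different notes...
assert round(key_frequency(49)) == 440 and round(key_frequency(52)) == 523
# ...yet they land in the same channel, so the fine detail is lost.
assert channel_for_key(49) == channel_for_key(52) == 12
```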

Two of the major components necessary for understanding speech are the rhythm and the frequencies of the sound. Timing remains fairly accurate in cochlear implants, but some frequencies disappear as they are grouped.

In adults, increasing the number of channels beyond eight or nine does not necessarily improve speech understanding. This study is one of the first to examine how this signal degradation affects speech perception in infants.

Infants pay greater attention to new sounds, so researchers compared how long a group of 6-month-olds focused on a speech sound they had been familiarized with — “tea” — to a new speech sound, “ta.”

The infants spent more time paying attention to “ta,” demonstrating they could hear the difference between the two. Researchers repeated the experiment with speech sounds that were altered to sound as if they had been processed by a 16- or 32-channel cochlear implant.

The infants responded to the sounds that imitated a 32-channel implant the same as when they heard the normal sounds. But the infants did not show a difference with the sounds that imitated a 16-channel implant.

“These results suggest that 6-month-old infants need less distortion and more frequency information than older children and adults to discriminate speech,” Warner-Czyz said. “Infants are not just little versions of children or adults. They do not have the experience with listening or language to fill in the gaps, so they need more complete speech information to maximize their communication outcomes.”

Clinicians need to consider these developmental differences when working with very young cochlear implant recipients, Warner-Czyz said.

Filed under implants cochlear implants speech speech perception hearing neuroscience science

65 notes

Tracking the Source of “Selective Attention” Problems in Brain-Injured Vets

An estimated 15-20 percent of U.S. troops returning from Iraq and Afghanistan suffer from some form of traumatic brain injury (TBI) sustained during their deployment, with most injuries caused by blast waves from exploded military ordnance. The obvious cognitive symptoms of minor TBI — including learning and memory problems — can dissipate within just a few days. But blast-exposed veterans may continue to have problems performing simple auditory tasks that require them to focus attention on one sound source and ignore others, an ability known as “selective auditory attention.”

According to a new study by a team of Boston University (BU) neuroscientists, such apparent “hearing” problems actually may be caused by diffuse injury to the brain’s prefrontal lobe — work that will be described at the 167th meeting of the Acoustical Society of America, to be held May 5-9, 2014 in Providence, Rhode Island.

"This kind of injury can make it impossible to converse in everyday social settings, and thus is a truly devastating problem that can contribute to social isolation and depression," explains computational neuroscientist Scott Bressler, a graduate student in BU’s Auditory Neuroscience Laboratory, led by biomedical engineering professor Barbara Shinn-Cunningham.

For the study, Bressler, Shinn-Cunningham and their colleagues — in collaboration with traumatic brain injury and post-traumatic stress disorder expert Yelena Bogdanova of VA Healthcare Boston — presented a selective auditory attention task to 10 vets with mild TBI and to 17 control subjects without brain injuries. Notably, on average, veterans had hearing within a normal range.

In the task, three different melody streams, each composed of two notes, were simultaneously presented to the subjects from three different perceived directions (this variation in directionality was achieved by differing the timing of the signals that reached the left and right ears). The subjects were then asked to identify the “shape” of the melodies (i.e., “going up,” “going down,” or “zig-zagging”) while their brain activity was measured by electrodes on the scalp.
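
The timing trick in the parenthetical above is known as an interaural time difference (ITD). Here is a minimal sketch of the idea, using an illustrative 0.5 ms delay rather than the study’s actual stimulus parameters:

```python
# Toy sketch of perceived direction from timing alone: delaying the
# signal to one ear creates an interaural time difference (ITD),
# which listeners hear as a shift in the sound's direction.
# The 0.5 ms delay is an illustrative value, not one from the study.
import math

def stereo_with_itd(mono, sample_rate, itd_seconds):
    """Return (left, right) sample lists with the right ear delayed
    by itd_seconds; the source is heard as shifted toward the left."""
    delay = round(itd_seconds * sample_rate)
    left = list(mono) + [0.0] * delay
    right = [0.0] * delay + list(mono)
    return left, right

sample_rate = 44100
# a 100 ms, 440 Hz tone
tone = [math.sin(2 * math.pi * 440 * n / sample_rate)
        for n in range(int(0.1 * sample_rate))]
left, right = stereo_with_itd(tone, sample_rate, 0.0005)  # 0.5 ms ITD
assert len(left) == len(right) == len(tone) + 22  # 22 samples of delay
```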

"Whenever a new sound begins, the auditory cortex responds, encoding the sound onset," Bressler explains. "Attentional focus, however, changes the strength of this response: when a listener is attending to a particular sound source, the neural activity in response to that sound is greater." This change of the neural response occurs because the brain’s "executive control" regions, located in the brain’s prefrontal cortex, send signals to the auditory sensory regions of the brain, modulating their response.

The researchers found that blast-exposed veterans with TBI performed worse on the task — that is, they had difficulty controlling auditory attention — “and in all of the TBI veterans who performed well enough for us to measure their neural activity, 6 out of our 10 initial subjects, the brain response showed weak or no attention-related modulation of auditory responses,” Bressler says.

"Our hope is that some of our findings can be used to develop methods to assess and quantify TBI, identifying specific factors that contribute to difficulties communicating in everyday settings," he says. "By identifying these factors on an individual basis, we may be able to define rehabilitation approaches and coping strategies tailored to the individual."

Some TBI patients also go on to develop chronic traumatic encephalopathy (CTE) — a debilitating progressive degenerative disease with symptoms that include dementia, memory loss and depression — which can now only be definitively diagnosed after death. “With any luck,” Bressler adds, “neurobehavioral research like ours may help identify patients at risk of developing CTE long before their symptoms manifest.”

(Source: newswise.com)

Filed under TBI brain injury selective attention auditory cortex brain activity hearing neuroscience science

134 notes

Brain Anatomy Differences Between Deaf, Hearing Depend on First Language Learned

In the first known study of its kind, researchers have shown that the language we learn as children affects brain structure, as does hearing status. The findings are reported in The Journal of Neuroscience.

While research has shown that people who are deaf and people who are hearing differ in brain anatomy, these studies have been limited to individuals who are deaf and have used American Sign Language (ASL) from birth. But 95 percent of the deaf population in America is born to hearing parents and uses English or another spoken language as a first language, usually through lip-reading. Since language and audition are housed in nearby locations in the brain, understanding which differences are attributable to hearing and which to language is critical to understanding the mechanisms by which experience shapes the brain.

“What we’ve learned to date about differences in brain anatomy in hearing and deaf populations hasn’t taken into account the diverse language experiences among people who are deaf,” says senior author Guinevere Eden, DPhil, director for the Center for the Study of Learning at Georgetown University Medical Center (GUMC).

Eden and her colleagues report on a new structural brain imaging study that shows, in addition to deafness, early language experience – English versus ASL – impacts brain structure. Half of the adult hearing and half of the deaf participants in the study had learned ASL as children from their deaf parents, while the other half had grown up using English with their hearing parents.

“We found that our deaf and hearing participants, irrespective of language experience, differed in the volume of brain white matter in their auditory cortex. But, we also found differences in left hemisphere language areas, and these differences were specific to those whose native language was ASL,” Eden explains.

The research team, which includes Daniel S. Koo, PhD, and Carol J. LaSasso, PhD, of Gallaudet University in Washington, say their findings should impact studies of brain differences in deaf and hearing people going forward.

“Prior research studies comparing brain structure in individuals who are deaf and hearing attempted to control for language experience by only focusing on those who grew up using sign language,” explains Olumide Olulade, PhD, the study’s lead author and post-doctoral fellow at GUMC. “However, restricting the investigation to a small minority of the deaf population means the results can’t be applied to all deaf people.”

(Image: iStockphoto)

Filed under brain structure language hearing auditory cortex deafness neuroscience science

73 notes

From Mouse Ears to Man’s?

TAU researcher uses DNA therapy in lab mice to improve cochlear implant functionality

One in a thousand children in the United States is deaf, and one in three adults will experience significant hearing loss after the age of 65. Whether the result of genetic or environmental factors, hearing loss costs billions of dollars in healthcare expenses every year, making the search for a cure critical.

Now a team of researchers led by Karen B. Avraham of the Department of Human Molecular Genetics and Biochemistry at Tel Aviv University’s Sackler Faculty of Medicine and Yehoash Raphael of the Department of Otolaryngology–Head and Neck Surgery at University of Michigan’s Kresge Hearing Research Institute have discovered that using DNA as a drug — commonly called gene therapy — in laboratory mice may protect the inner ear nerve cells of humans suffering from certain types of progressive hearing loss.

In the study, doctoral student Shaked Shivatzki created a mouse population carrying the mutated connexin 26 gene, which produces the most prevalent form of genetic hearing loss in humans. Some 30 percent of American children born deaf carry this mutation. Because of its prevalence and the inexpensive tests available to identify it, there is great interest in finding a cure or therapy for it.

"Regenerating" neurons

Prof. Avraham’s team set out to prove that gene therapy could be used to preserve the inner ear nerve cells of the mice. Mice with the mutated connexin 26 gene exhibit deterioration of the nerve cells that send a sound signal to the brain. The researchers found that a protein growth factor used to protect and maintain neurons, otherwise known as brain-derived neurotrophic factor (BDNF), could be used to block this degeneration. They then engineered a virus that could be tolerated by the body without causing disease, and inserted the growth factor into the virus. Finally, they surgically injected the virus into the ears of the mice. This factor was able to “rescue” the neurons in the inner ear by blocking their degeneration.

"A wide spectrum of people are affected by hearing loss, and the way each person deals with it is highly variable," said Prof. Avraham. "That said, there is an almost unanimous interest in finding the genes responsible for hearing loss. We tried to figure out why the mouse was losing cells that enable it to hear. Why did it lose its hearing? The collaborative work allowed us to provide gene therapy to reverse the loss of nerve cells in the ears of these deaf mice."

Although this approach falls short of improving hearing in these mice, it has important implications for enhancing sound perception with a cochlear implant, used by many people whose connexin 26 mutation has led to impaired hearing.

Embryonic hearing?

Inner ear nerve cells facilitate the optimal functioning of cochlear implants. Prof. Avraham’s research suggests a possible new strategy for improving implant function, particularly in people whose hearing loss gets progressively worse with time, such as those with profound hearing loss as well as those with the connexin gene mutation. Combining gene therapy with the implant could help to protect vital nerve cells, thus preserving and improving the performance of the implant.

More research remains. “Safety is the main question. And what about timing? Although over 80 percent of human and mouse genes are similar, which makes mice the perfect lab model for human hearing, there’s still a big difference. Humans start hearing as embryos, but mice don’t start to hear until two weeks after birth. So we wondered, do we need to start the corrective process in utero, in infants, or later in life?” said Prof. Avraham.

"Practically speaking, we are a long way off from treating hearing loss during embryogenesis. But we proved what we set out to do: that we can help preserve nerve cells in the inner ears of the mouse," Prof. Avraham continued. "This already looks very promising."

(Source: aftau.org)

Filed under cochlear implant hearing loss hearing nerve cells brain-derived neurotrophic factor gene therapy neuroscience science

327 notes

Nanopores underlie our ability to tune in to a single voice

Inner-ear membrane uses tiny pores to mechanically separate sounds, researchers find.

Even in a crowded room full of background noise, the human ear is remarkably adept at tuning in to a single voice — a feat that has proved exceptionally difficult for computers to match. A new analysis of the underlying mechanisms, conducted by researchers at MIT, has provided insights that could ultimately lead to better machine hearing, and perhaps to better hearing aids as well.

Our ears’ selectivity, it turns out, arises from evolution’s precise tuning of a tiny membrane, inside the inner ear, called the tectorial membrane. The viscosity of this membrane — its firmness, or lack thereof — depends on the size and distribution of tiny pores, just a few tens of nanometers wide. This, in turn, provides mechanical filtering that helps to sort out specific sounds.

The new findings are reported in the Biophysical Journal by a team led by MIT graduate student Jonathan Sellon, and including research scientist Roozbeh Ghaffari, former graduate student Shirin Farrahi, and professor of electrical engineering Dennis Freeman. The team collaborated with biologist Guy Richardson of the University of Sussex.

Elusive understanding

In discriminating among competing sounds, the human ear is “extraordinary compared to conventional speech- and sound-recognition technologies,” Freeman says. The exact reasons have remained elusive — but the importance of the tectorial membrane, located inside the cochlea, or inner ear, has become clear in recent years, largely through the work of Freeman and his colleagues. Now it seems that a flawed assumption contributed to the longstanding difficulty in understanding the importance of this membrane.

Much of our ability to differentiate among sounds is frequency-based, Freeman says — so researchers had “assumed that the better we could resolve frequency, the better we could hear.” But this assumption turns out not always to be true.

In fact, Freeman and his co-authors previously found that tectorial membranes with a certain genetic defect are actually highly sensitive to variations in frequency — and the result is worse hearing, not better.

The MIT team found “a fundamental tradeoff between how well you can resolve different frequencies and how long it takes to do it,” Freeman explains. That makes the finer frequency discrimination too slow to be useful in real-world sound selectivity.
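
The tradeoff Freeman describes mirrors the time-frequency uncertainty principle: an analysis window of duration T can only resolve frequencies roughly 1/T apart, so finer frequency discrimination forces a longer (slower) observation. A small numerical sketch, with illustrative tone and window values:

```python
# Numerical illustration of the resolution/speed tradeoff: two tones
# 20 Hz apart blur together in a short analysis window but separate
# in a long one. All values here are illustrative, not from the study.
import math

def dft_magnitudes(x):
    """Naive DFT magnitude spectrum of a real signal (bins 0..N//2)."""
    n = len(x)
    mags = []
    for k in range(n // 2 + 1):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

def can_resolve(f1, f2, duration, sample_rate=4000):
    """True if a window of `duration` seconds separates two tones
    into distinct spectral peaks (bin spacing is 1/duration Hz)."""
    n = int(duration * sample_rate)
    x = [math.sin(2 * math.pi * f1 * t / sample_rate)
         + math.sin(2 * math.pi * f2 * t / sample_rate)
         for t in range(n)]
    mags = dft_magnitudes(x)
    threshold = max(mags) / 2
    peaks = [k for k in range(1, len(mags) - 1)
             if mags[k] > mags[k - 1] and mags[k] > mags[k + 1]
             and mags[k] > threshold]
    return len(peaks) >= 2

# A 10 ms window (100 Hz resolution) blurs 440 Hz and 460 Hz together,
# while a 200 ms window (5 Hz resolution) separates them.
assert not can_resolve(440, 460, duration=0.010)
assert can_resolve(440, 460, duration=0.200)
```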

Too fast for neurons

Previous work by Freeman and colleagues has shown that the tectorial membrane plays a fundamental role in sound discrimination by carrying waves that stimulate a particular kind of sensory receptor. This process is essential in deciphering competing sounds, but it takes place too quickly for neural processes to keep pace. Nature, over the course of evolution, appears to have produced a very effective electromechanical system, Freeman says, that can keep up with the speed of these sound waves.

The new work explains how the membrane’s structure determines how well it filters sound. The team studied two genetic variants that cause nanopores within the tectorial membrane to be smaller or larger than normal. The pore size affects the viscosity of the membrane and its sensitivity to different frequencies.

The tectorial membrane is spongelike, riddled with tiny pores. By studying how its viscosity varies with pore size, the team was able to determine that the typical pore size observed in mice — about 40 nanometers across — represents an optimal size for combining frequency discrimination with overall sensitivity. Pores that are larger or smaller impair hearing.

“It really changes the way we think about this structure,” Ghaffari says. The new findings show that fluid viscosity and pores are actually essential to its performance. Changing the sizes of tectorial membrane nanopores, via biochemical manipulation or other means, can provide unique ways to alter hearing sensitivity and frequency discrimination.

William Brownell, a professor of otolaryngology at Baylor College of Medicine, says, “This is the first study to suggest that porosity may affect cochlear tuning.” This work, he adds, “could provide insight” into the development of specific hearing problems.

Filed under hearing inner ear tectorial membrane sound processing nanopores neuroscience science

154 notes

Image caption: When adult mice were kept in the dark for about a week, neural networks in the auditory cortex, where sound is processed, strengthened their connections from the thalamus, the midbrain’s switchboard for sensory information. As a result, the mice developed sharper hearing. This enhanced image shows fibers (green) that link the thalamus to neurons (red) in the auditory cortex. Cell nuclei are blue. Image by Emily Petrus and Amal Isaiah
A Short Stay in Darkness May Heal Hearing Woes
Call it the Ray Charles Effect: a young child who is blind develops a keen ability to hear things others cannot. Researchers have known this can happen in the brains of the very young, which are malleable enough to re-wire some circuits that process sensory information. Now researchers at the University of Maryland and Johns Hopkins University have overturned conventional wisdom, showing the brains of adult mice can also be re-wired, compensating for a temporary vision loss by improving their hearing.
The findings, published Feb. 5 in the peer-reviewed journal Neuron, may lead to treatments for people with hearing loss or tinnitus, said Patrick Kanold, an associate professor of biology at UMD who partnered with Hey-Kyoung Lee, an associate professor of neuroscience at JHU, to lead the study.
"There is some level of interconnectedness of the senses in the brain that we are revealing here," Kanold said.
"We can perhaps use this to benefit our efforts to recover a lost sense," said Lee. "By temporarily preventing vision, we may be able to engage the adult brain to change the circuit to better process sound."
Kanold explained that there is an early “critical period” for hearing, similar to the better-known critical period for vision. The auditory system in the brain of a very young child quickly learns its way around its sound environment, becoming most sensitive to the sounds it encounters most often. But once that critical period is past, the auditory system doesn’t respond to changes in the individual’s soundscape.
"This is why we can’t hear certain tones in Chinese if we didn’t learn Chinese as children," Kanold said. "This is also why children get screened for hearing deficits and visual deficits early. You cannot fix it after the critical period."
Kanold, an expert on how the brain processes sound, and Lee, an expert on the same processes in vision, thought the adult brain might be flexible if it were forced to work across the senses rather than within one sense. They used a simple, reversible technique to simulate blindness: they placed adult mice with normal vision and hearing in complete darkness for six to eight days.
After the adult mice were returned to a normal light-dark cycle, their vision was unchanged. But they heard much better than before.
The researchers played a series of one-note tones and tested the responses of individual neurons in the auditory cortex, a part of the brain devoted exclusively to hearing. Specifically, they tested neurons in a middle layer of the auditory cortex that receives signals from the thalamus, a deep brain structure that acts as a switchboard for sensory information. The neurons in this layer of the auditory cortex, called the thalamocortical recipient layer, were generally not thought to be malleable in adults.
But the team found that for the mice that experienced simulated blindness, these neurons did, in fact, change. In the mice placed in darkness, the tested neurons fired faster and more powerfully when the tones were played, were more sensitive to quiet sounds, and could discriminate sounds better. These mice also developed more synapses, or neural connections, between the thalamus and the auditory cortex.
The fact that the changes occurred in the cortex, an advanced sensory processing center structured about the same way in most mammals, suggests that flexibility across the senses is a fundamental trait of mammals’ brains, Kanold said.
"This makes me hopeful that we would see it in higher animals too," including humans, he said. "We don’t know how many days a human would have to be in the dark to get this effect, and whether they would be willing to do that. But there might be a way to use multi-sensory training to correct some sensory processing problems in humans."
The mice that experienced simulated blindness eventually reverted to normal hearing after a few weeks in a normal light-dark cycle. In the next phase of their five-year study, Kanold and Lee plan to look for ways to make the sensory improvements permanent, and to look beyond individual neurons to study broader changes in the way the brain processes sounds.


Filed under auditory cortex hearing vision blindness neurons thalamus neuroscience science

339 notes

Tinnitus discovery opens door to possible new treatment avenues
For tens of millions of Americans, there’s no such thing as the sound of silence. Instead, even in a quiet room, they hear a constant ringing, buzzing, hissing, humming or other noise in their ears that isn’t real. Called tinnitus, it can be debilitating and life-altering.
Now, University of Michigan Medical School researchers report new scientific findings that help explain what is going on inside these unquiet brains.
The discovery reveals an important new target for treating the condition. Already, the U-M team has a patent pending and device in development based on the approach.
The critical findings are published online in the prestigious Journal of Neuroscience. Though the work was done in animals, it provides a science-based, novel approach to treating tinnitus in humans.
Susan Shore, Ph.D., the senior author of the paper, explains that her team has confirmed that a process called stimulus-timing dependent multisensory plasticity is altered in animals with tinnitus – and that this plasticity is “exquisitely sensitive” to the timing of signals coming in to a key area of the brain.
That area, called the dorsal cochlear nucleus, is the first station for signals arriving in the brain from the ear via the auditory nerve. But it’s also a center where “multitasking” neurons integrate other sensory signals, such as touch, together with the hearing information.
Shore, who leads a lab in U-M’s Kresge Hearing Research Institute, is a Professor of Otolaryngology and Molecular and Integrative Physiology at the U-M Medical School, and also Professor of Biomedical Engineering, which spans the Medical School and College of Engineering.
She explains that in tinnitus, some of the input to the brain from the ear’s cochlea is reduced, while signals from the somatosensory nerves of the face and neck, related to touch, are excessively amplified.
“It’s as if the signals are compensating for the lost auditory input, but they overcompensate and end up making everything noisy,” says Shore.
The new findings illuminate the relationship between tinnitus, hearing loss and sensory input, and help explain why many tinnitus sufferers can change the volume and pitch of the sound by clenching their jaw or moving their head and neck.
But it’s not just the combination of loud noise and overactive somatosensory signals that are involved in tinnitus, the researchers report.
It’s the precise timing of these signals in relation to one another that prompts the changes in the nervous system’s plasticity mechanisms, which may lead to the symptoms familiar to tinnitus sufferers.
Shore and her colleagues, including former U-M biomedical engineering graduate student and first author Seth Koehler, Ph.D., hope their findings will eventually help many of the 50 million people in the United States and millions more worldwide who have the condition, according to the American Tinnitus Association. They hope to bring science-based approaches to the treatment of a condition for which there is no cure – and for which many unproven would-be therapies exist.
Tinnitus especially affects baby boomers, who increasingly experience it as they reach an age at which hearing tends to diminish. The condition most commonly occurs with hearing loss, but can also follow head and neck trauma, such as after an auto accident, or dental work.
Loud noises and blast forces experienced by members of the military in war zones also can trigger the condition. Tinnitus is a top cause of disability among members and veterans of the armed forces.
Researchers still don’t understand what protective factors might keep some people from developing tinnitus, while others exposed to the same conditions develop it.
In this study, only half of the animals exposed to excessive noise developed tinnitus. This mirrors the situation in humans: not everyone with hearing damage ends up with tinnitus. An important finding in the new paper is that animals that did not get tinnitus showed fewer changes in their multisensory plasticity than those with evidence of tinnitus. In other words, their neurons were not hyperactive.
Shore is now working with other students and postdoctoral fellows to develop a device that uses the new knowledge about the importance of signal timing to alleviate tinnitus. The device will combine sound and electrical stimulation of the face and neck in order to return to normal the neural activity in the auditory pathway.
“If we get the timing right, we believe we can decrease the firing rates of neurons at the tinnitus frequency, and target those with hyperactivity,” says Shore. She and her colleagues are also working to develop pharmacological manipulations that could enhance stimulus-timing dependent plasticity by changing specific molecular targets.
But, she notes, any treatment will likely have to be customized to each patient, and delivered on a regular basis. And some patients may be more likely to derive benefit than others.


Filed under tinnitus hearing hearing loss plasticity dorsal cochlear nucleus neurons neuroscience science

510 notes

Listening to the inner voice
Perhaps the most controversial book ever written in the field of psychology was Julian Jaynes’s mid-seventies classic, “The Origin of Consciousness in the Breakdown of the Bicameral Mind.” In it, Jaynes reaches the stunning conclusion that the seemingly all-pervasive and demanding gods of the ancients were not just whimsical personifications of inanimate objects like the sun or moon, nor anthropomorphizations of the various beasts, real and mythical, but rather the culturally barren inner voices of bilaterally symmetric brains not yet fully connected, nor conscious, in the way we are today.
In his view, all people of the day would have “heard voices,” much as schizophrenics do. These would have been experienced as hallucinations of sorts, coming from outside themselves as the unignorable voices of gods, rather than as commands originating from the other side of the brain. After a long hiatus, the study of the inner voice, and the larger mental baggage that comes with having one, has returned to the fore. Vaughan Bell, a researcher at King’s College London, recently published an insightful call to arms in PLOS Biology for psychologists and neurobiologists to build a new understanding of these phenomena.
A coherent inner narrative in sync with our actions is something most of us take for granted. Yet not everyone possesses one. The congenitally deaf, for example, may later acquire auditory and communicative function through cochlear implants. However, their inner experience of the spoken word, acquired by reattributing percepts that were previously gestural or visual in nature, is not typically shared or appreciated by the larger public. A similar lack of comprehension exists within the research community regarding those with physically intact senses but some other mental process gone awry. We may note with familiarity the shuffling and muttering of a homeless schizophrenic, yet have no systematic way to comprehend their intuitions, no matter how deluded they may appear.
Bell notes that current neurocognitive theories tend to ignore how those who hear voices first acquire what he describes as “internalized social actors.” In addition to live social interactions, “offline” social interaction with an internal model of the individuals who hold significant power in our lives would seem like a handy feature to have. We can readily imagine entirely non-pathological situations where such a model would be of benefit. A young child cut from a school basketball team they worked hard to make may be temporarily devastated, but hardly traumatized. If they renew their efforts to make the team the next year and practice each day in their backyard, they might imagine the coach who cut them watching their every shot with a critical eye. While this hallucinated guidance would be entirely benign, if the person they imagine is instead an abusive parent or classmate, the internal model might eventually take on a more sinister nature.
It would seem that in at least some individuals, the internal model is able to get the upper hand, particularly when that hand is forced. We might imagine a schoolchild tasked with the tedium of a seemingly endless recitation: saying the rosary, for example, in the Catholic school days of yore. The familiar “Hail Mary, full of Grace…” might, after so many repetitions, transform in the mind into something else, despite the earnestness of the professor of faith. “Hail Mary, full of…” might instead be completed with a different choice word that intrudes from elsewhere in the brain despite the alarmed child’s efforts to suppress it. Where this is vocalized externally, completely out of control as in full-blown Tourette’s syndrome, the child now has a problem.
The idea that separate voices represent separate hemispheres may be a good starting point, but it can readily be dismissed as the whole story. Auditory hallucinations can take the form of multiple social actors, clearly outnumbering our hemispheres, and all with different tones, personalities, and persistence of identity. Attempts have been made to localize brain activity to a particular narrative using EEG recordings, or to elicit a hallucination using magnetic stimulation. While the occasional insightful anecdote may be gleaned from these kinds of investigations, we should not expect much fine detail to ever come from them. The cortical area known as the temporoparietal junction routinely emerges as a favorite among brain imagers because of its location at the pinnacle of the major fold in the brain. Unfortunately, until a large-scale, minimally damaging recording technology exists, we will probably have to content ourselves with looking more closely at what subjects have to say about their own auditory hallucinations than at what their brains might have to say.
As children we learn to talk by talking to ourselves. Unless marooned on an island, we tend to abandon this behavior in polite company, for fear of stigmatization among other things. If the line between normalcy and pathology for hearing voices, or even talking back to them (so long as they do not command undesirable physical actions), is drawn with a greater acceptance for normalcy, a clearer understanding of the inner voice might be sooner in hand.


Filed under hallucinations temporoparietal junction inner voice hearing psychology neuroscience science

143 notes

Listen to this: Research upends understanding of how humans perceive sound
A key piece of the scientific model used for the past 30 years to help explain how humans perceive sound is wrong, according to a new study by researchers at the Stanford University School of Medicine.
The long-held theory helped to explain a part of the hearing process called “adaptation,” or how humans can hear everything from the drop of a pin to a jet engine blast with high acuity, without pain or damage to the ear. Its overturning could have significant impact on future research for treating hearing loss, said Anthony Ricci, PhD, the Edward C. and Amy H. Sewall Professor of Otolaryngology and senior author of the study.
“I would argue that adaptation is probably the most important step in the hearing process, and this study shows we have no idea how it works,” Ricci said. “Hearing damage caused by noise and by aging can target this particular molecular process. We need to know how it works if we are going to be able to fix it.”
The study was published Nov. 20 in Neuron. The lead author is postdoctoral scholar Anthony Peng, PhD.
Deep inside the ear, specialized cells called hair cells detect vibrations caused by air pressure differences and convert them into electrochemical signals that the brain interprets as sound. Adaptation is the part of this process that enables these sensory hair cells to regulate the decibel range over which they operate. The process helps protect the ear against sounds that are too loud by adjusting the ears’ sensitivity to match the noise level of the environment.
The traditional explanation for how adaptation works, based on earlier research on frogs and turtles, is that it is controlled by at least two complex cellular mechanisms, both requiring calcium entry through a specific, mechanically sensitive ion channel in auditory hair cells. The new study, however, finds that calcium is not required for adaptation in mammalian auditory hair cells and posits that one of the two previously described mechanisms is absent in cochlear hair cells.
Experimenting mostly on rats, the Stanford scientists used ultrafast mechanical stimulation to elicit responses from hair cells as well as high-speed, high-resolution imaging to track calcium signals quickly before they had time to diffuse. After manipulating intracellular calcium in various ways, the scientists were surprised to find that calcium was not necessary for adaptation to occur, thus challenging the 30-year-old hypothesis and opening the door to new models of mechanotransduction (the conversion of mechanical signals into electrical signals) and adaptation.
“This somewhat heretical finding suggests that at least some of the underlying molecular mechanisms for adaptation must be different in mammalian cochlear hair cells as compared to that of frog or turtle hair cells, where adaptation was first described,” Ricci said.
The study was conducted to better understand how the adaptation process works by studying the machinery of the inner ear that converts sound waves into electrical signals.
“To me this is really a landmark study,” said Ulrich Mueller, PhD, professor and chair of molecular and cellular neuroscience at the Scripps Research Institute in La Jolla, who was not involved with the study. “It really shifts our understanding. The hearing field has such precise models — models that everyone uses. When one of the models tumbles, it’s monumental.”
Humans are born with 30,000 cochlear and vestibular hair cells per ear. When a significant number of these cells are lost or damaged, hearing or balance disorders occur. Hair cell loss occurs for multiple reasons, including aging and damage to the ear from loud sounds. Damage or impairment to the process of adaptation may lead to the further loss of hair cells and, therefore, hearing. Unlike many other species, including birds, humans and other mammals are unable to spontaneously regenerate these hearing cells.
As the U.S. population has aged and noise pollution has grown more severe, health experts now estimate that one in three adults over the age of 65 has developed at least some degree of hearing disability because of the destruction of these limited number of hair cells.
“It’s by understanding just how the inner machinery of the ear works that scientists hope to eventually find ways to fix the parts that break,” Ricci said. “So when a key piece of the puzzle is shown to be wrong, it’s of extreme importance to scientists working to cure hearing loss.”

Listen to this: Research upends understanding of how humans perceive sound

A key piece of the scientific model used for the past 30 years to help explain how humans perceive sound is wrong, according to a new study by researchers at the Stanford University School of Medicine.

The long-held theory helped to explain a part of the hearing process called “adaptation,” or how humans can hear everything from the drop of a pin to a jet engine blast with high acuity, without pain or damage to the ear. Its overturning could have a significant impact on future research into treating hearing loss, said Anthony Ricci, PhD, the Edward C. and Amy H. Sewall Professor of Otolaryngology and senior author of the study.

“I would argue that adaptation is probably the most important step in the hearing process, and this study shows we have no idea how it works,” Ricci said. “Hearing damage caused by noise and by aging can target this particular molecular process. We need to know how it works if we are going to be able to fix it.”

The study was published Nov. 20 in Neuron. The lead author is postdoctoral scholar Anthony Peng, PhD.

Deep inside the ear, specialized cells called hair cells detect vibrations caused by air pressure differences and convert them into electrochemical signals that the brain interprets as sound. Adaptation is the part of this process that enables these sensory hair cells to regulate the decibel range over which they operate. The process helps protect the ear against sounds that are too loud by adjusting the ear’s sensitivity to match the noise level of the environment.
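As a rough illustration (not taken from the study), adaptation can be thought of as an automatic gain control: the hair cell shifts its limited operating window toward the ambient sound level, so both a pin drop and a jet engine can fall within its range once it has adapted. A minimal sketch in Python, with entirely made-up parameters:

```python
import math

def adapted_response(stimulus_db, ambient_db, range_db=40.0):
    """Toy automatic-gain-control model of hair-cell adaptation.

    The cell responds over a limited window (range_db, in decibels)
    centered on the ambient level it has adapted to. All numbers here
    are illustrative, not physiological measurements.
    """
    # Sigmoidal response curve centered on the current ambient level
    x = (stimulus_db - ambient_db) / (range_db / 4.0)
    return 1.0 / (1.0 + math.exp(-x))

# A 60 dB sound saturates a cell adapted to a quiet 20 dB environment...
quiet_adapted = adapted_response(60, ambient_db=20)
# ...but sits squarely mid-range once the cell adapts to 60 dB ambient noise.
noise_adapted = adapted_response(60, ambient_db=60)
```

The point of the sketch is only that the same stimulus can be saturating or comfortably mid-range depending on what the cell has adapted to; the study in question concerns the molecular machinery behind that shift, not this toy curve.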

The traditional explanation for how adaptation works, based on earlier research on frogs and turtles, is that it is controlled by at least two complex cellular mechanisms, both of which require calcium entry through a specific, mechanically sensitive ion channel in auditory hair cells. The new study, however, finds that calcium is not required for adaptation in mammalian auditory hair cells and posits that one of the two previously described mechanisms is absent in auditory cochlear hair cells.

Experimenting mostly on rats, the Stanford scientists used ultrafast mechanical stimulation to elicit responses from hair cells, as well as high-speed, high-resolution imaging to track calcium signals before they had time to diffuse. After manipulating intracellular calcium in various ways, the scientists were surprised to find that calcium was not necessary for adaptation to occur, thus challenging the 30-year-old hypothesis and opening the door to new models of mechanotransduction (the conversion of mechanical signals into electrical signals) and adaptation.

“This somewhat heretical finding suggests that at least some of the underlying molecular mechanisms for adaptation must be different in mammalian cochlear hair cells as compared to that of frog or turtle hair cells, where adaptation was first described,” Ricci said.

The study was conducted to better understand how the adaptation process works by studying the machinery of the inner ear that converts sound waves into electrical signals.

“To me this is really a landmark study,” said Ulrich Mueller, PhD, professor and chair of molecular and cellular neuroscience at the Scripps Research Institute in La Jolla, who was not involved with the study. “It really shifts our understanding. The hearing field has such precise models — models that everyone uses. When one of the models tumbles, it’s monumental.”

Humans are born with 30,000 cochlear and vestibular hair cells per ear. When a significant number of these cells are lost or damaged, hearing or balance disorders occur. Hair cell loss occurs for multiple reasons, including aging and damage to the ear from loud sounds. Damage or impairment to the process of adaptation may lead to the further loss of hair cells and, therefore, hearing. Unlike many other species, including birds, humans and other mammals are unable to spontaneously regenerate these hearing cells.

As the U.S. population has aged and noise pollution has grown more severe, health experts now estimate that one in three adults over the age of 65 has developed at least some degree of hearing disability because of the destruction of this limited supply of hair cells.

“It’s by understanding just how the inner machinery of the ear works that scientists hope to eventually find ways to fix the parts that break,” Ricci said. “So when a key piece of the puzzle is shown to be wrong, it’s of extreme importance to scientists working to cure hearing loss.”
