Posts tagged hearing

As Baby Boomers age, many experience difficulty hearing and understanding conversations in noisy environments such as restaurants. People who are hearing-impaired and wear hearing aids or cochlear implants are even more severely affected. Researchers know that the ability to locate the source of a sound with ease is vital to hearing well in these situations, but much more information about how hearing works is needed to design devices that perform better in noisy environments.
Researchers from the Eaton-Peabody Laboratories of the Massachusetts Eye and Ear, Harvard Medical School, and Research Laboratory of Electronics, Massachusetts Institute of Technology have gained new insight into how localized hearing works in the brain. Their research is published in the Oct. 2, 2013 issue of the Journal of Neuroscience.
“Most people are able to locate the source of a sound with ease, for example, a snapping twig on the left, or a honking horn on the right. However, this is actually a difficult problem for the brain to solve,” said Mitchell L. Day, Ph.D., investigator in the Eaton-Peabody Laboratories at Mass. Eye and Ear and instructor of Otology and Laryngology at Harvard Medical School. “The higher levels of the brain that decide the direction a sound is coming from do not have access to the actual sound, but only to the representation of that sound in the electrical activity of neurons at lower levels in the brain. How higher levels of the brain use information contained in the electrical activity of these lower-level neurons to create the perception of sound location is not known.”
In the experiment, researchers recorded the electrical activity of individual neurons in an essential lower-level auditory brain area called the inferior colliculus (IC) while an animal listened to sounds coming from different directions. They found that the location of a sound source could be accurately predicted from the pattern of activation across a population of fewer than 100 IC neurons – i.e., a particular pattern of IC activation indicated a particular location in space. Researchers further found that the pattern of IC activation could correctly distinguish whether there was a single sound source present or two sources coming from different directions – i.e., the pattern of IC activation could segregate concurrent sources.
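The population-decoding idea described above can be illustrated with a toy simulation. This is a minimal sketch, not the authors' analysis: the Gaussian tuning curves, the 30° tuning width, the noise level, the 13 candidate locations, and the nearest-template decoder are all invented assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 100 model "IC neurons", each tuned to a preferred azimuth.
n_neurons = 100
azimuths = np.linspace(-90, 90, 13)            # candidate source locations, degrees
preferred = rng.uniform(-90, 90, n_neurons)    # each neuron's preferred direction

def population_response(azimuth):
    """Noisy firing rates: Gaussian tuning (width 30 deg, assumed) plus noise."""
    rates = np.exp(-0.5 * ((azimuth - preferred) / 30.0) ** 2)
    return rates + rng.normal(0, 0.2, n_neurons)

# Build a template (mean activation pattern) per location from training trials.
templates = np.array([
    np.mean([population_response(a) for _ in range(50)], axis=0) for a in azimuths
])

def decode(pattern):
    """Read out location as the azimuth whose template best matches the pattern."""
    return azimuths[np.argmin(np.linalg.norm(templates - pattern, axis=1))]

# Decode fresh test trials and measure accuracy (chance level here is 1/13).
trials_per_loc = 20
correct = sum(decode(population_response(a)) == a
              for a in azimuths for _ in range(trials_per_loc))
accuracy = correct / (len(azimuths) * trials_per_loc)
```

Even this crude template-matching readout recovers the source location far above chance from 100 noisy "neurons", which is the sense in which a particular activation pattern can indicate a particular location in space.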
“Our results show that higher levels of the brain may be able to accurately segregate and localize sound sources based on the detection of patterns in a relatively small population of IC neurons,” said Dr. Day. “We hope to learn more so that someday we can design devices that work better in noisy environments.”
(Source: masseyeandear.org)

UI study shows fruit fly is ideal model to study hearing loss in people
If your attendance at too many rock concerts has impaired your hearing, listen up.
University of Iowa researchers say that the common fruit fly, Drosophila melanogaster, is an ideal model to study hearing loss in humans caused by loud noise. The reason: the molecular underpinnings of its hearing are roughly the same as in people.
As a result, scientists may choose to use the fruit fly to quicken the pace of research into the cause of noise-induced hearing loss and potential treatment for the condition, according to a paper published this week in the online Early Edition of the journal Proceedings of the National Academy of Sciences.
“As far as we know, this is the first time anyone has used an insect system as a model for NIHL (noise-induced hearing loss),” says Daniel Eberl, UI biology professor and corresponding author on the study.
Hearing loss caused by loud noise encountered in an occupational or recreational setting is an expensive and growing health problem, as young people use ear buds to listen to loud music and especially as the aging Baby Boomer generation enters retirement. Despite this trend, “the molecular and physiological models involved in the problem or the recovery are not fully understood,” Eberl notes.
Enter the fruit fly as an unlikely proxy for researchers to learn more about how loud noises can damage the human ear. Eberl and Kevin Christie, lead author on the paper and a post-doctoral researcher in biology, say they were motivated by the prospect of finding a model that may hasten the day when medical researchers can fully understand the factors involved in noise-induced hearing loss and how to alleviate the problem. The study arose from a pilot project conducted by UI undergraduate student Wes Smith, in Eberl’s lab.
“The fruit fly model is superior to other models in genetic flexibility, cost, and ease of testing,” Christie says.
The fly uses its antenna as its ear, which resonates in response to courtship songs generated by wing vibration. The researchers exposed a test group of flies to a loud, 120-decibel tone in the center of the fruit fly’s audible range. This over-stimulated their auditory system, much like exposure at a rock concert or to a jackhammer. Later, the flies’ hearing was tested by playing a series of song pulses at a naturalistic volume and measuring the physiological response with tiny electrodes inserted into their antennae. The fruit flies that received the loud tone had impaired hearing relative to the control group.
When the flies were tested again a week later, those exposed to noise had recovered normal hearing levels. In addition, when the structure of the flies’ ears was examined in detail, the researchers discovered that nerve cells of the noise-rattled flies showed signs that they had been exposed to stress, including altered shapes of the mitochondria, which are responsible for generating most of a cell’s energy supply. Flies with a mutation making them susceptible to stress not only showed more severe reductions in hearing ability and more prominent changes in mitochondria shape, they still had deficits in hearing 7 days later, when normal flies had recovered.
The effects on the molecular underpinnings of the fruit fly’s ear are the same as those experienced by humans, making the tests broadly applicable to people, the researchers note.
“We found that fruit flies exhibit acoustic trauma effects resembling those found in vertebrates, including inducing metabolic stress in sensory cells,” Eberl says. “Our report is the first to report noise trauma in Drosophila and is a foundation for studying molecular and genetic conditions resulting from NIHL.”
“We hope eventually to use the system to look at how genetic pathways change in response to NIHL. Also, we would like to learn how the modification of genetic pathways might reduce the effects of noise trauma,” Christie adds.

Brain picks out salient sounds from background noise by tracking frequency and time
New research reveals how our brains are able to pick out important sounds from the noisy world around us. The findings, published online today in the journal ‘eLife’, could lead to new diagnostic tests for hearing disorders.
Our ears can effortlessly pick out the sounds we need to hear from a noisy environment - hearing our mobile phone ringtone in the middle of the Notting Hill Carnival, for example - but how our brains process this information (the so-called ‘cocktail party problem’) has been a longstanding research question in hearing science.
Researchers have previously investigated this using simple sounds such as two tones of different pitches, but now researchers at UCL and Newcastle University have used complicated sounds that are more representative of those we hear in real life. The team used ‘machine-like beeps’ that overlap in both frequency and time to recreate a busy sound environment and obtain new insights into how the brain solves this problem.
In the study, groups of volunteers were asked to identify target sounds from within this noisy background in a series of experiments.
Sundeep Teki, a PhD student from the Wellcome Trust Centre for Neuroimaging at UCL and joint first author of the study, said: “Participants were able to detect complex target sounds from the background noise, even when the target sounds were delivered at a faster rate or there was a loud disruptive noise between them.”
Dr Maria Chait, a senior lecturer at UCL Ear Institute and joint first author on the study, adds: “Previous models based on simple tones suggest that people differentiate sounds based on differences in frequency, or pitch. Our findings show that time is also an important factor, with sounds grouped as belonging to one object by virtue of being correlated in time.”
Professor Tim Griffiths, Professor of Cognitive Neurology at Newcastle University and lead researcher on the study, said: “Many hearing disorders are characterised by the loss of ability to detect speech in noisy environments. Disorders like this that are caused by problems with how the brain interprets sound information, rather than physical damage to the ear and hearing machinery, remain poorly understood.
"These findings inform us about a fundamental brain mechanism for detecting sound patterns and identify a process that can go wrong in hearing disorders. We now have an opportunity to create better tests for these types of hearing problems."

First man to hear people before they speak
"I told my daughter her living room TV was out of sync. Then I noticed the kitchen telly was also dubbed badly. Suddenly I noticed that her voice was out of sync too. It wasn’t the TV, it was me."
Ever watched an old movie, only for the sound to go out of sync with the action? Now imagine every voice you hear sounds similarly off-kilter – even your own. That’s the world PH lives in. Soon after surgery for a heart problem, he began to notice that something wasn’t quite right.
"I was staying with my daughter and they like to have the television on in their house. I turned to my daughter and said ‘you ought to get a decent telly, one where the sound and programme are synchronised’. I gave a little chuckle. But they said ‘there’s nothing wrong with the TV’."
Puzzled, he went to the kitchen to make a cup of tea. “They’ve got another telly up on the wall and it was the same. I went into the lounge and I said to her ‘hey you’ve got two TVs that need sorting!’.”
That was when he started to notice that his daughter’s speech was out of time with her lip movements too. “It wasn’t the TV, it was me. It was happening in real life.”
PH is the first confirmed case of someone who hears people speak before registering the movement of their lips. His situation is giving unique insights into how our brains unify what we hear and see.
It’s unclear why PH’s problem started when it did – but it may have had something to do with having acute pericarditis, inflammation of the sac around the heart, or the surgery he had to treat it.
Brain scans after the timing problems appeared showed two lesions in areas thought to play a role in hearing, timing and movement. “Where these came from is anyone’s guess,” says PH. “They may have been there all my life or as a result of being in intensive care.”
Disconcerting delay
Several weeks later, PH realised that it wasn’t just other people who were out of sync: when he spoke, he registered his words before he felt his jaw make the movement. “It felt like a significant delay, it sort of snuck up on me. It was very disconcerting. At the time I didn’t know whether the delay was going to get bigger, but it seems to have stuck at about a quarter of a second.”
Light and sound travel at different speeds, so when someone speaks, visual and auditory inputs arrive at our eyes and ears at different times. The signals are then processed at different rates in the brain. Despite this, we normally perceive the events as happening simultaneously – but how the brain achieves this is unclear.
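The physical part of this mismatch is easy to quantify. As a rough illustration (assuming sound travels at about 343 m/s in air; the distances and the comparison to PH's offset are illustrative, not from the study), the sound of a talker lags the sight of their lips by an amount that grows with distance:

```python
# Assumed constants for a back-of-the-envelope calculation.
SPEED_OF_SOUND = 343.0            # m/s, dry air at ~20 degrees C
SPEED_OF_LIGHT = 299_792_458.0    # m/s

def audio_visual_lag_ms(distance_m):
    """Arrival-time difference (sound minus light), in milliseconds."""
    return (distance_m / SPEED_OF_SOUND - distance_m / SPEED_OF_LIGHT) * 1000.0

lag_10m = audio_visual_lag_ms(10)   # roughly 29 ms across a large room
lag_69m = audio_visual_lag_ms(69)   # roughly 200 ms, comparable to PH's offset
```

A purely physical lag the size of PH's quarter-second offset would require a talker tens of metres away, which is why, at conversational distances, the compensation must happen perceptually in the brain rather than in the physics.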
To investigate PH’s situation, Elliot Freeman at City University London and colleagues performed a temporal order judgement test. PH was shown clips of people talking and was asked whether the voice came before or after the lip movements. Sure enough, he said it came before, and to perceive them as synchronous the team had to play the voice about 200 milliseconds later than the lip movements.
The team then carried out a second, more objective test based on the McGurk illusion. This involves listening to one syllable while watching someone mouth another; the combination makes you perceive a third syllable.
Since PH hears people speaking before he sees their lips move, the team expected the illusion to work when they delayed the voice. So they were surprised to get the opposite result: presenting the voice 200 ms earlier than the lip movements triggered the illusion, suggesting that his brain was processing the sight before the sound in this particular task.
And it wasn’t only PH who gave these results. When 37 others were tested on both tasks, many showed a similar pattern, though none of the mismatches were noticeable in everyday life.
Many clocks
Freeman says this implies that the same event in the outside world is perceived by different parts of your brain as happening at different times. This suggests that, rather than one unified “now”, there are many clocks in the brain – two of which showed up in the tasks – and that all the clocks measure their individual “nows” relative to their average.
In PH’s case, one or more of these clocks has been significantly slowed – shifting his average – possibly as a result of the lesions. Freeman thinks PH’s timing discrepancies may be too large and have happened too suddenly for him to ignore or adapt to, resulting in him being aware of the asynchrony in everyday life. He may perceive just one of his clocks because it is the only one he has conscious access to, says Freeman.
PH says that in general he has learned to live with the sensory mismatch but admits he has trouble in noisy places or at large meetings. Since he hears himself speak before he feels his mouth move, does he ever feel like he’s not in control of his own voice? “No, I’m definitely sure it’s me that’s speaking,” he says, “it’s just a strange sensation.”
Help may be at hand: Freeman is looking for a way to slow down PH’s hearing so it matches what he is seeing. PH says he would be happy to trial a treatment, but he’s actually not that anxious to fix the problem. “It’s not life-threatening,” he says. “You learn to live with these things as you get older. I don’t expect my body to work perfectly.”

Hearing loss from loud blasts may be treatable
Long-term hearing loss from loud explosions, such as blasts from roadside bombs, may not be as irreversible as previously thought, according to a new study by researchers at the Stanford University School of Medicine.
Using a mouse model, the study found that loud blasts actually cause hair-cell and nerve-cell damage, rather than structural damage, to the cochlea, which is the auditory portion of the inner ear. This could be good news for the millions of soldiers and civilians who, after surviving these often devastating bombs, suffer long-term hearing damage.
“It means we could potentially try to reduce this damage,” said John Oghalai, MD, associate professor of otolaryngology and senior author of the study, published July 1 in PLOS ONE. If the cochlea, an extremely delicate structure, had been shredded and ripped apart by a large blast, as earlier studies have asserted, the damage would be irreversible. (Researchers presume that the damage seen in these previous studies may have been due to the use of older, less sophisticated imaging techniques.)
“The most common issue we see veterans for is hearing loss,” said Oghalai, a scientist and clinician who treats patients at Stanford Hospital & Clinics and directs the hearing center at Lucile Packard Children’s Hospital.
The increasingly common use of improvised explosive devices, or IEDs, around the world provided the impetus for the new study, which was primarily funded by the U.S. Department of Defense. Among veterans with service-connected disabilities, tinnitus — a constant ringing in the ears — is the most prevalent condition. Hearing loss is the second-most-prevalent condition. But the results of the study would prove true for anyone who is exposed to loud blasts from other sources, such as jet engines, air bags or gunfire.
More than 60 percent of wounded-in-action service members have eardrum injuries, tinnitus or hearing loss, or some combination of these, the study says. Twenty-eight percent of all military personnel experience some degree of hearing loss post-deployment. The most devastating effect of blast injury to the ear is permanent hearing loss due to trauma to the cochlea. But exactly how this damage is caused has not been well understood.
The ears are extremely fragile instruments. Sound waves enter the ear, causing the eardrums to vibrate. These vibrations get sent to the cochlea in the inner ear, where fluid carries them to rows of hair cells, which in turn stimulate auditory nerve fibers. These impulses are then sent to the brain via the auditory nerve, where they get interpreted as sounds.
Permanent hearing loss from loud noise begins at about 85 decibels, typical of a hair dryer or a food blender. IEDs have noise levels approaching 170 decibels.
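Because decibels are logarithmic, the gap between those two figures is far larger than it looks. A quick calculation using the standard definition (every 10 dB is a tenfold increase in sound intensity):

```python
def intensity_ratio(db_high, db_low):
    """Intensity ratio implied by a decibel difference (10 dB = 10x intensity)."""
    return 10 ** ((db_high - db_low) / 10)

# An IED blast (~170 dB) versus the ~85 dB threshold for permanent damage:
ratio = intensity_ratio(170, 85)   # 10**8.5, i.e. hundreds of millions of times
```

The 85 dB difference cited in the article thus corresponds to an intensity ratio of about 300 million to one.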
Damage to the eardrum is known to be common after large blasts, but this is easily detected during a clinical exam and usually can heal itself — or is surgically repairable — and is thus not typically the cause of long-term hearing loss.
In order to determine exactly what is causing the permanent hearing loss, Stanford researchers created a mouse model to study the effects of noise blasts on the ear.
After exposing anesthetized mice to loud blasts, researchers examined the inner workings of the mouse ear from the eardrum to the cochlea. The ears were examined from day one through three months. A micro-CT scanner was used to image the workings of the ear after dissection.
“When we looked inside the cochlea, we saw the hair-cell loss and auditory-nerve-cell loss,” Oghalai said.
“With one loud blast, you lose a huge number of these cells. What’s nice is that the hair cells and nerve cells are not immediately gone. The theory now is that if the ear could be treated with certain medications right after the blast, that might limit the damage.”
Previous studies on larger animals had found that the cochlea was torn apart and shredded after exposure to a loud blast. Stanford scientists did not find this in the mouse model and speculate that the use of older research techniques may have caused the damage.
“We found that the blast trauma is similar to what we see from lower noise exposure over time,” said Oghalai. “We lose the sensory hair cells that convert sound vibrations into electrical signals, and also the auditory nerve cells.”
Much of the resulting hearing loss after such blast damage to the ear is actually caused by the body’s immune response to the injured cells, Oghalai said. The creation of scar tissue to help heal the injury is a particular problem in the ear because the organ needs to vibrate to allow the hearing mechanism to work. Scar tissue damages that ability.
“There is going to be a window where we could stop whatever the body’s inflammatory response would be right after the blast,” Oghalai said. “We might be able to stop the damage. This will determine future research.”

When a pedestrian hears the screech of a car’s brakes, she has to decide whether, and if so, how, to move in response. Is the action taking place blocks away, or 20 feet to the left?
One of the truly primal mechanisms that we depend on every day of our lives — acting on the basis of information gathered by our sense of hearing — is yielding its secrets to modern neuroscience. A team of researchers from Cold Spring Harbor Laboratory (CSHL) today publishes experimental results in the journal Nature which they describe as surprising. The results fill in a key piece of the puzzle about how mammals act on the basis of sound cues.
It’s well known that sounds detected by the ears wind up in a part of the brain called the auditory cortex, where they are translated – transduced – into information that scientists call representations. These representations, in turn, form the informational basis upon which other parts of the brain can make decisions and issue commands for specific actions. What scientists have not understood is what happens between the auditory cortex and portions of the brain that ultimately issue commands, say, for muscles to move in response to the sound of that car’s screeching brakes.
To find out, CSHL Professor Anthony Zador and Dr. Petr Znamenskiy trained rats to listen to sounds and to make decisions based on those sounds. When a high-frequency sound is played, the animals are rewarded if they move to the left. When the sound is low-pitched, the reward is given if the animal moves right.

To the striatum
On the simplest level, says Zador, “we know that sound is coming into the ear; and we know what’s coming out in the end – a decision,” in the form of a muscle movement. The surprise, he says, is the destination of the information used by the animal to perform this task of discriminating between sounds of high and low frequency, as revealed in his team’s experiments.
“It turns out the information passes through a particular subset of neurons in the auditory cortex whose axons wind up in another part of the brain, called the striatum,” says Zador. The classic series of experiments that provided inspiration and a model for this work, performed at Stanford University by William Newsome and colleagues, involved the visual system of primates, and had led Zador to expect by analogy that representations formed in the auditory cortex would lead to other locations within the cortex.
These experiments in rats have implications for how neural circuits make decisions, according to Zador. Even though many neurons in auditory cortex are “tuned” to low or high frequencies, most do not transmit their information directly to the striatum. Rather, their information is transmitted by a much smaller number of neurons in their vicinity, which convey their “votes” directly to the striatum.
“This is like the difference between a direct democracy and a representative democracy, of the type we have in the United States,” Zador explains. “In a direct democracy model of how the auditory cortex conveys information to the rest of the brain, every neuron activated by a low- or high-pitched sound would have a ‘vote.’ Since there is noise in every perception, some minority of neurons will indicate ‘low’ when the sound is in fact ‘high,’ and vice-versa. In the direct democracy model, the information sent to the striatum for further action would be the equivalent of a simple sum of all these votes.
“In contrast – and this is what we found to be the case – the neurons registering ‘high’ and ‘low’ are represented by a specialized subset of neurons in their local area, which we might liken to members of Congress or the Electoral College: these in turn transmit the votes of the larger population to the place — in this case the auditory striatum — in which decisions are made and actions are taken.”
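Zador's voting analogy can be sketched as a toy simulation. This is purely illustrative, not the study's model: the neuron count, the per-neuron reliability, and the grouping of "constituents" under each "representative" are invented numbers.

```python
import random

random.seed(1)
N, P_CORRECT = 1000, 0.65   # invented: population size and per-neuron reliability

def neuron_votes(truth):
    """Each neuron 'votes' for the true pitch category with probability P_CORRECT."""
    return [truth if random.random() < P_CORRECT else 1 - truth for _ in range(N)]

def direct_democracy(votes):
    """Sum every neuron's vote and take the overall majority."""
    return int(sum(votes) > len(votes) / 2)

def representative_democracy(votes, n_reps=50):
    """A small subset of 'representatives' each relays the majority of its group."""
    size = len(votes) // n_reps
    rep_votes = [int(sum(votes[i * size:(i + 1) * size]) > size / 2)
                 for i in range(n_reps)]
    return int(sum(rep_votes) > n_reps / 2)

truth = 1  # say, "high-pitched"
votes = neuron_votes(truth)
decisions = (direct_democracy(votes), representative_democracy(votes))
```

Both schemes recover the correct decision from noisy individual votes; the study's finding is that the brain's wiring resembles the second, with a small subset of corticostriatal neurons relaying the larger population's information to the striatum.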
(Source: cshl.edu)
New understanding of hearing loss
A major breakthrough in the understanding of hearing and noise-induced hearing loss has been made by hearing scientists from three Pacific Rim universities.
Scientists from The University of Auckland, the University of New South Wales in Sydney, and the University of California in San Diego have collaborated for nearly 20 years on this research.
“This work represents a paradigm shift in understanding how our ears respond to noise exposure,” says Professor Peter Thorne from The University of Auckland, who is one of the co-authors of two papers published recently in the prestigious journal, the Proceedings of the National Academy of Sciences (PNAS) [1, 2].
“We demonstrate that what we traditionally regard as a temporary hearing loss from noise exposure is in fact the cochlea of the inner ear adapting to the noisy environment, turning itself down in order to be able to detect new signals that appear in the noise,” he says.
After the noise is turned off, hearing remains temporarily dull for some time while it readjusts to the lack of noise.
“Clinically, this is what we measure as a temporary hearing loss,” says Professor Thorne. “This has always been regarded as an indication of noise damage rather than, in our new view, a normal physiological process.”
The researchers show that this is due to a molecular signalling pathway in the cochlea, mediated by a chemical compound called ATP, released by the cochlear tissue with noise and activating specific ATP receptors in the cochlear cells.
“Interestingly, if the pathway is removed, such as by genetic manipulations, this adaptive mechanism doesn’t occur and the ear becomes very vulnerable to longer term noise exposure and the effects of age, eventually resulting in permanent hearing loss.”
“In other words the adaptive mechanism also protects the ear,” says Professor Thorne.
The second paper, done in collaboration with United States colleagues, reveals a new genetic cause of deafness in humans which involves exactly the same mechanism.
People (two families in China) who had a mutation in the ATP receptor showed a rapidly progressing hearing loss which was accelerated if they worked in noisy environments.
“This work is important because it shows that our ears naturally adapt to their environment, a bit like pupils of the eye which dilate or constrict with light, but over a longer time course,” Professor Thorne says.
This inherent adaptive process also protects the ear from noise and age-related wear and tear. People who lack the genes that produce this protection are more susceptible to developing hearing loss.
“This may go some way to explaining why some people are very vulnerable to noise or develop hearing loss with age and others don’t,” he says.
“Our research demonstrates that what we have always thought was temporary noise damage (i.e. the temporary hearing loss experienced in night clubs or a day’s work in factories), may not be this, but instead, is the ear regulating its sensitivity in background noise”.
“Although our research suggests that our hearing adapts in some noise environments, this has limits,” says Professor Thorne. “If we exceed the safe dose of noise, our ears can still be damaged permanently despite this apparent protective mechanism.”
“People need to protect their ears from constant noise exposure to prevent hearing loss and this is particularly important in the workplace and with personal music devices which can deliver high sound levels for long periods of time,” he says.

New research from the Massachusetts Eye and Ear, Harvard Medical School and Harvard Program in Speech and Hearing Bioscience and Technology may have discovered a key piece in the puzzle of how hearing works by identifying the role of the olivocochlear efferent system in protecting ears from hearing loss. The findings could eventually lead to screening tests to determine who is most susceptible to hearing loss. Their paper is published today in the Journal of Neuroscience.
It has long been known that exposure to a noisy environment (a concert, an iPod, power tools, firearms, etc.) can lead to permanent or temporary hearing loss. Most audiologists assess the damage caused by this type of exposure by measuring hearing thresholds, the lowest level at which one begins to detect a sound at a particular frequency (pitch). Drs. Sharon Kujawa and Charles Liberman, both researchers at Mass. Eye and Ear, showed in 2009 that noise exposures leading to a temporary hearing loss in mice (when hearing thresholds return to what they were before exposure) can in fact be associated with cochlear neuropathy, a condition in which, despite normal thresholds, a portion of auditory nerve fibers is missing.
The inner ear, the organ that converts sounds into messages that will be conveyed to and decoded by the brain, receives in turn fibers from the central nervous system. Those fibers are known as the olivocochlear efferent system. Up to now, the involvement of this efferent system in the protection from acoustic injury – although clearly demonstrated – has been a matter of debate because all the previous experiments were probing its protective effects following noise exposures very unlikely to be found in nature.
Stephane Maison, Ph.D., investigator at the Eaton-Peabody Laboratory at Mass. Eye and Ear and lead author, explains: “Humans are currently exposed to the type of noise used in those experiments, but it’s hard to conceive that some vertebrates, thousands of years ago, were subjected to stimuli similar to those delivered by speakers. So many researchers believed that the protective effects of the efferent system were an epiphenomenon – not its true function.”
“Instead of using loud noise exposures that evoke a change in hearing threshold, we used a moderate noise exposure at a level similar to those found in restaurants, conferences, and malls, and also in nature (some frogs emit vocalizations at similar or higher levels). And instead of looking at thresholds, we looked for signs of cochlear neuropathy,” Dr. Maison continued.
The researchers demonstrated that such moderate exposure led to cochlear neuropathy (a loss of auditory nerve fibers), which causes difficulty hearing in noisy environments.
"This is tremendously important because all of us are submitted to such acoustic environments, and it takes a lot of auditory nerve fiber loss before it can be detected by simply measuring thresholds, as is done when performing an audiogram," Dr. Maison said. "The second important discovery is that, in mice where the efferent system has been surgically removed, cochlear neuropathy is tremendously exacerbated. That second piece proves that the efferent system does play a very important role in protecting the ear from cochlear neuropathy, and we may have found its main function."
The researchers say they are excited about this discovery because the strength of the efferent system can be recorded non-invasively in humans: such an assay has already been developed and can predict vulnerability to acoustic injury (Maison and Liberman, "Predicting vulnerability to acoustic injury with a noninvasive assay of olivocochlear reflex strength," Journal of Neuroscience, 20:4701-4707, 2000).
"One could envision applying this assay or a modified version of it to human populations to screen for individuals most at risk in noise environments," Dr. Maison concluded.
(Source: eurekalert.org)

Now hear this: Researchers identify forerunners of inner-ear cells that enable hearing
Researchers at the Stanford University School of Medicine have identified a group of progenitor cells in the inner ear that can become the sensory hair cells and adjacent supporting cells that enable hearing. Studying these progenitor cells could someday lead to discoveries that help millions of Americans suffering from hearing loss due to damaged or impaired sensory hair cells.
“It’s well known that, in mammals, these specialized sensory cells don’t regenerate after damage,” said Alan Cheng, MD, assistant professor of otolaryngology. (In contrast, birds and fish are much better equipped: They can regain their sensory cells after trauma caused by noise or certain drugs.) “Identifying the progenitor cells, and the cues that trigger them to become sensory cells, will allow us to better understand not just how the inner ear develops, but also how to devise new ways to treat hearing loss and deafness.”
The research was published online Feb. 26 in Development. Cheng is the senior author. Former medical student Taha Jan, MD, and postdoctoral scholar Renjie Chai, PhD, share lead authorship of the study. Roel Nusse, PhD, a professor of developmental biology, is a co-senior author of the research.
The inner ear is a highly specialized structure for gathering and transmitting vibrations in the air. The auditory compartment, called the cochlea, is a snail-shaped cavity that houses specialized cells with hair-like projections that sense vibration, much like seaweed waving in the ocean current. These hair cells are responsible for both hearing and balance, and are surrounded by supporting cells that are also critical for hearing.
Twenty percent of all Americans, and up to 33 percent of those ages 65-74, suffer from hearing loss. Hearing aids and, in severe cases, cochlear implants can be helpful for many people, but neither addresses the underlying cause: the loss of hair cells in the inner ear. Cheng and his colleagues identified a class of cells called tympanic border cells that can give rise to hair cells and the cells that support them during a phase of cochlear maturation right after birth.
“Until now, these cells have had no clear function,” said Cheng. “We used several techniques to define their behavior in cell culture dishes, as well as in mice. I hope these findings will lead to new areas of research to better understand how our ears develop and perhaps new ways to stimulate the regeneration of sensory cells in the cochlea.”

Children with auditory processing disorder may now have more treatment options
Several Kansas State University faculty members are helping children with auditory processing disorder receive better treatment.
Debra Burnett, assistant professor of family studies and human services and a licensed speech-language pathologist, started the Enhancing Auditory Responses to Speech Stimuli, or EARSS, program. The Kansas State University Speech and Hearing Center offers the program, which uses evidence-based practices to treat auditory processing disorder.
Other Kansas State University faculty members involved in the program include Melanie Hilgers, clinic director and instructor in family studies and human services, and Robert Garcia, audiologist and program director for communication sciences and disorders. Several graduate students also are involved.
Auditory processing disorder affects how the brain processes language. Children and adults with auditory processing disorder have normal hearing sensitivity and will pass a hearing test, but their brains do not appropriately process what they hear.
"A lot of therapy targets these skills," Burnett said. "It’s almost like re-laying the road in the brain that deals with auditory information. For whatever reason, it didn’t develop properly, so the therapy is about reworking these skills."
Burnett and collaborators started the program after attending a conference for the Kansas State Speech-Language-Hearing Association. The conference included a workshop on ways to incorporate speech-language pathologists into therapy for auditory processing disorder.
"In the past, it has kind of been in the domain of the audiologist to do all of the testing and all of the therapy," Burnett said. "Speech-language pathologists have been involved in some augmentative therapy, but not in the core therapy. That is all starting to change."
Last summer, Burnett and her colleagues started a Kansas State University therapy program that involves speech-language pathologists. Seven children participated during the summer, two during the fall semester, and one has continued during the spring semester. The children, all diagnosed with auditory processing disorder, range in age from 8 to 14 and come from north-central Kansas.
Before children begin the program, Burnett performs a pretest to determine their needs and the best way to approach therapy with them. A graduate student clinician, supervised by a licensed speech-language pathologist, meets with the children one hour per week for activities designed to improve their auditory processing skills.
At the end of the program, Burnett performs a posttest to identify changes. The researchers have seen positive results so far: all of the children who participated in the posttest showed improvements in the treated areas, while in the untreated areas they neither improved nor got worse.
"Based on these results, our program is showing early signs of being effective," Burnett said.