Posts tagged hearing

June 22, 2012 By Virat Markandeya
Listening to a single voice in a crowded cocktail party sometimes seems like picking a needle out of a haystack, but new research shows that people may be better at this than expected.

New research shows that people can comprehend one sound among many.
The results surprised the University of Washington, Seattle, research team, which tested how well people could pick out one sound from a dense collection of noises.
The researchers asked ten subjects to listen to multiple streams of letters. A stream consisted of a repeating letter, for example, Q-Q-Q-Q. If four streams were played, the listener heard four different repeating letters, say, D, C, Q and J. The letters came fast: the interval between successive letters was just one-twelfth of a second.
In front of the listener was a computer screen. Before the start of each trial, the researchers displayed one of the four letters on the screen to prime the subject to focus on that stream. If the listener heard an oddball letter in that stream, such as R instead of Q, he was to press a button.
To make it easier on the listener, each letter stream carried a different pitch and came from a different location in the room. R was chosen as the oddball because it doesn’t rhyme with any other letter.
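For concreteness, the trial structure just described can be sketched in code. The stream count, letter pool, and oddball placement below are illustrative choices, not the study's exact parameters:

```python
# Hedged sketch of one trial from the described paradigm: several streams of
# repeating letters with onsets one-twelfth of a second apart, and a rare
# oddball "R" substituted into the cued stream. Details are illustrative.
import random

def make_trial(n_streams=4, tokens_per_stream=20, interval=1 / 12):
    # Candidate letters (R is reserved as the oddball, as in the study)
    pool = ["D", "C", "Q", "J", "A", "E", "U", "B", "I", "O", "K", "Y"]
    letters = random.sample(pool, n_streams)
    target = random.choice(letters)               # letter the listener is cued to
    oddball_pos = random.randrange(5, tokens_per_stream)
    streams = {}
    for letter in letters:
        seq = [letter] * tokens_per_stream
        if letter == target:
            seq[oddball_pos] = "R"                # oddball replaces one repetition
        streams[letter] = seq
    onsets = [i * interval for i in range(tokens_per_stream)]
    return target, streams, onsets

target, streams, onsets = make_trial()
print(target, streams[target])
```

In the actual experiment each stream also carried a different pitch and spatial location; those acoustic dimensions are omitted here.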
"Unlike most experiments where you try to make it difficult for the listener to do the task, we tried to give every advantage we could," said Adrian K.C. Lee, a speech and hearing researcher at the university, who worked closely with Ross Maddox.
As expected, the ability to discern the oddball letter declined as the number of streams increased. But even with 12 streams, listeners identified it correctly around 70 percent of the time.
"We expected that 12 streams would have broken the upper limits of the [subject’s hearing] system," said Lee. "It is surprising that even with twelve things coming at you at the same time you can lock on to one with reasonably high accuracy."
The work was presented last month at the Acoustics 2012 Hong Kong conference.
Down the line, the researchers want to use these experiments to design a way for paralyzed patients to control a wheelchair or a computer using brain signals. Such devices, called brain-computer interfaces, have mostly relied on visual or motor stimuli. Typically, a subject might focus on a visual cue or imagine making a movement. Using a machine that detects brain signals, such as an electroencephalogram, researchers would attempt to characterize the brain responses connected with that task and translate them into commands. Focusing on an auditory signal, too, produces brain signals that can be characterized. However, the current study did not look at brain signals.
A very practical reason to look at auditory interfaces is that eye-gaze control — on which visually controlled interfaces are based — is often absent in people in a late stage of a neurodegenerative disease, said Martijn Schreuder, a researcher at the Berlin Institute of Technology.
Schreuder, who has worked on an interface where subjects spelled words by focusing on particular sounds, pointed out that auditory interfaces allow someone who is completely blind to communicate.
Schreuder said Lee’s work provides hints on “whether or not it’s good or bad to have different [audio] streams or whether it is good to have a quicker repetition or not.” To his knowledge, this is the first time researchers have gone up to 12 streams. Previous research included only two streams.
The other part Schreuder found interesting was how quickly the listeners learned how to discriminate between letter streams.
"There is a difference between being able to spell one letter every two minutes or spelling three letters per minute, which is the range [brain-computer interfaces] go," Schreuder said. "So if one selection takes 20 seconds, it’s worse than if it goes 10 seconds."
The University of Washington researchers are planning follow-up experiments to directly investigate how the brain responds to audio streams.
Provided by Inside Science News Service
Source: medicalxpress.com
ScienceDaily (June 11, 2012) — The world needs new antibiotics to overcome the ever increasing resistance of disease-causing bacteria — but it doesn’t need the side effect that comes with some of the most powerful ones now available: hearing loss. Today, researchers report they have developed a new approach to designing antibiotics that kill even “superbugs” but spare the delicate sensory cells of the inner ear.

These delicate hair cells from the inner ear of mice were tested to see the effects of powerful antibiotics on structures that are crucial to hearing. At left, cells that were exposed to the antibiotic gentamicin showed signs of high levels of damaging free radicals (seen in green). But cells treated with the veterinary drug apramycin, shown at right, didn’t show these effects — adding to evidence that this drug could be used to treat humans without damaging hearing. (Credit: University of Michigan, Schacht laboratory)
Surprisingly, they have found that apramycin, an antibiotic already used in veterinary medicine, fits this bill — setting the stage for testing in humans.
In a paper published online in the Proceedings of the National Academy of Sciences, a team from Switzerland, England and the University of Michigan show apramycin’s high efficacy against bacteria, and low potential for causing hearing loss, through a broad range of tests in animals. That testing platform is now being used to evaluate other potential antibiotics that could tackle infections such as multidrug-resistant tuberculosis.
The research aims to overcome a serious limitation of aminoglycoside antibiotics, a class of drugs which includes the widely used kanamycin, gentamicin and amikacin.
While great at stopping bacterial infections, these drugs also cause permanent partial hearing loss in 20 percent of people who take them for a short course, and up to 100 percent of people who take them over months or years, for example to treat tuberculosis or lung infections in cystic fibrosis.
U-M researcher Jochen Schacht, Ph.D., a professor of biological chemistry and otolaryngology and director of the Kresge Hearing Research Institute at the U-M Medical School, has spent decades studying why these drugs cause this “ototoxicity” — a side effect that makes doctors hesitant to prescribe them. Hearing damage has also caused patients to discontinue treatment before their antibiotic prescription is over, potentially allowing drug-resistant strains of bacteria to flourish.
Schacht has found that the drugs produce damaging free radicals inside the hair cells of the inner ear. Hair cells, named for the tiny sound-sensing hairs on their surface, are the linchpin of hearing — and once destroyed, cannot be regrown.
In the new paper, Schacht and his research group joined teams led by University of Zurich microbiologist Erik Böttger, and structural biologist and Nobel Prize winner Venkatraman Ramakrishnan of England’s Medical Research Council Laboratory of Molecular Biology, as well as scientists from ETH Zurich. Each team brought its particular expertise to the issue, and after four years of work they developed and tested this new approach to designing antibiotics.
"Aminoglycosides are some of the most valuable broad-spectrum antibiotics and indispensable drugs today, but we need new options to combat drug-resistant bacteria. Importantly, we must find ways to overcome their ototoxicity," Schacht says. "Instead of the trial-and-error approach of the past, this new hypothesis-driven tactic allows us to design drugs with simultaneous attention toward both antibacterial action and impact on hair cells."
According to the World Health Organization, about 440,000 new cases of multidrug-resistant tuberculosis emerge annually, causing at least 150,000 deaths worldwide. Aminoglycoside antibiotics, while carefully controlled in the U.S., Europe, and other developed countries, are available over the counter in many developing nations, leading to overuse that makes it even easier for drug-resistant strains of many kinds of bacteria to emerge and spread.
The new paper outlines a rational approach to designing drugs to combat this threat without ototoxicity, based on a theoretical framework that emerged from the work of the three laboratories and centers on the role of ribosomes, the structures inside the cell that “read” the genetic message carried by messenger RNA and translate it into proteins. Böttger’s lab, at the Institut für Medizinische Mikrobiologie, which he directs, studies aminoglycoside effects on mitochondrial ribosomes and antibacterial activity with an eye toward designing new drugs. Ramakrishnan’s lab studies ribosome structure, and scientists from ETH Zurich contributed as well.
Aminoglycosides bind to the ribosomes inside bacterial cells, blocking their ability to produce proteins. While the drugs spare most human ribosomes, they can also attach to the ribosomes in cells’ mitochondria, which resemble bacterial ribosomes.
Consistent with U-M-generated theories about ototoxicity, the drugs then cause the production of free radicals in such quantities that they overwhelm the hair cells’ defense mechanisms — destroying the cells and causing hearing loss.
The team’s approach is to design drugs that more specifically target bacterial ribosomes over mitochondrial ribosomes, simultaneously testing the impact on hair cells as well as the ability to kill bacteria. In this way, the researchers try to avoid creating antibiotics that harm hearing.
They are already using the platform employed for this study — which involves cells from mouse ears, along with tests of hearing and hair cell damage in guinea pigs — to test other promising compounds synthesized on the basis of the same theoretical framework.
Meanwhile, the team hopes to launch a clinical trial of apramycin, an antibiotic that could prove immediately useful because multidrug-resistant TB and lung-infecting bacteria have not shown resistance to the drug yet.
The research also lends more evidence to support the use of antioxidants to protect the hearing of patients taking current aminoglycoside antibiotics. Schacht has already led a clinical trial in China that showed a major reduction in hearing loss if aspirin was given at the same time as aminoglycoside antibiotics. “This kind of protection is important, while we search for the long-term answer to drug resistance without ototoxicity,” he says.
Source: Science Daily
June 11, 2012
Researchers at the Martinos Center for Biomedical Imaging at Massachusetts General Hospital have identified a portion of the brain responsible for determining how far away a sound originates, a process that does not rely solely on how loud the sound is. The investigators’ report, which will appear in the early edition of the Proceedings of the National Academy of Sciences, is receiving early online release this week.

This is an image of human cerebral cortex, digitally “inflated” to smooth out normal folds and ridges, showing in red the portion of auditory cortex that responds to the distance from which sounds arrive. Credit: Jyrki Ahveninen, Ph.D., Martinos Center for Biomedical Imaging, Massachusetts General Hospital
"Although sounds get louder when the source approaches us, humans are able to discriminate between loud sounds that come from far away and softer sounds from a closer source, suggesting that our brains use distance cues independent of loudness," says Jyrki Ahveninen, PhD, of the Martinos Center, senior author of the PNAS report. "Using functional MRI we found a group of neurons in the auditory cortex sensitive to the distance of sound sources and different from those that process changes in loudness. In addition to providing basic scientific information, our results could help future studies of hearing disorders.”
The human brain has distinct areas for processing sensory information – signals responsible for vision, hearing, taste, etc. Studies of the visual cortex, located at the back of the brain, have produced detailed maps of areas handling particular portions of the visual field. But understanding of the auditory cortex, located on the side of the head above and behind the ear, is quite limited. While it is known that the portion of the auditory cortex extending toward the back of the head determines where a sound comes from, exactly how the brain translates complex auditory signals to determine both the location and distance from which a sound originates is not yet known.
In their search for auditory neurons that process sound distance, the research team faced some particular challenges. In research laboratories that study hearing, sounds must be delivered to study participants through headphones, which means the acoustical “space” in which a sound is generated must be simulated. This must be done with exquisite accuracy, since environmental aspects causing sound to reverberate probably contribute to distance perception. Since the MRI equipment itself generates a loud noise, the researchers scanned participants’ brains once every 12 seconds to measure responses to sounds presented during intervening quiet periods.
In the first experiment, study participants – 12 adults with normal hearing – listened to a series of paired sounds of varying degrees of loudness and at simulated distances ranging from 15 to 100 cm and were asked to indicate whether the second sound was closer or farther away than the first. Although the differences in loudness varied randomly, participants were quite accurate in distinguishing the simulated distances of the sounds. Acoustical analysis of the particular sound cues presented indicated that the reverberations produced by a sound, which are more pronounced in a closed environment and for sounds traveling farther, may be more important distance cues than are the differences between sounds perceived by a participant’s two ears.
After the first experiment confirmed the accuracy of the simulated acoustical environment, functional MR images taken while participants listened to another series of paired sounds recorded how activity in the auditory cortex changed in response to sounds of varying loudness and direction, as well as during sounds of constant level and during silence. The resulting images identified a small area that appears to be sensitive to cues indicating distance but not loudness. As far as the investigators know, this is the first time neurons sensitive to sound-source distance have been discovered.
"The identified area is located near other auditory cortical areas that process spatial information," says corresponding author Norbert Kopco, PhD. "This is consistent with a general model of perceptual processing in the brain, suggesting that in hearing, as in vision and other senses, spatial information is processed separately from information about the object’s identity or characteristics such as the musical pitch of sound. Our study also illustrates how important it is to combine expertise from different fields – in our case imaging/physiology, psychology, and computational neuroscience – to advance our understanding of such a complex system as the human brain.”
Provided by Massachusetts General Hospital
Source: medicalxpress.com
ScienceDaily (May 21, 2012) — Seventy-two percent of teenagers participating in a study experienced reduced hearing ability following exposure to a pop rock performance by a popular female singer.

(Credit: © DWP / Fotolia)
M. Jennifer Derebery, MD, House Clinic physician, along with the House Research Institute, tested teens’ hearing before and after a concert and presented the study findings at the American Otological Society meeting on April 21, 2012. The study has been accepted for publication in an upcoming issue of Otology & Neurotology.
The hearing loss that may be experienced after a pop rock concert is not generally believed to be permanent. It is called a temporary threshold shift and usually disappears within 16-48 hours, after which a person’s hearing returns to previous levels.
“Teenagers need to understand a single exposure to loud noise either from a concert or personal listening device can lead to hearing loss,” said M. Jennifer Derebery, MD, lead author and physician at the House Clinic. “With multiple exposures to noise over 85 decibels, the tiny hair cells may stop functioning and the hearing loss may be permanent.”
In the study, twenty-nine teenagers were given free tickets to a rock concert. To ensure a similar level of noise exposure, the teens were seated in two blocks of seats close to each other, in front of the stage at the far end of the venue, approximately 15-18 rows up from the floor.
Parental consent was obtained for all of the underage study participants. The importance of using hearing protection was explained to the teenagers. Researchers then offered hearing protection to the subjects and encouraged them to use the foam ear plugs. However, only three teenagers chose to do so.
Three adult researchers sat with the teenagers. Using a calibrated sound pressure meter, they recorded 1,645 measurements of A-weighted sound levels (dBA) during the 26 songs played over the three-hour concert. The sound levels ranged from 82 to 110 dBA, with an average of 98.5 dBA; the mean level exceeded 100 dBA for 10 of the 26 songs.
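As an aside on the arithmetic: decibel readings are logarithmic, so an "average" level is normally computed as an energy average (the equivalent continuous level, Leq) rather than a simple mean of the dB values. A minimal sketch of that calculation, with made-up readings for illustration (the study's exact averaging method is not described):

```python
# Energy-average a list of dB readings (equivalent continuous level, Leq).
# The readings passed in below are invented examples, not the study's data.
import math

def leq(levels_db):
    """Return the energy average (Leq) of a list of dB readings."""
    mean_power = sum(10 ** (level / 10) for level in levels_db) / len(levels_db)
    return 10 * math.log10(mean_power)

print(leq([90.0, 90.0]))   # 90.0: equal readings average to themselves
print(leq([82.0, 110.0]))  # ~107.0: the loud reading dominates the average
```

Because loud moments dominate an energy average, a few 110 dBA peaks pull the figure up far more than quiet passages pull it down.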
The decibel levels at the concert exceeded what is allowable in the workplace under Occupational Safety and Health Administration (OSHA) rules. OSHA guidelines set time limits for workplace exposure to sound levels of 85 dB and greater. At the volumes recorded during the concert, those limits would have been exceeded in less than 30 minutes. In fact, one third of the teen listeners showed a temporary threshold shift that would not be acceptable in adult workplace environments.
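The OSHA limits referenced above follow a published rule (29 CFR 1910.95: 90 dBA is permitted for 8 hours, and each 5 dB increase halves the allowed time). A quick sketch applying that rule to the levels reported in the article:

```python
# OSHA permissible-exposure duration under the 90 dBA / 5 dB exchange-rate
# rule from 29 CFR 1910.95. Inputs below are the concert levels in the article.

def osha_permissible_hours(level_dba):
    """Hours of exposure allowed at a given A-weighted sound level."""
    return 8.0 / 2 ** ((level_dba - 90.0) / 5.0)

print(osha_permissible_hours(98.5))   # ~2.5 hours at the concert's average level
print(osha_permissible_hours(110.0))  # 0.5 hours at the loudest measured level
```

At the 110 dBA peaks the allowance is only half an hour, consistent with the article's statement that the limits would have been exceeded in under 30 minutes.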
Following the concert, the majority of the study participants also showed a significant reduction on the Distortion Product Otoacoustic Emissions (DPOAE) test. This test checks the function of the tiny outer hair cells in the inner ear, which are believed to be the most vulnerable to damage from prolonged noise exposure. These cells are crucial to normal hearing, to the ability to hear soft (low-level) sounds, and to the ability to understand speech, especially in noisy environments. With exposure to loud noise, the outer hair cells show a reduction in their ability to function, from which they may later recover. However, with repeated exposure to loud noise, the hair cells may become permanently damaged. Recent animal research suggests that even a single exposure to loud noise may permanently damage the nerve connections that are necessary to hear sound.
Following the concert, 53.6 percent of the teens said they did not think they were hearing as well after the concert. Twenty-five percent reported they were experiencing tinnitus or ringing in their ears, which they did not have before the concert.
Researchers are especially concerned because, in the most recent U.S. government health survey, the National Health and Nutrition Examination Survey (NHANES) 2005-2006, 20% of adolescents were found to have at least slight hearing loss, a 31% increase from a similar survey conducted in 1988-1994.
The findings of the study clearly indicate that more research is necessary to determine whether the guidelines for noise exposure need to be revised for teenagers, and whether teenagers’ ears are more sensitive to noise than adults’.
“It also means we definitely need to be doing more to ensure the sound levels at concerts are not so loud as to cause hearing loss and neurological damage in teenagers, as well as adults,” said Derebery. “Only 3 of our 29 teens chose to use ear protection, even when it was given to them and they were encouraged to do so. We have to assume this is typical behavior for most teen listeners, so we have the responsibility to get the sound levels down to safer levels.”
Researchers recommend that teenagers and young adults take an active role in protecting their hearing by using one of the sound meter ‘apps’ available for smart phones. These give a rough estimate of the noise level, allowing listeners to take steps to protect their hearing, such as wearing ear plugs at a concert.
In addition, Derebery and the study co-authors would like to see concert promoters and the musicians themselves take steps to lower sound levels as well as encourage young concert goers to use hearing protection.
Source: Science Daily
ScienceDaily (May 10, 2012) — Research into hearing loss after exposure to loud noises could lead to the first drug treatments to prevent the development of tinnitus.
Researchers in the University of Leicester’s Department of Cell Physiology and Pharmacology have identified a cellular mechanism that could underlie the development of tinnitus following exposure to loud noises. The discovery could lead to novel tinnitus treatments, and investigations into potential drugs to prevent tinnitus are currently underway.
Tinnitus is a sensation of phantom sounds, usually ringing or buzzing, heard in the ears when no external noise is present. It commonly develops after exposure to loud noises (acoustic over-exposure), and scientists have speculated that it results from damage to nerve cells connected to the ears.
Although hearing loss and tinnitus affect around ten percent of the population, there are currently no drugs available to treat or prevent tinnitus.
University of Leicester researcher Dr Martine Hamann, who led the study published in the journal Hearing Research, said: “We need to know the implications of acoustic over exposure, not only in terms of hearing loss but also what’s happening in the brain and central nervous system. It’s believed that tinnitus results from changes in excitability in cells in the brain — cells become more reactive, in this case more reactive to an unknown sound.”
Dr Hamann and her team, including PhD student Nadia Pilati, looked at cells in an area of the brain called the dorsal cochlear nucleus — the relay carrying signals from nerve cells in the ear to the parts of the brain that decode and make sense of sounds. Following exposure to loud noises, some of the nerve cells (neurons) in the dorsal cochlear nucleus start to fire erratically, and this uncontrolled activity eventually leads to tinnitus.
Dr Hamann said: “We showed that exposure to loud sound triggers hearing loss a few days after the exposure to the sound. It also triggers this uncontrolled activity in the neurons of the dorsal cochlear nucleus. This is all happening very quickly, in a matter of days.”
In a key breakthrough, made in collaboration with GSK, which sponsored Dr Pilati’s PhD, the team also discovered the specific cellular mechanism that leads to the neurons’ over-activity. Malfunctions in specific potassium channels that help regulate the nerve cells’ electrical activity mean the neurons cannot return to an equilibrium resting state.
Ordinarily, these cells fire only in regular patterns and reliably return to a resting state between firings. If the potassium channels are not working properly, however, the cells cannot return to rest and instead fire continuously in random bursts, creating the sensation of constant noise when none exists.
Dr Hamann explained: “In normal conditions the channel helps to drag down the cellular electrical activity to its resting state and this allows the cell to function with a regular pattern. After exposure to loud sound, the channel is functioning less and therefore the cell is constantly active, being unable to reach its resting state and displaying those irregular bursts.”
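The channel behaviour Dr Hamann describes can be caricatured with a toy adaptive integrate-and-fire neuron. This is an illustration of the general idea only, not the published model, and every parameter value below is invented. A spike-triggered, potassium-like recovery current drags the voltage back toward rest; weakening it leaves the cell firing almost continuously:

```python
# Toy adaptive integrate-and-fire neuron (illustration only; not the published
# model, and all parameter values are invented). The variable "w" plays the
# role of a potassium-like recovery current that pulls the cell back to rest
# after each spike; shrinking its spike-triggered boost mimics a
# malfunctioning channel.

def count_spikes(g_adapt, t_max=1.0, dt=1e-4):
    """Spikes fired in t_max seconds of constant drive (Euler integration)."""
    v, w, spikes = 0.0, 0.0, 0       # voltage, adaptation current, spike count
    v_thresh, drive = 1.0, 1.5       # firing threshold and constant input
    tau_v, tau_w = 0.02, 0.1         # membrane / adaptation time constants
    for _ in range(int(t_max / dt)):
        v += (-v - w + drive) / tau_v * dt
        w += -w / tau_w * dt
        if v >= v_thresh:            # spike: reset voltage, boost adaptation
            spikes += 1
            v = 0.0
            w += g_adapt
    return spikes

healthy = count_spikes(g_adapt=2.0)   # strong "potassium" recovery current
impaired = count_spikes(g_adapt=0.2)  # weakened channel after noise damage
print(healthy, impaired)              # the impaired cell fires far more often
```

With a strong recovery current the cell fires a few well-spaced spikes and rests in between; with a weak one it never settles, echoing the irregular continuous activity described above.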
Although many researchers have investigated the mechanisms underlying tinnitus, this is the first time that cellular bursting activity has been characterised and linked to specific potassium channels. Identifying the potassium channels involved in the early stages of tinnitus opens up new possibilities for preventing tinnitus with early drug treatments.
Dr Hamann’s team is currently investigating potential drugs that could regulate the damaged cells, preventing their erratic firing and returning them to a resting state. If suitable drug compounds are discovered, they could be given to patients who have been exposed to loud noises to protect them against the onset of tinnitus.
These investigations are still in the preliminary stages, and any drug treatment would still be years away.
Source: Science Daily