Posts tagged hearing

(Image caption: This microscope image of tissue from deep inside a normal mouse ear shows how ribbon synapses (red) form the connections between the hair cells of the inner ear (blue) and the tips of nerve cells (green) that connect to the brain. Credit: Corfas Lab, University of Michigan)
Scientists Restore Hearing in Noise-Deafened Mice, Pointing Way to New Therapies
Scientists have restored the hearing of mice partly deafened by noise, using advanced tools to boost the production of a key protein in their ears.
By demonstrating the importance of the protein, called NT3, in maintaining communication between the ears and brain, these new findings pave the way for research in humans that could improve treatment of hearing loss caused by noise exposure and normal aging.
In a new paper in the online journal eLife, the team from the University of Michigan Medical School’s Kresge Hearing Research Institute and Harvard University report the results of their work to understand NT3’s role in the inner ear, and the impact of increased NT3 production on hearing after a noise exposure.
Their work also illustrates the key role of cells that have traditionally been seen as the “supporting actors” of the ear-brain connection. Called supporting cells, they form a physical base for the hearing system’s “stars”: the hair cells in the ear that interact directly with the nerves that carry sound signals to the brain. This new research identifies the critical role of these supporting cells along with the NT3 molecules that they produce.
NT3 is crucial to the body’s ability to form and maintain connections between hair cells and nerve cells, the researchers demonstrate. This special type of connection, called a ribbon synapse, allows extra-rapid communication of signals that travel back and forth across tiny gaps between the two types of cells.
“It has become apparent that hearing loss due to damaged ribbon synapses is a very common and challenging problem, whether it’s due to noise or normal aging,” says Gabriel Corfas, Ph.D., who led the team and directs the U-M institute. “We began this work 15 years ago to answer very basic questions about the inner ear, and now we have been able to restore hearing after partial deafening with noise, a common problem for people. It’s very exciting.”
Using a special genetic technique, the researchers made it possible for some mice to produce additional NT3 in cells of specific areas of the inner ear after they were exposed to noise loud enough to reduce hearing. Mice with extra NT3 regained their ability to hear much better than the control mice.
Now, says Corfas, his team will explore the role of NT3 in human ears, and seek drugs that might boost NT3 action or production. While the use of such drugs in humans could be several years away, the new discovery gives them a specific target to pursue.
Corfas, a professor and associate chair in the U-M Department of Otolaryngology, worked on the research with first author Guoqiang Wan, Ph.D., Maria E. Gómez-Casati, Ph.D., and others at his former institution, Harvard. Some of the authors now work with Corfas in his new U-M lab. They set out to discover how ribbon synapses – which are found only in the ear and eye – form, and what molecules are important to their formation and maintenance.
Anyone who has experienced problems making out the voice of the person next to them in a crowded room has felt the effects of reduced ribbon synapses. So has anyone who has experienced temporary reduction in hearing after going to a loud concert. The damage caused by noise – over a lifetime or just one evening – reduces the ability of hair cells to talk to the brain via ribbon synapse connections with nerve cells.
Targeted genetics made discovery possible
After determining that inner ear supporting cells supply NT3, the team turned to a technique called conditional gene recombination to see what would happen if they boosted NT3 production by the supporting cells. The approach allows scientists to activate genes in specific cells by giving a dose of a drug that triggers the cells to “read” extra copies of a gene that has been inserted into them. For this research, the scientists activated the extra NT3 genes only in the inner ear’s supporting cells.
The genes didn’t turn on until the scientists wanted them to – either before or after they exposed the mice to loud noises. The scientists turned on the NT3 genes by giving a dose of the drug tamoxifen, which triggered the supporting cells to make more of the protein. Before and after this step, they tested the mice’s hearing using an approach called auditory brainstem response or ABR – the same test used on humans.
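Conceptually, the ABR readout reduces to a threshold search: present a tone at successively lower sound levels and take the lowest level whose averaged brainstem response still stands clearly above the noise floor. A minimal sketch of that idea (the response model, levels, and noise floor here are invented for illustration, not real ABR data):

```python
def abr_threshold(levels_db, peak_amplitude, noise_floor=0.05):
    """Lowest presented level (dB SPL) whose ABR peak exceeds the noise floor."""
    audible = [db for db in levels_db if peak_amplitude(db) > noise_floor]
    return min(audible) if audible else None

# A hypothetical ear whose ABR peaks shrink to nothing below ~35 dB SPL.
def toy_response(db):
    return max(0.0, (db - 35) * 0.02)

print(abr_threshold(range(90, 0, -10), toy_response))   # → 40
```

Comparing such thresholds before and after noise exposure is how a hearing shift, and any recovery, is quantified in both mice and humans.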
The result: the mice with extra NT3 regained their hearing over a period of two weeks, and were able to hear much better than mice without the extra NT3 production. The scientists also did the same with another nerve cell growth factor, or neurotrophin, called BDNF, but did not see the same effect on hearing.
Next steps
Now that NT3’s role in making and maintaining ribbon synapses has become clear, Corfas says the next challenge is to study it in human ears, and to look for drugs that can work like NT3 does. Corfas has some drug candidates in mind, and hopes to partner with industry to look for others.
Boosting NT3 production through gene therapy in humans could also be an option, he says, but a drug-based approach would be simpler and could be administered for as long as it takes to restore hearing.
Corfas notes that the mice in the study were not completely deafened, so it’s not yet known if boosting NT3 activity could restore hearing that has been entirely lost. He also notes that the research may have implications for other diseases in which nerve cell connections are lost – called neurodegenerative diseases. “This brings supporting cells into the spotlight, and starts to show how much they contribute to plasticity, development and maintenance of neural connections,” he says.
Infant Cooing, Babbling Linked to Hearing Ability
Infants’ vocalizations throughout the first year follow a set of predictable steps from crying and cooing to forming syllables and first words. However, previous research had not addressed how the amount of vocalizations may differ between hearing and deaf infants. Now, University of Missouri research shows that infant vocalizations are primarily motivated by infants’ ability to hear their own babbling. Additionally, infants with profound hearing loss who received cochlear implants to help correct their hearing soon reached the vocalization levels of their hearing peers, putting them on track for language development.
“Hearing is a critical aspect of infants’ motivation to make early sounds,” said Mary Fagan, an assistant professor in the Department of Communication Science and Disorders in the MU School of Health Professions. “This study shows babies are interested in speech-like sounds and that they increase their babbling when they can hear.”
Fagan studied the vocalizations of 27 hearing infants and 16 infants with profound hearing loss who were candidates for cochlear implants, which are small electronic devices embedded into the bone behind the ear that replace some functions of the damaged inner ear. She found that infants with profound hearing loss vocalized significantly less than hearing infants. However, when the infants with profound hearing loss received cochlear implants, the infants’ vocalizations increased to the same levels as their hearing peers within four months of receiving the implants.
“After the infants received their cochlear implants, the significant difference in overall vocalization quantity was no longer evident,” Fagan said. “These findings support the importance of early hearing screenings and early cochlear implantation.”
Fagan found that non-speech-like sounds, such as crying, laughing and raspberry sounds, were not affected by infants’ hearing ability. She says this finding highlights babies’ particular interest in speech-like sounds, since it is their production of sounds such as babbling that increases when they can hear.
“Babies learn so much through sound in the first year of their lives,” Fagan said. “We know learning from others is important to infants’ development, but hearing allows infants to explore their own vocalizations and learn through their own capacity to produce sounds.”
In future research, Fagan hopes to study whether infants explore the sounds of objects such as musical toys to the same degree they explore vocalization.
Fagan’s research, “Frequency of vocalization before and after cochlear implantation: Dynamic effect of auditory feedback on infant behavior,” was published in the Journal of Experimental Child Psychology.
(Image caption: The hair cells of mice missing just Hey2 are neatly lined up in four rows (left) while those missing Hey1 and Hey2 are disorganized (right). The cells’ hairlike protrusions (pink) can be misoriented, too. Credit: Angelika Doetzlhofer)
Hey1 and Hey2 ensure inner ear ‘hair cells’ are made at the right time, in the right place
Two Johns Hopkins neuroscientists have discovered the “molecular brakes” that time the generation of important cells in the inner ear cochleas of mice. These “hair cells” translate sound waves into electrical signals that are carried to the brain and are interpreted as sounds. If the arrangement of the cells is disordered, hearing is impaired.
A summary of the research will be published in The Journal of Neuroscience on Sept. 16.
"The proteins Hey1 and Hey2 act as brakes to prevent hair cell generation until the time is right," says Angelika Doetzlhofer, Ph.D., an assistant professor of neuroscience. "Without them, the hair cells end up disorganized and dysfunctional."
The cochlea is a coiled, fluid-filled structure bordered by a flexible membrane that vibrates when sound waves hit it. This vibration is passed through the fluid in the cochlea and sensed by specialized hair cells that line the tissue in four precise rows. Their name comes from the cells’ hairlike protrusions that detect movement of the cochlear fluid and create electrical signals that relay the sound to the brain.
During development, “parent cells” within the cochlea gradually differentiate into hair cells in a precise sequence, starting with the cells at the base of the cochlea and progressing toward its tip. The signaling protein Sonic Hedgehog was known to be released by nearby nerve cells in a time- and space-dependent pattern that matches that of hair cell differentiation. But the mechanism of Sonic Hedgehog’s action was unclear.
Doetzlhofer and postdoctoral fellow Ana Benito Gonzalez bred mice whose inner ear cells were missing Hey1 and Hey2, two genes known to be active in the parent cells but turned off in hair cells. They found that, without those genes, the cells were generated too early and were abnormally patterned: Rows of hair cells were either too many or too few, and their hairlike protrusions were often deformed and pointing in the wrong direction.
"While these mice didn’t live long enough for us to test their hearing, we know from other studies that mice with disorganized hair cell patterns have serious hearing problems," says Doetzlhofer.
Further experiments demonstrated the role of Sonic Hedgehog in regulating the two key genes.
"Hey1 and Hey2 stop the parent cells from turning into hair cells until the time is right," explains Doetzlhofer. "Sonic Hedgehog applies those ‘brakes,’ then slowly releases pressure on them as the cochlea develops. If the brakes stop working, the hair cells are generated too early and end up misaligned."
She adds that Sonic Hedgehog, Hey1 and Hey2 are found in many other parent cell types throughout the developing nervous system and may play similar roles in timing the generation of other cell types.
Stop and Listen: Study Shows How Movement Affects Hearing
When we want to listen carefully to someone, the first thing we do is stop talking. The second thing we do is stop moving altogether. This strategy helps us hear better by preventing unwanted sounds generated by our own movements.
This interplay between movement and hearing also has a counterpart deep in the brain. Indeed, indirect evidence has long suggested that the brain’s motor cortex, which controls movement, somehow influences the auditory cortex, which gives rise to our conscious perception of sound.
A new Duke study, appearing online August 27 in Nature, combines cutting-edge methods in electrophysiology, optogenetics and behavioral analysis to reveal exactly how the motor cortex, seemingly in anticipation of movement, can tweak the volume control in the auditory cortex.
The new lab methods allowed the group to “get beyond a century’s worth of very powerful but largely correlative observations, and develop a new, and really a harder, causality-driven view of how the brain works,” said the study’s senior author Richard Mooney, Ph.D., a professor of neurobiology at Duke University School of Medicine and a member of the Duke Institute for Brain Sciences.
The findings contribute to the basic knowledge of how communication between the brain’s motor and auditory cortexes might affect hearing during speech or musical performance. Disruptions to the same circuitry may give rise to auditory hallucinations in people with schizophrenia.
In 2013, researchers led by Mooney first characterized the connections between motor and auditory areas in mouse brain slices as well as in anesthetized mice. The new study answers the critical question of how those connections operate in an awake, moving mouse.
"This is a major step forward in that we’ve now interrogated the system in an animal that’s freely behaving," said David Schneider, a postdoctoral associate in Mooney’s lab.
Mooney suspects that the motor cortex learns how to mute responses in the auditory cortex to sounds that are expected to arise from one’s own movements while heightening sensitivity to other, unexpected sounds. The group is testing this idea.
"Our first step will be to start making more realistic situations where the animal needs to ignore the sounds that its movements are making in order to detect things that are happening in the world," Schneider said.
In the latest study, the team recorded electrical activity of individual neurons in the brain’s auditory cortex. Whenever the mice moved — walking, grooming, or making high-pitched squeaks — neurons in their auditory cortex were dampened in response to tones played to the animals, compared to when they were at rest.
To find out whether movement was directly influencing the auditory cortex, the researchers conducted a series of experiments in awake animals using optogenetics, a powerful method that uses light to control the activity of select populations of neurons that have been genetically sensitized to light. As in the game of telephone, sounds that enter the ear pass through six or more relays in the brain before reaching the auditory cortex.
"Optogenetics can be used to activate a specific relay in the network, in this case the penultimate node that relays signals to the auditory cortex," Mooney said.
About half of the suppression during movement was found to originate within the auditory cortex itself. “That says a lot of modulation is going on in the auditory cortex, and not just at earlier relays in the auditory system,” Mooney said.
More specifically, the team found that movement stimulates inhibitory neurons that in turn suppress the response of the auditory cortex to tones.
The researchers then wondered what turns on the inhibitory neurons. The suspects were many. “The auditory cortex is like this giant switching station where all these different inputs come through and say, ‘Okay, I want to have access to these interneurons,’” Mooney said. “The question we wanted to answer is who gets access to them during movement?”
The team knew from previous experiments that neuronal projections from the secondary motor cortex (M2) modulate the auditory cortex. But to isolate M2’s relative contribution — something not possible with traditional electrophysiology — the researchers again used optogenetics, this time to switch on and off the M2’s inputs to the inhibitory neurons.
Turning on M2 inputs reproduced a sense of movement in the auditory cortex, even in mice that were resting, the group found. “We were sending a ‘Hey I’m moving’ signal to the auditory cortex,” Schneider said. Then the effect of playing a tone on the auditory cortex was much the same as if the animal had actually been moving — a result that confirmed the importance of M2 in modulating the auditory cortex. On the other hand, turning off M2 simulated rest in the auditory cortex, even when the animals were still moving.
"I couldn’t contain my excitement when we first saw that result," said Anders Nelson, a neurobiology graduate student in Mooney’s group.
New Mapping Approach Lets Scientists Zoom In And Out As The Brain Processes Sound
Researchers at Johns Hopkins have mapped the sound-processing part of the mouse brain in a way that keeps both the proverbial forest and the trees in view. Their imaging technique allows zooming in and out on views of brain activity within mice, and it enabled the team to watch brain cells light up as mice “called” to each other. The results, which represent a step toward better understanding how our own brains process language, appear online July 31 in the journal Neuron.
In the past, researchers often studied sound processing in various animal brains by poking tiny electrodes into the auditory cortex, the part of the brain that processes sound. They then played tones and observed the response of nearby neurons, laboriously repeating the process over a gridlike pattern to figure out where the active neurons were. The neurons seemed to be laid out in neatly organized bands, each responding to a different tone. More recently, a technique called two-photon microscopy has allowed researchers to focus in on minute slices of the live mouse brain, observing activity in unprecedented detail. This newer approach has suggested that the well-manicured arrangement of bands might be an illusion. But, says David Yue, M.D., Ph.D., a professor of biomedical engineering and neuroscience at the Johns Hopkins University School of Medicine, “You could lose your way within the zoomed-in views afforded by two-photon microscopy and not know exactly where you are in the brain.” Yue led the study along with Eric Young, Ph.D., also a professor of biomedical engineering and a researcher in Johns Hopkins’ Institute for Basic Biomedical Sciences.
To get the bigger picture, John Issa, a graduate student in Yue’s lab, used a mouse genetically engineered to produce a molecule that glows green in the presence of calcium. Since calcium levels rise in neurons when they become active, neurons in the mouse’s auditory cortex glow green when activated by various sounds. Issa used a two-photon microscope to peer into the brains of live mice as they listened to sounds and saw which neurons lit up in response, piecing together a global map of a given mouse’s auditory cortex. “With these mice, we were able to both monitor the activity of individual populations of neurons and zoom out to see how those populations fit into a larger organizational picture,” he says.
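The “global map” is built by assigning each imaged neuron a best frequency: the test tone that evokes its largest fluorescence response, plotted against the neuron’s position in the cortex. A toy sketch of that readout on simulated data (the tuning model, tone set, and noise level are invented for illustration, not taken from the study):

```python
import math
import random

random.seed(1)
TONES = [4000, 8000, 16000, 32000, 64000]   # test tones (Hz), log-spaced

def response(neuron_pos, tone):
    """Simulated fluorescence response: Gaussian tuning (in octaves) around a
    preferred frequency that sweeps smoothly with position along the cortex."""
    preferred = 4000 * 2 ** (4 * neuron_pos)   # 4 kHz at one end, 64 kHz at the other
    octaves = math.log2(tone / preferred)
    return math.exp(-octaves**2 / 0.5) + random.gauss(0, 0.05)

def best_frequency(neuron_pos):
    """The tone that evokes this neuron's largest response."""
    return max(TONES, key=lambda t: response(neuron_pos, t))

# Zoomed-out view: best frequency rises steadily across the map (tonotopy).
for pos in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(pos, best_frequency(pos))
```

In this idealized model the best frequency climbs smoothly from 4 kHz to 64 kHz across the map; in real two-photon data, the zoomed-in neighborhoods are noisier, which is why holding both scales in one view matters.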
With these advances, Issa and the rest of the research team were able to see the tidy tone bands identified in earlier electrode studies. In addition, the new imaging platform quickly revealed more sophisticated properties of the auditory cortex, particularly as mice listened to the chirps they use to communicate with each other. “Understanding how sound representation is organized in the brain is ultimately very important for better treating hearing deficits,” Yue says. “We hope that mouse experiments like this can provide a basis for figuring out how our own brains process language and, eventually, how to help people with cochlear implants and similar interventions hear better.”
Yue notes that the same approach could also be used to understand other parts of the brain as they react to outside stimuli, such as the visual cortex and the parts of the brain responsible for processing stimuli from limbs.
The ear is an important organ that allows us to perceive the world around us. However, few of us are aware that not only the outer ear but also the skull bone can receive and conduct sound. Tatjana Tchumatchenko from the Max Planck Institute for Brain Research in Frankfurt and Tobias Reichenbach from Imperial College London have now developed a new model explaining how the vibrations of the surrounding bone and the basilar membrane are coupled. These new results could be important for the development of new headphones and hearing devices.
Our sense of hearing, which is the ability to perceive sounds, arises exclusively in the inner ear. When sound waves travel through the air and reach our ear canal, they cause different regions of the basilar membrane in the inner ear to vibrate. Which regions of the membrane vibrate depends on the frequency of the sound. It is these microscopic vibrations of the membrane that we perceive as sound. However, the inner ear is surrounded by a bone that can also vibrate.
With the help of fluid dynamics calculations Tchumatchenko and Reichenbach have now discovered that the vibrations of the bone and basilar membrane are coupled. In other words, they can also mutually excite each other.
This gives rise to fascinating phenomena which, thanks to the new model, can now be understood: For example, two sounds with slightly different frequencies that arrive in the inner ear at the same time can overlap and excite the same regions on the basilar membrane. In this case, combination tones, or so-called otoacoustic emissions, are produced in the inner ear through the nonlinearity of the membrane. Precisely how these sounds leave the inner ear and how they spread inside the cochlea is currently a matter of scientific debate. “In our study we have shown that the combination tones can leave the inner ear in the form of a fast wave along the bone surface, and not, as previously assumed, by a wave along the basilar membrane,” explains Tatjana Tchumatchenko from the Max Planck Institute for Brain Research.
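How a nonlinearity creates combination tones can be shown with a toy computation: feed two pure tones through a weakly nonlinear response and a new frequency appears at 2·f1 − f2 that was absent from the input. The generic cubic term below stands in for the membrane’s actual mechanics, and the sample rate, tone frequencies, and nonlinearity strength are all illustrative choices:

```python
import cmath
import math

FS = 8000            # sample rate (Hz)
N = 8000             # one second of samples
F1, F2 = 1000, 1200  # two primary tones (Hz)

def membrane(x, eps=0.1):
    """A weakly nonlinear 'membrane': linear response plus a small cubic term."""
    return x + eps * x**3

signal = [membrane(math.sin(2*math.pi*F1*t/FS) + math.sin(2*math.pi*F2*t/FS))
          for t in range(N)]

def amplitude(sig, f):
    """Magnitude of the component at frequency f (a single-bin DFT)."""
    acc = sum(s * cmath.exp(-2j*math.pi*f*t/FS) for t, s in enumerate(sig))
    return 2 * abs(acc) / len(sig)

# The cubic term creates a combination tone at 2*F1 - F2 = 800 Hz,
# a frequency that is entirely absent from the two-tone input.
print(round(amplitude(signal, 2*F1 - F2), 3))   # ~0.075, i.e. 0.75 * eps
```

A purely linear membrane (eps = 0) leaves the 800 Hz bin empty; the combination tone exists only because of the nonlinearity, which is the point at issue in how such emissions escape the inner ear.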
Moreover, the new model proves that the travelling waves along the basilar membrane can be generated by both the vibrations of the cochlear bone and the vibrations of the air inside the ear canal. “Our results provide an elegant explanation for this long-known but poorly understood observation,” says Tobias Reichenbach from Imperial College London.
These results will help advance our understanding of the complex interaction between the dynamics of fluids and the mechanics of the bone. This understanding can prove essential for future clinical and commercial applications of bone conduction, such as new-generation hearing aids and combinations of headphones and glasses.
People with tinnitus process emotions differently from their peers
Patients with persistent ringing in the ears – a condition known as tinnitus – process emotions differently in the brain from those with normal hearing, researchers report in the journal Brain Research.
Tinnitus afflicts 50 million people in the United States, according to the American Tinnitus Association, and causes those with the condition to hear noises that aren’t really there. These phantom sounds are not speech, but rather whooshing noises, train whistles, cricket noises or whines. Their severity often varies day to day.
University of Illinois speech and hearing science professor Fatima Husain, who led the study, said previous studies showed that tinnitus is associated with increased stress, anxiety, irritability and depression, all of which are affiliated with the brain’s emotional processing systems.
“Obviously, when you hear annoying noises constantly that you can’t control, it may affect your emotional processing systems,” Husain said. “But when I looked at experimental work done on tinnitus and emotional processing, especially brain imaging work, there hadn’t been much research published.”
She decided to use functional magnetic resonance imaging (fMRI) brain scans to better understand how tinnitus affects the brain’s ability to process emotions. These scans show the areas of the brain that are active in response to stimulation, based upon blood flow to those areas.
Three groups of participants were used in the study: people with mild-to-moderate hearing loss and mild tinnitus; people with mild-to-moderate hearing loss without tinnitus; and a control group of age-matched people without hearing loss or tinnitus. Each person was put in an fMRI machine and listened to a standardized set of 30 pleasant, 30 unpleasant and 30 emotionally neutral sounds (for example, a baby laughing, a woman screaming and a water bottle opening). The participants pressed a button to categorize each sound as pleasant, unpleasant or neutral.
The tinnitus and normal-hearing groups responded more quickly to emotion-inducing sounds than to neutral sounds, while patients with hearing loss had a similar response time to each category of sound. Overall, the tinnitus group’s reaction times were slower than the reaction times of those with normal hearing.
Activity in the amygdala, a brain region associated with emotional processing, was lower in the tinnitus and hearing-loss patients than in people with normal hearing. Tinnitus patients also showed more activity than normal-hearing people in two other brain regions associated with emotion, the parahippocampus and the insula. The findings surprised Husain.
“We thought that because people with tinnitus constantly hear a bothersome, unpleasant stimulus, they would have an even higher amount of activity in the amygdala when hearing these sounds, but it was lesser,” she said. “Because they’ve had to adjust to the sound, some plasticity in the brain has occurred. They have had to reduce this amygdala activity and reroute it to other parts of the brain because the amygdala cannot be active all the time due to this annoying sound.”
Because of the sheer number of people who suffer from tinnitus in the United States, a group that includes many combat veterans, Husain hopes her group’s future research will be able to increase tinnitus patients’ quality of life.
“It’s a communication issue and a quality-of-life issue,” she said. “We want to know how we can get better in the clinical realm. Audiologists and clinicians are aware that tinnitus affects emotional aspects, too, and we want to make them aware that these effects are occurring so they can better help their patients.”
Hearing protein required to convert sound into brain signals
A specific protein found in the bridge-like structures that make up part of the auditory machinery of the inner ear is essential for hearing. The absence of this protein or impairment of the gene that codes for this protein leads to profound deafness in mice and humans, respectively, reports a team of researchers in the journal EMBO Molecular Medicine.
“The goal of our study was to identify which isoform of protocadherin-15 forms the tip-links, the essential connections of the auditory mechanotransduction machinery within mature hair cells that are needed to convert sound into electrical signals,” remarks Christine Petit, the lead author of the study and Professor at the Institut Pasteur in Paris and at Collège de France.
Three types of protocadherin-15 are known to exist in auditory sensory cells of the inner ear but it was not clear which of these protein isoforms was essential for hearing. “Our work pinpoints the CD2 isoform of protocadherin-15 as an essential component of the tip-link and reveals that the absence of protocadherin-15 CD2 in mouse hair cells results in profound deafness.”
Within the hair bundle, the sensory antenna of auditory sensory cells, the tip-link is a bridge-like structure that when stretched can activate the ion channel responsible for generating electrical signals from sound. Tension in the tip-link created by sound stimulation opens this channel of unknown molecular composition, thus generating electrical signals and, ultimately, the perception of sound.
The researchers engineered mice that, in adulthood only, lack just the CD2 isoform of protocadherin-15. While the absence of this isoform led to profound deafness, the lack of the other protocadherin-15 isoforms did not affect the mice’s hearing.
Patients who carry a mutation in the gene encoding protocadherin-15 are affected by a rare devastating disorder, Usher syndrome, which is characterized by profound deafness, balance problems and gradual visual loss due to retinitis pigmentosa. In a separate approach, the scientists also sequenced the genes of 60 patients who had profound deafness without balance and visual impairment. Three of these patients were shown to have mutations specifically affecting protocadherin-15 CD2. “The demonstration of a requirement for protocadherin-15 CD2 for hearing not only in mice but also in humans constitutes a major step in the objective of deciphering the components of the auditory mechanotransduction machinery. This isoform can be used as a starting point to identify the other components of the auditory machinery. By focusing our attention on the CD2 isoform of protocadherin-15, we can now consider developing gene therapy strategies for deafness caused by defects in this gene,” says EMBO Member Christine Petit.
The ability to hear soft speech in a noisy environment is difficult for many and nearly impossible for the 48 million in the United States living with hearing loss. Researchers from the Massachusetts Eye and Ear, Harvard Medical School and Harvard University programmed a new type of game that trained both mice and humans to enhance their ability to discriminate soft sounds in noisy backgrounds. Their findings will be published in PNAS Online Early Edition the week of June 9-13, 2014.

In the experiment, adult humans and mice with normal hearing were trained on a rudimentary ‘audiogame’ inspired by sensory foraging behavior that required them to discriminate changes in the loudness of a tone presented in a moderate level of background noise. Their findings suggest new therapeutic options for clinical populations that receive little benefit from conventional sensory rehabilitation strategies.
“Like the children’s game ‘hot and cold’, our game provided instantaneous auditory feedback that allowed our human and mouse subjects to home in on the location of a hidden target,” said senior author Daniel Polley, Ph.D., director of the Mass. Eye and Ear’s Amelia Peabody Neural Plasticity Unit of the Eaton-Peabody Laboratories and assistant professor of otology and laryngology at Harvard Medical School. “Over the course of training, both species learned adaptive search strategies that allowed them to more efficiently convert noisy, dynamic audio cues into actionable information for finding the target. To our surprise, human subjects who mastered this simple game over the course of 30 minutes of daily training for one month exhibited a generalized improvement in their ability to understand speech in noisy background conditions. Comparable improvements in the processing of speech in high levels of background noise were not observed for control subjects who heard the sounds of the game but did not actually play the game.”
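The game mechanic Polley describes, continuously converting a loudness cue into a search step, can be sketched in a few lines. This is a deterministic toy: the cue mapping, step rule, and numbers are invented for illustration, and the actual task used noisy, dynamic audio cues:

```python
import random

random.seed(7)
target = random.uniform(20, 80)   # hidden target location

def cue_loudness(pos):
    """Feedback cue: the farther from the hidden target, the louder the cue."""
    return abs(pos - target)

# 'Hot and cold' search: try a step in each direction, keep whichever position
# quiets the cue, and shrink the step as the search closes in on the target.
pos, step = 50.0, 8.0
for _ in range(30):
    pos = min((pos - step, pos, pos + step), key=cue_loudness)
    step *= 0.8

print(abs(pos - target) < 1.0)   # the listener has homed in on the target
```

Mastering this kind of loop requires continuously extracting a small loudness difference from the cue stream, which is plausibly why the training generalized to hearing speech in noise.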
The researchers recorded the electrical activity of neurons in auditory regions of the mouse cerebral cortex to gain some insight into how training might have boosted the ability of the brain to separate signal from noise. They found that training substantially altered the way the brain encoded sound.
In trained mice, many neurons became highly sensitive to faint sounds that signaled the location of the target in the game. Moreover, neurons displayed increased resistance to noise suppression; they retained an ability to encode faint sounds even under conditions of elevated background noise.
“Again, changes of this ilk were not observed in control mice that watched (and listened) to their counterparts play the game. Active participation in the training was required; passive listening was not enough,” Dr. Polley said.
These findings illustrate the utility of brain training exercises that are inspired by careful neuroscience research. “When combined with conventional assistive devices such as hearing aids or cochlear implants, ‘audiogames’ of the type we describe here may be able to provide the hearing impaired with an improved ability to reconnect to the auditory world. Of particular interest is the finding that brain training improved speech processing in noisy backgrounds – a listening environment where conventional hearing aids offer limited benefit,” concluded Dr. Jonathon Whitton, lead author on the paper. Dr. Whitton is a principal investigator at the Amelia Peabody Neural Plasticity Unit and affiliated with the Program in Speech and Hearing Bioscience and Technology in the Harvard-MIT Division of Health Sciences and Technology.
(Source: masseyeandear.org)
‘Seeing is believing’, so the idiom goes, but new research suggests vision also involves a bit of hearing.

Scientists studying the brain processes involved in sight have found that the visual cortex uses information gleaned from the ears as well as the eyes when viewing the world.
They suggest this auditory input enables the visual system to predict incoming information and could confer a survival advantage.
Professor Lars Muckli, of the Institute of Neuroscience and Psychology at the University of Glasgow, who led the research, said: “Sounds create visual imagery, mental images, and automatic projections.
“So, for example, if you are in a street and you hear the sound of an approaching motorbike, you expect to see a motorbike coming around the corner. If it turned out to be a horse, you’d be very surprised.”
The study, published in the journal Current Biology, involved conducting five different experiments using functional Magnetic Resonance Imaging (fMRI) to examine the activity in the early visual cortex in 10 volunteer subjects.
In one experiment they asked the blindfolded volunteers to listen to three different sounds – birdsong, traffic noise and a talking crowd.
Using a special algorithm that can identify unique patterns in brain activity, the researchers were able to discriminate between the different sounds being processed in early visual cortex activity.
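The “special algorithm” is a multivariate pattern classifier: it learns which spatial pattern of voxel activity each sound evokes, then labels new scans by pattern similarity. A minimal sketch of the idea using a correlation-based nearest-centroid decoder on simulated voxel patterns (the three sound categories come from the study; the voxel count, noise level, and decoder itself are illustrative assumptions, not the authors’ actual pipeline):

```python
import math
import random

random.seed(0)
SOUNDS = ["birdsong", "traffic", "crowd"]   # the three sound categories
N_VOXELS = 50                               # illustrative voxel count

# Hypothetical spatial pattern each sound evokes in early visual cortex.
prototype = {s: [random.gauss(0, 1) for _ in range(N_VOXELS)] for s in SOUNDS}

def trial(sound, noise=0.8):
    """One noisy fMRI measurement of the pattern evoked by `sound`."""
    return [v + random.gauss(0, noise) for v in prototype[sound]]

def correlation(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma)**2 for x in a) *
                           sum((y - mb)**2 for y in b))

# Train: average 20 trials per sound into a centroid pattern.
centroid = {s: [sum(col) / len(col) for col in zip(*[trial(s) for _ in range(20)])]
            for s in SOUNDS}

def decode(pattern):
    """Label a new scan by its most correlated centroid."""
    return max(SOUNDS, key=lambda s: correlation(pattern, centroid[s]))

# Test on fresh, held-out trials.
correct = sum(decode(trial(s)) == s for s in SOUNDS for _ in range(30))
print(f"decoding accuracy: {correct}/90")   # well above the 1/3 chance level
```

The key inference is the same as in the paper: if held-out scans can be labeled above chance from early visual cortex activity alone, that region must carry sound-specific information.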
A second experiment revealed that even imagined images, in the absence of both sight and sound, evoked activity in the early visual cortex.
Lars Muckli said: “This research enhances our basic understanding of how interconnected different regions of the brain are. The early visual cortex hasn’t previously been known to process auditory information, and while there is some anatomical evidence of interconnectedness in monkeys, our study is the first to clearly show a relationship in humans.
“In future we will test how this auditory information supports visual processing, but the assumption is it provides predictions to help the visual system to focus on surprising events which would confer a survival advantage.
“This might provide insights into mental health conditions such as schizophrenia or autism and help us understand how sensory perceptions differ in these individuals.”
(Source: gla.ac.uk)