Posts tagged inner ear

(Image caption: The hair cells of mice missing just Hey2 are neatly lined up in four rows (left) while those missing Hey1 and Hey2 are disorganized (right). The cells’ hairlike protrusions (pink) can be misoriented, too. Credit: Angelika Doetzlhofer)
Hey1 and Hey2 ensure inner ear ‘hair cells’ are made at the right time, in the right place
Two Johns Hopkins neuroscientists have discovered the “molecular brakes” that time the generation of important cells in the inner ear cochleas of mice. These “hair cells” translate sound waves into electrical signals that are carried to the brain and are interpreted as sounds. If the arrangement of the cells is disordered, hearing is impaired.
A summary of the research will be published in The Journal of Neuroscience on Sept. 16.
"The proteins Hey1 and Hey2 act as brakes to prevent hair cell generation until the time is right," says Angelika Doetzlhofer, Ph.D., an assistant professor of neuroscience. "Without them, the hair cells end up disorganized and dysfunctional."
The cochlea is a coiled, fluid-filled structure bordered by a flexible membrane that vibrates when sound waves hit it. This vibration is passed through the fluid in the cochlea and sensed by specialized hair cells that line the tissue in four precise rows. Their name comes from the cells’ hairlike protrusions that detect movement of the cochlear fluid and create electrical signals that relay the sound to the brain.
During development, “parent cells” within the cochlea gradually differentiate into hair cells in a precise sequence, starting with the cells at the base of the cochlea and progressing toward its tip. The signaling protein Sonic Hedgehog was known to be released by nearby nerve cells in a time- and space-dependent pattern that matches that of hair cell differentiation. But the mechanism of Sonic Hedgehog’s action was unclear.
Doetzlhofer and postdoctoral fellow Ana Benito Gonzalez bred mice whose inner ear cells were missing Hey1 and Hey2, two genes known to be active in the parent cells but turned off in hair cells. They found that, without those genes, the cells were generated too early and were abnormally patterned: There were too many or too few rows of hair cells, and their hairlike protrusions were often deformed and pointed in the wrong direction.
"While these mice didn’t live long enough for us to test their hearing, we know from other studies that mice with disorganized hair cell patterns have serious hearing problems," says Doetzlhofer.
Further experiments demonstrated the role of Sonic Hedgehog in regulating the two key genes.
"Hey1 and Hey2 stop the parent cells from turning into hair cells until the time is right," explains Doetzlhofer. "Sonic Hedgehog applies those ‘brakes,’ then slowly releases pressure on them as the cochlea develops. If the brakes stop working, the hair cells are generated too early and end up misaligned."
She adds that Sonic Hedgehog, Hey1 and Hey2 are found in many other parent cell types throughout the developing nervous system and may play similar roles in timing the generation of other cell types.
The ear is an important organ that allows us to perceive the world around us. However, few of us are aware that not only the outer ear but also our skull bone can receive and conduct sound. Tatjana Tchumatchenko from the Max Planck Institute for Brain Research in Frankfurt and Tobias Reichenbach from Imperial College London have now developed a new model explaining how the vibrations of the surrounding bone and the basilar membrane are coupled. These new results could be important for the development of new headphones and hearing devices.
Our sense of hearing, the ability to perceive sounds, arises exclusively in the inner ear. When sound waves travel through the air and reach our ear canal, they cause different regions of the basilar membrane in the inner ear to vibrate; which regions vibrate depends on the frequency of the sound. It is these microscopic vibrations of the membrane that we perceive as sound. However, the inner ear is surrounded by bone that can also vibrate.
With the help of fluid dynamics calculations Tchumatchenko and Reichenbach have now discovered that the vibrations of the bone and basilar membrane are coupled. In other words, they can also mutually excite each other.
This gives rise to fascinating phenomena which, thanks to the new model, can now be understood: For example, two sounds with slightly different frequencies that arrive in the inner ear at the same time can overlap and excite the same regions on the basilar membrane. In this case, combination tones, or so-called otoacoustic emissions, are produced in the inner ear through the nonlinearity of the membrane. Precisely how these sounds leave the inner ear and how they spread inside the cochlea is currently a matter of scientific debate. “In our study we have shown that the combination tones can leave the inner ear in the form of a fast wave along the bone surface, and not, as previously assumed, by a wave along the basilar membrane,” explains Tatjana Tchumatchenko from the Max Planck Institute for Brain Research.
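As a rough numerical illustration of how combination tones arise (this is the standard distortion-product picture, not the authors' specific model), passing two pure tones through a nonlinearity creates energy at frequencies such as f2 - f1 and 2*f1 - f2 that are absent from the input. The sketch below uses an arbitrary quadratic-plus-cubic nonlinearity chosen purely for illustration:

```python
import numpy as np

fs = 48_000                      # sample rate (Hz); a 1 s window gives 1 Hz FFT bins
t = np.arange(fs) / fs
f1, f2 = 1000.0, 1200.0          # two nearby "primary" tones (Hz)

x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# A memoryless nonlinearity stands in for the basilar membrane's
# nonlinear response: small quadratic and cubic terms added to the input.
y = x + 0.3 * x**2 + 0.1 * x**3

spectrum = np.abs(np.fft.rfft(y)) / len(y)

def level_at(f_hz):
    # With a 1 s window, FFT bins fall exactly on integer frequencies.
    return spectrum[int(round(f_hz))]

# The nonlinearity creates energy at combination frequencies absent from
# the input: f2 - f1 (200 Hz) and 2*f1 - f2 (800 Hz). A bin away from all
# combination frequencies (500 Hz) stays at the numerical noise floor.
print(level_at(f2 - f1), level_at(2 * f1 - f2), level_at(500.0))
```

The quadratic term produces the difference tone at f2 - f1, while the cubic term produces the 2f1 - f2 distortion product, the component most often measured as an otoacoustic emission.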
Moreover, the new model proves that the travelling waves along the basilar membrane can be generated by both the vibrations of the cochlear bone and the vibrations of the air inside the ear canal. “Our results provide an elegant explanation for this long-known but poorly understood observation,” says Tobias Reichenbach from Imperial College London.
These results will help advance our understanding of the complex interaction between the dynamics of fluids and the mechanics of bone. This understanding could prove essential for future clinical and commercial applications of bone conduction, such as new-generation hearing aids and headphones integrated into glasses.
Hearing protein required to convert sound into brain signals
A specific protein found in the bridge-like structures that make up part of the auditory machinery of the inner ear is essential for hearing. The absence of this protein or impairment of the gene that codes for this protein leads to profound deafness in mice and humans, respectively, reports a team of researchers in the journal EMBO Molecular Medicine.
“The goal of our study was to identify which isoform of protocadherin-15 forms the tip-links, the essential connections of the auditory mechanotransduction machinery within mature hair cells that are needed to convert sound into electrical signals,” remarks Christine Petit, the lead author of the study and Professor at the Institut Pasteur in Paris and at Collège de France.
Three types of protocadherin-15 are known to exist in auditory sensory cells of the inner ear but it was not clear which of these protein isoforms was essential for hearing. “Our work pinpoints the CD2 isoform of protocadherin-15 as an essential component of the tip-link and reveals that the absence of protocadherin-15 CD2 in mouse hair cells results in profound deafness.”
Within the hair bundle, the sensory antenna of auditory sensory cells, the tip-link is a bridge-like structure that when stretched can activate the ion channel responsible for generating electrical signals from sound. Tension in the tip-link created by sound stimulation opens this channel of unknown molecular composition thus generating electrical signals and, ultimately, the perception of sound.
The researchers engineered mice that lack the CD2 isoform of protocadherin-15, and only during adulthood. While the absence of this isoform led to profound deafness, the lack of either of the other protocadherin-15 isoforms did not affect the mice's hearing.
Patients who carry a mutation in the gene encoding protocadherin-15 are affected by a rare devastating disorder, Usher syndrome, which is characterized by profound deafness, balance problems and gradual visual loss due to retinitis pigmentosa. In a separate approach, the scientists also sequenced the genes of 60 patients who had profound deafness without balance and visual impairment. Three of these patients were shown to have mutations specifically affecting protocadherin-15 CD2. “The demonstration of a requirement for protocadherin-15 CD2 for hearing not only in mice but also in humans constitutes a major step in the objective of deciphering the components of the auditory mechanotransduction machinery. This isoform can be used as a starting point to identify the other components of the auditory machinery. By focusing our attention on the CD2 isoform of protocadherin-15, we can now consider developing gene therapy strategies for deafness caused by defects in this gene,” says EMBO Member Christine Petit.
A precise rhythm of electrical impulses transmitted from cells in the inner ear coaches the brain how to hear, according to a new study led by researchers at the University of Pittsburgh School of Medicine. They report the first evidence of this developmental process today in the online version of Neuron.

The ear generates spontaneous electrical activity to trigger a response in the brain before hearing actually begins, said senior investigator Karl Kandler, Ph.D., professor of otolaryngology and neurobiology, Pitt School of Medicine. These patterned bursts start at inner hair cells in the cochlea, which is part of the inner ear, and travel along the auditory nerve to the brain.
"It’s long been speculated that these impulses are intended to ‘wire’ the brain auditory centers," he said. "Until now, however, no one has been able to provide experimental evidence to support this concept."
To map neural connectivity, Dr. Kandler’s team prepared sections of mouse brain containing the auditory pathways in a chemical that is inert until UV light hits it. They then pulsed laser light at a neuron, activating the chemical, which excites the nerve cell to generate an electrical impulse. By tracking the spread of the impulse to adjacent cells, they could map the network one neuron at a time.
All mice are born unable to hear; the sense develops around two weeks after birth. But even before hearing starts, the ear produces rhythmic bursts of electrical activity that cause a broad reaction in the brain’s auditory processing centers. As the beat goes on, the brain organizes itself, pruning unneeded connections and strengthening others. To investigate whether the beat is indeed important for this reorganization, the team used genetically engineered mice that lack a key receptor on the inner hair cells, a change that alters the cells' firing rhythm.
"In normal mice, the wiring diagram of the brain gets sharper and more efficient over time and they begin to hear," Dr. Kandler said. "But this doesn’t happen when the inner ear beats in a different rhythm, which means the brain isn’t getting the instructions it needs to wire itself correctly. We have evidence that these mice can detect sound, but they have problems perceiving the pitch of sounds."
In humans, such subtle hearing deficits are associated with central auditory processing disorder (CAPD), a difficulty in processing the meaning of sound. About 2 to 3 percent of children are affected by CAPD, and these children often have speech and language disorders or delays, and learning disabilities such as dyslexia. In contrast to hearing impairments caused by deficits in the ear itself, the causes underlying CAPD have remained obscure.
"Our findings suggest that an abnormal rhythm of electrical impulses early in life may be an important contributing factor in the development of CAPD. More research is needed to find out whether this also holds true for humans, but our results point to a new direction that is worth following up," Dr. Kandler said.
(Source: eurekalert.org)
Nanopores underlie our ability to tune in to a single voice
Inner-ear membrane uses tiny pores to mechanically separate sounds, researchers find.
Even in a crowded room full of background noise, the human ear is remarkably adept at tuning in to a single voice — a feat that has proved notoriously difficult for computers to match. A new analysis of the underlying mechanisms, conducted by researchers at MIT, has provided insights that could ultimately lead to better machine hearing, and perhaps to better hearing aids as well.
Our ears’ selectivity, it turns out, arises from evolution’s precise tuning of a tiny membrane inside the inner ear called the tectorial membrane. The viscosity of this membrane, its resistance to flow, depends on the size and distribution of tiny pores just a few tens of nanometers wide. This, in turn, provides mechanical filtering that helps sort out specific sounds.
The new findings are reported in the Biophysical Journal by a team led by MIT graduate student Jonathan Sellon, and including research scientist Roozbeh Ghaffari, former graduate student Shirin Farrahi, and professor of electrical engineering Dennis Freeman. The team collaborated with biologist Guy Richardson of the University of Sussex.
Elusive understanding
In discriminating among competing sounds, the human ear is “extraordinary compared to conventional speech- and sound-recognition technologies,” Freeman says. The exact reasons have remained elusive — but the importance of the tectorial membrane, located inside the cochlea, or inner ear, has become clear in recent years, largely through the work of Freeman and his colleagues. Now it seems that a flawed assumption contributed to the longstanding difficulty in understanding the importance of this membrane.
Much of our ability to differentiate among sounds is frequency-based, Freeman says — so researchers had “assumed that the better we could resolve frequency, the better we could hear.” But this assumption turns out not always to be true.
In fact, Freeman and his co-authors previously found that tectorial membranes with a certain genetic defect are actually highly sensitive to variations in frequency — and the result is worse hearing, not better.
The MIT team found “a fundamental tradeoff between how well you can resolve different frequencies and how long it takes to do it,” Freeman explains. That makes the finer frequency discrimination too slow to be useful in real-world sound selectivity.
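The tradeoff Freeman describes is a general property of waves, often called the Fourier or Gabor uncertainty principle: resolving two tones separated by Δf hertz requires observing the signal for roughly 1/Δf seconds. A small illustrative sketch (the specific tones, window, and peak threshold here are arbitrary choices for demonstration, not values from the study):

```python
import numpy as np

fs = 8000
f1, f2 = 1000.0, 1010.0   # two tones only 10 Hz apart

def resolved(duration):
    """Return True if an FFT over `duration` seconds shows two distinct peaks."""
    n = int(fs * duration)
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
    mag = np.abs(np.fft.rfft(x * np.hanning(n)))
    # Inspect the band around the tones; FFT bin index = frequency * duration.
    m = mag[int(900 * duration):int(1100 * duration)]
    # Count local maxima that rise above a tenth of the band's peak level.
    peaks = np.sum((m[1:-1] > m[:-2]) & (m[1:-1] > m[2:]) & (m[1:-1] > m.max() / 10))
    return peaks >= 2

print(resolved(0.05))   # 50 ms window: bin spacing 20 Hz, the tones merge
print(resolved(1.0))    # 1 s window: bin spacing 1 Hz, the tones separate
```

The short window cannot separate the two tones because its frequency bins are wider than their spacing; the long window separates them easily, but at the cost of a twenty-fold slower answer, which is exactly the speed penalty the MIT team describes.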
Too fast for neurons
Previous work by Freeman and colleagues has shown that the tectorial membrane plays a fundamental role in sound discrimination by carrying waves that stimulate a particular kind of sensory receptor. This process is essential in deciphering competing sounds, but it takes place too quickly for neural processes to keep pace. Nature, over the course of evolution, appears to have produced a very effective electromechanical system, Freeman says, that can keep up with the speed of these sound waves.
The new work explains how the membrane’s structure determines how well it filters sound. The team studied two genetic variants that cause nanopores within the tectorial membrane to be smaller or larger than normal. The pore size affects the viscosity of the membrane and its sensitivity to different frequencies.
The tectorial membrane is spongelike, riddled with tiny pores. By studying how its viscosity varies with pore size, the team was able to determine that the typical pore size observed in mice — about 40 nanometers across — represents an optimal size for combining frequency discrimination with overall sensitivity. Pores that are larger or smaller impair hearing.
“It really changes the way we think about this structure,” Ghaffari says. The new findings show that fluid viscosity and nanopores are essential to the membrane's performance. Changing the size of the tectorial membrane's nanopores, via biochemical manipulation or other means, could provide new ways to alter hearing sensitivity and frequency discrimination.
William Brownell, a professor of otolaryngology at Baylor College of Medicine, says, “This is the first study to suggest that porosity may affect cochlear tuning.” This work, he adds, “could provide insight” into the development of specific hearing problems.
Researchers at University of Colorado School of Medicine may have figured out what causes Meniere’s disease and how to attack it. According to Carol Foster, MD, from the department of otolaryngology and Robert Breeze, MD, a neurosurgeon, there is a strong association between Meniere’s disease and conditions involving temporary low blood flow in the brain such as migraine headaches.

Meniere’s affects approximately 3 to 5 million people in the United States. It is a disabling disorder resulting in repeated violent attacks of dizziness, ringing in the ear and hearing loss that can last for hours and can ultimately cause permanent deafness in the affected ear. Up until now, the cause of the attacks has been unknown, with no theory fully explaining the many symptoms and signs of the disorder.
"If our hypothesis is confirmed, treatment of vascular risk factors may allow control of symptoms and result in a decreased need for surgeries that destroy the balance function in order to control the spell," said Foster. "If attacks are controlled, the previously inevitable progression to severe hearing loss may be preventable in some cases."
Foster explains that these attacks can be caused by a combination of two factors: 1) a malformation of the inner ear, endolymphatic hydrops (the inner ear dilated with fluid) and 2) risk factors for vascular disease in the brain, such as migraine, sleep apnea, smoking and atherosclerosis.
The researchers propose that a fluid buildup in part of the inner ear, which is strongly associated with Meniere attacks, indicates the presence of a pressure-regulation problem that acts to cause mild, intermittent decreases of blood flow within the ear. When this is combined with vascular diseases that also lower blood flow to the brain and ear, sudden loss of blood flow similar to transient ischemic attacks (or mini strokes) in the brain can be generated in the inner ear sensory tissues. In young people who have hydrops without vascular disorders, no attacks occur because blood flow continues in spite of these fluctuations. However, in people with vascular diseases, these fluctuations are sufficient to rob the ear of blood flow and the nutrients the blood provides. When the tissues that sense hearing and motion are starved of blood, they stop sending signals to the brain, which sets off the vertigo, tinnitus and hearing loss in the disorder.
Restoration of blood flow does not resolve the problem. The scientists believe it triggers a damaging after-effect, the ischemia-reperfusion pathway, in the excitable tissues of the ear that silences the ear for several hours, resulting in the prolonged severe vertigo and hearing loss characteristic of the disorder. Although most of the tissues recover, each spell causes small areas of damage that over time result in permanent loss of both hearing and balance function in the ear.
Since the first linkage of endolymphatic hydrops and Meniere’s disease in 1938, a variety of mechanisms have been proposed to explain the attacks and the progressive deafness, but no answer has explained all aspects of the disorder, and no treatment based on these theories has proven capable of controlling the progression of the disease. This new theory, if proven, would provide many new avenues of treatment for this previously poorly-controlled disorder.
(Source: eurekalert.org)

Listen to this: Research upends understanding of how humans perceive sound
A key piece of the scientific model used for the past 30 years to help explain how humans perceive sound is wrong, according to a new study by researchers at the Stanford University School of Medicine.
The long-held theory helped to explain a part of the hearing process called “adaptation,” or how humans can hear everything from the drop of a pin to a jet engine blast with high acuity, without pain or damage to the ear. Its overturning could have significant impact on future research for treating hearing loss, said Anthony Ricci, PhD, the Edward C. and Amy H. Sewall Professor of Otolaryngology and senior author of the study.
“I would argue that adaptation is probably the most important step in the hearing process, and this study shows we have no idea how it works,” Ricci said. “Hearing damage caused by noise and by aging can target this particular molecular process. We need to know how it works if we are going to be able to fix it.”
The study was published Nov. 20 in Neuron. The lead author is postdoctoral scholar Anthony Peng, PhD.
Deep inside the ear, specialized cells called hair cells detect vibrations caused by air pressure differences and convert them into electrochemical signals that the brain interprets as sound. Adaptation is the part of this process that enables these sensory hair cells to regulate the decibel range over which they operate. The process helps protect the ear against sounds that are too loud by adjusting the ears’ sensitivity to match the noise level of the environment.
The traditional explanation of how adaptation works, based on earlier research in frogs and turtles, is that it is controlled by at least two complex cellular mechanisms, both of which require calcium entry through a specific, mechanically sensitive ion channel in auditory hair cells. The new study, however, finds that calcium is not required for adaptation in mammalian auditory hair cells and posits that one of the two previously described mechanisms is absent in auditory cochlear hair cells.
Experimenting mostly on rats, the Stanford scientists used ultrafast mechanical stimulation to elicit responses from hair cells as well as high-speed, high-resolution imaging to track calcium signals quickly before they had time to diffuse. After manipulating intracellular calcium in various ways, the scientists were surprised to find that calcium was not necessary for adaptation to occur, thus challenging the 30-year-old hypothesis and opening the door to new models of mechanotransduction (the conversion of mechanical signals into electrical signals) and adaptation.
“This somewhat heretical finding suggests that at least some of the underlying molecular mechanisms for adaptation must be different in mammalian cochlear hair cells as compared to that of frog or turtle hair cells, where adaptation was first described,” Ricci said.
The study was conducted to better understand how the adaptation process works by studying the machinery of the inner ear that converts sound waves into electrical signals.
“To me this is really a landmark study,” said Ulrich Mueller, PhD, professor and chair of molecular and cellular neuroscience at the Scripps Research Institute in La Jolla, who was not involved with the study. “It really shifts our understanding. The hearing field has such precise models — models that everyone uses. When one of the models tumbles, it’s monumental.”
Humans are born with 30,000 cochlear and vestibular hair cells per ear. When a significant number of these cells are lost or damaged, hearing or balance disorders occur. Hair cell loss occurs for multiple reasons, including aging and damage to the ear from loud sounds. Damage or impairment to the process of adaptation may lead to the further loss of hair cells and, therefore, hearing. Unlike many other species, including birds, humans and other mammals are unable to spontaneously regenerate these hearing cells.
As the U.S. population has aged and noise pollution has grown more severe, health experts now estimate that one in three adults over the age of 65 has developed at least some degree of hearing disability because of the destruction of this limited supply of hair cells.
“It’s by understanding just how the inner machinery of the ear works that scientists hope to eventually find ways to fix the parts that break,” Ricci said. “So when a key piece of the puzzle is shown to be wrong, it’s of extreme importance to scientists working to cure hearing loss.”
Researchers create the inner ear from stem cells, opening potential for new treatments
Indiana University scientists have transformed mouse embryonic stem cells into key structures of the inner ear. The discovery provides new insights into the sensory organ’s developmental process and sets the stage for laboratory models of disease, drug discovery and potential treatments for hearing loss and balance disorders.
A research team led by Eri Hashino, Ph.D., Ruth C. Holton Professor of Otolaryngology at Indiana University School of Medicine, reported that by using a three-dimensional cell culture method, they were able to coax stem cells to develop into inner-ear sensory epithelia — containing hair cells, supporting cells and neurons — that detect sound, head movements and gravity. The research was published online Wednesday in the journal Nature.
Previous attempts to “grow” inner-ear hair cells in standard cell culture systems have worked poorly in part because necessary cues to develop hair bundles — a hallmark of sensory hair cells and a structure critically important for detecting auditory or vestibular signals — are lacking in the flat cell-culture dish. But, Dr. Hashino said, the team determined that the cells needed to be suspended as aggregates in a specialized culture medium, which provided an environment more like that found in the body during early development.
The team mimicked the early development process with a precisely timed use of several small molecules that prompted the stem cells to differentiate, from one stage to the next, into precursors of the inner ear. But the three-dimensional suspension also provided important mechanical cues, such as the tension from the pull of cells on each other, said Karl R. Koehler, B.A., the paper’s first author and a graduate student in the medical neuroscience graduate program at the IU School of Medicine.
"The three-dimensional culture allows the cells to self-organize into complex tissues using mechanical cues that are found during embryonic development," Koehler said.
"We were surprised to see that once stem cells are guided to become inner-ear precursors and placed in 3-D culture, these cells behave as if they knew not only how to become different cell types in the inner ear, but also how to self-organize into a pattern remarkably similar to the native inner ear," Dr. Hashino said. "Our initial goal was to make inner-ear precursors in culture, but when we did testing we found thousands of hair cells in a culture dish."
Electrophysiology testing further proved that those hair cells generated from stem cells were functional, and were the type that sense gravity and motion. Moreover, neurons like those that normally link the inner-ear cells to the brain had also developed in the cell culture and were connected to the hair cells.
Additional research is needed to determine how inner-ear cells involved in auditory sensing might be developed, as well as how these processes can be applied to develop human inner-ear cells, the researchers said.
However, the work opens a door to better understanding of the inner-ear development process as well as creation of models for new drug development or cellular therapy to treat inner-ear disorders, they said.

Hearing loss from loud blasts may be treatable
Long-term hearing loss from loud explosions, such as blasts from roadside bombs, may not be as irreversible as previously thought, according to a new study by researchers at the Stanford University School of Medicine.
Using a mouse model, the study found that loud blasts actually cause hair-cell and nerve-cell damage, rather than structural damage, to the cochlea, which is the auditory portion of the inner ear. This could be good news for the millions of soldiers and civilians who, after surviving these often devastating bombs, suffer long-term hearing damage.
“It means we could potentially try to reduce this damage,” said John Oghalai, MD, associate professor of otolaryngology and senior author of the study, published July 1 in PLOS ONE. If the cochlea, an extremely delicate structure, had been shredded and ripped apart by a large blast, as earlier studies have asserted, the damage would be irreversible. (Researchers presume that the damage seen in these previous studies may have been due to the use of older, less sophisticated imaging techniques.)
“The most common issue we see veterans for is hearing loss,” said Oghalai, a scientist and clinician who treats patients at Stanford Hospital & Clinics and directs the hearing center at Lucile Packard Children’s Hospital.
The increasingly common use of improvised explosive devices, or IEDs, around the world provided the impetus for the new study, which was primarily funded by the U.S. Department of Defense. Among veterans with service-connected disabilities, tinnitus — a constant ringing in the ears — is the most prevalent condition. Hearing loss is the second-most-prevalent condition. But the results of the study would prove true for anyone who is exposed to loud blasts from other sources, such as jet engines, air bags or gunfire.
More than 60 percent of wounded-in-action service members have eardrum injuries, tinnitus or hearing loss, or some combination of these, the study says. Twenty-eight percent of all military personnel experience some degree of hearing loss post-deployment. The most devastating effect of blast injury to the ear is permanent hearing loss due to trauma to the cochlea. But exactly how this damage is caused has not been well understood.
The ears are extremely fragile instruments. Sound waves enter the ear, causing the eardrums to vibrate. These vibrations get sent to the cochlea in the inner ear, where fluid carries them to rows of hair cells, which in turn stimulate auditory nerve fibers. These impulses are then sent to the brain via the auditory nerve, where they get interpreted as sounds.
Permanent hearing loss from loud noise begins at about 85 decibels, typical of a hair dryer or a food blender. IEDs have noise levels approaching 170 decibels.
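For context, decibels are logarithmic: each 20 dB step is a tenfold increase in sound pressure, and each 10 dB step a tenfold increase in acoustic power. A quick back-of-the-envelope conversion of the article's figures (the 170 and 85 dB values come from the article; the formulas are the standard dB SPL definitions, and the derived ratios are our own arithmetic):

```python
def pressure_ratio(db_a, db_b):
    """Sound-pressure ratio implied by a difference in dB SPL."""
    return 10 ** ((db_a - db_b) / 20)

def intensity_ratio(db_a, db_b):
    """Sound-intensity (power) ratio; intensity scales as pressure squared."""
    return 10 ** ((db_a - db_b) / 10)

# An IED blast (~170 dB) versus the ~85 dB level (hair dryer, blender)
# at which chronic damage begins: an 85 dB difference.
print(pressure_ratio(170, 85))    # ~1.8e4 times the sound pressure
print(intensity_ratio(170, 85))   # ~3.2e8 times the acoustic power
```

In other words, a single blast delivers hundreds of millions of times more acoustic power to the ear than the loudest sounds considered safe for prolonged exposure.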
Damage to the eardrum is known to be common after large blasts, but this is easily detected during a clinical exam and usually can heal itself — or is surgically repairable — and is thus not typically the cause of long-term hearing loss.
In order to determine exactly what is causing the permanent hearing loss, Stanford researchers created a mouse model to study the effects of noise blasts on the ear.
After exposing anesthetized mice to loud blasts, researchers examined the inner workings of the mouse ear from the eardrum to the cochlea. The ears were examined from day one through three months. A micro-CT scanner was used to image the workings of the ear after dissection.
“When we looked inside the cochlea, we saw the hair-cell loss and auditory-nerve-cell loss,” Oghalai said.
“With one loud blast, you lose a huge number of these cells. What’s nice is that the hair cells and nerve cells are not immediately gone. The theory now is that if the ear could be treated with certain medications right after the blast, that might limit the damage.”
Previous studies on larger animals had found that the cochlea was torn apart and shredded after exposure to a loud blast. Stanford scientists did not find this in the mouse model and speculate that the use of older research techniques may have caused the damage.
“We found that the blast trauma is similar to what we see from lower-level noise exposure over time,” said Oghalai. “We lose the sensory hair cells that convert sound vibrations into electrical signals, and also the auditory nerve cells.”
Much of the resulting hearing loss after such blast damage to the ear is actually caused by the body’s immune response to the injured cells, Oghalai said. The creation of scar tissue to help heal the injury is a particular problem in the ear because the organ needs to vibrate to allow the hearing mechanism to work. Scar tissue damages that ability.
“There is going to be a window where we could stop whatever the body’s inflammatory response would be right after the blast,” Oghalai said. “We might be able to stop the damage. This will determine future research.”

A team of NIH-supported researchers is the first to show, in mice, an unexpected two-step process that happens during the growth and regeneration of inner ear tip links. Tip links are extracellular tethers that link stereocilia, the tiny sensory projections on inner ear hair cells that convert sound into electrical signals, and play a key role in hearing. The discovery offers a possible mechanism for potential interventions that could preserve hearing in people whose hearing loss is caused by genetic disorders related to tip link dysfunction. The work was supported by the National Institute on Deafness and Other Communication Disorders (NIDCD), a component of the National Institutes of Health.
The findings appear in the June 11, 2013, online edition of PLoS Biology. The senior author of the study is Gregory I. Frolenkov, an associate professor in the College of Medicine at the University of Kentucky, Lexington; his fellow, Artur A. Indzhykulian, Ph.D., is the lead author.
Stereocilia are bundles of bristly projections that extend from the tops of sensory cells, called hair cells, in the inner ear. Each stereocilia bundle is arranged in three neat rows that rise from lowest to highest like stair steps. Tip links are tiny thread-like strands that link the tip of a shorter stereocilium to the side of the taller one behind it. When sound vibrations enter the inner ear, the stereocilia, connected by the tip links, all lean to the same side and open special channels, called mechanotransduction channels. These pore-like openings allow potassium and calcium ions to enter the hair cell and kick off an electrical signal that eventually travels to the brain, where it is interpreted as sound.
The findings build on a number of recent discoveries in laboratories at the NIDCD and elsewhere that have carefully plotted the structure and function of tip links and the proteins that comprise them. Earlier studies had shown that tip links are made up of two proteins—cadherin-23 (CDH23) and protocadherin-15 (PCDH15)—that join to make the link, with PCDH15 at the bottom of the tip link at the site of the mechanotransduction channel, and CDH23 on the upper end. Scientists assumed that the assembly was static and stable once the two proteins bonded.
Tip links break easily with exposure to noise. But unlike hair cells, which can’t regenerate in humans, tip links repair themselves, mostly within a matter of hours. The breaking of tip links, and their regeneration, has been known for many years, and is seen as one of the causes of the temporary hearing loss you might experience after a loud blast of sound (or a loud concert). Once the tip links regenerate, hair cell function returns, usually to normal levels. What scientists didn’t know was how the tip link reassembled.
To study tip link assembly, the researchers treated young, postnatal (5-7 days) mouse sensory hair cells with BAPTA—a substance that, like loud noise, damages and disrupts tip links. To image the proteins, the group pioneered an improved scanning electron microscopy (SEM) technique of immunogold labeling that uses antibodies bound to gold particles that attach to the proteins. Then, using SEM, they imaged the cells at high resolution to determine the positions of the proteins before, during, and after BAPTA treatment.
What the researchers found was that after a tip link is chemically disrupted, a new tip link forms, but instead of the normal combination of CDH23 and PCDH15, the link is made up of PCDH15 proteins at both ends. Over the next 24 hours, the PCDH15 protein at the upper end is replaced by CDH23 and the tip link is back to normal.
Why tip links regenerate using a two-step instead of a neat one-step process is not known. For reasons that are still unclear, CDH23 disappears from stereocilia after noise damage while PCDH15 stays around. Looking to regenerate quickly, the lower PCDH15 latches onto another PCDH15, forming a shorter and functionally slightly weaker tip link. Later, at some time during the 36 hours after the damage, when CDH23 returns, PCDH15 gives up its provisional partner and latches onto its much stronger mate in CDH23. In other words, PCDH15 prefers to be with CDH23, but in a pinch it will bond weakly with another bit of PCDH15 until CDH23 shows up.
The researchers coupled the SEM observations with electrophysiology studies to show how the functional properties of the tip links changed throughout this two-step process. The temporary PCDH15/PCDH15 tip link has a slightly different functional response than the permanent PCDH15/CDH23 combination. Researchers were able to correlate the differences in function with the protein combinations that make up the tip link.
Additional experiments revealed that the same two-step process occurs during the initial formation of tip links as hair cells develop.
Previous research has shown that both CDH23 and PCDH15 are required for normal hearing and vision. In fact, NIDCD scientists in earlier studies have shown that mutations in either of these genes can cause the hearing loss or deaf-blindness found in Usher syndrome types 1D and 1F.
“In the case of deaf individuals who are unable to make functional CDH23, knowledge of this new temporary alliance of PCDH15 proteins to form a weaker, but still functional, tip link could inform treatments that would encourage the double PCDH15 bond to become permanent and maintain at least limited hearing,” said Tom Friedman, Ph.D., chief of the Laboratory of Molecular Genetics at the NIDCD, where the research began.