Posts tagged hair cells

(Image caption: The hair cells of mice missing just Hey2 are neatly lined up in four rows (left) while those missing Hey1 and Hey2 are disorganized (right). The cells’ hairlike protrusions (pink) can be misoriented, too. Credit: Angelika Doetzlhofer)
Hey1 and Hey2 ensure inner ear ‘hair cells’ are made at the right time, in the right place
Two Johns Hopkins neuroscientists have discovered the “molecular brakes” that time the generation of important cells in the inner ear cochleas of mice. These “hair cells” translate sound waves into electrical signals that are carried to the brain and are interpreted as sounds. If the arrangement of the cells is disordered, hearing is impaired.
A summary of the research will be published in The Journal of Neuroscience on Sept. 16.
"The proteins Hey1 and Hey2 act as brakes to prevent hair cell generation until the time is right," says Angelika Doetzlhofer, Ph.D., an assistant professor of neuroscience. "Without them, the hair cells end up disorganized and dysfunctional."
The cochlea is a coiled, fluid-filled structure bordered by a flexible membrane that vibrates when sound waves hit it. This vibration is passed through the fluid in the cochlea and sensed by specialized hair cells that line the tissue in four precise rows. Their name comes from the cells’ hairlike protrusions that detect movement of the cochlear fluid and create electrical signals that relay the sound to the brain.
During development, “parent cells” within the cochlea gradually differentiate into hair cells in a precise sequence, starting with the cells at the base of the cochlea and progressing toward its tip. The signaling protein Sonic Hedgehog was known to be released by nearby nerve cells in a time- and space-dependent pattern that matches that of hair cell differentiation. But the mechanism of Sonic Hedgehog’s action was unclear.
Doetzlhofer and postdoctoral fellow Ana Benito Gonzalez bred mice whose inner ear cells were missing Hey1 and Hey2, two genes known to be active in the parent cells but turned off in hair cells. They found that, without those genes, the cells were generated too early and were abnormally patterned: Rows of hair cells were either too many or too few, and their hairlike protrusions were often deformed and pointing in the wrong direction.
"While these mice didn’t live long enough for us to test their hearing, we know from other studies that mice with disorganized hair cell patterns have serious hearing problems," says Doetzlhofer.
Further experiments demonstrated the role of Sonic Hedgehog in regulating the two key genes.
"Hey1 and Hey2 stop the parent cells from turning into hair cells until the time is right," explains Doetzlhofer. "Sonic Hedgehog applies those ‘brakes,’ then slowly releases pressure on them as the cochlea develops. If the brakes stop working, the hair cells are generated too early and end up misaligned."
She adds that Sonic Hedgehog, Hey1 and Hey2 are found in many other parent cell types throughout the developing nervous system and may play similar roles in timing the generation of other cell types.
Noise-Induced Hearing Loss Alters Brain Responses to Speech
Prolonged exposure to loud noise alters how the brain processes speech, potentially increasing the difficulty in distinguishing speech sounds, according to neuroscientists at The University of Texas at Dallas.
In a paper published this week in Ear and Hearing, researchers demonstrated for the first time how noise-induced hearing loss affects the brain’s recognition of speech sounds.
Noise-induced hearing loss (NIHL) reaches all corners of the population, affecting an estimated 15 percent of Americans between the ages of 20 and 69, according to the National Institute on Deafness and Other Communication Disorders (NIDCD).
Exposure to intensely loud sounds leads to permanent damage of the hair cells, which act as sound receivers in the ear. Once damaged, the hair cells do not grow back, leading to NIHL.
“As we have made machines and electronic devices more powerful, the potential to cause permanent damage has grown tremendously,” said Dr. Michael Kilgard, co-author and Margaret Fonde Jonsson Professor in the School of Behavioral and Brain Sciences. “Even the smaller MP3 players can reach volume levels that are highly damaging to the ear in a matter of minutes.”
Before the study, scientists had not clearly understood the direct effects of NIHL on how the brain responds to speech.
To simulate two types of noise trauma that clinical populations face, UT Dallas scientists exposed rats to moderate or intense levels of noise for an hour. One group heard a high-frequency noise at 115 decibels, inducing moderate hearing loss, and a second group heard a low-frequency noise at 124 decibels, causing severe hearing loss.
For comparison, the American Speech-Language-Hearing Association lists the maximum output of an MP3 player or the sound of a chain saw at about 110 decibels and the siren on an emergency vehicle at 120 decibels. Regular exposure to sounds greater than 100 decibels for more than a minute at a time may lead to permanent hearing loss, according to the NIDCD.
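As a back-of-the-envelope illustration (not part of the study), the decibel scale is logarithmic, so seemingly small differences in dB correspond to large differences in acoustic intensity:

```python
def intensity_ratio(db_a, db_b):
    """Ratio of acoustic intensities for two sound levels given in decibels.

    The decibel scale is logarithmic: every 10 dB step is a tenfold
    increase in intensity.
    """
    return 10 ** ((db_a - db_b) / 10)

# The 124 dB blast used for the severe-loss group carries roughly 8 times
# the intensity of the 115 dB blast used for the moderate-loss group:
print(round(intensity_ratio(124, 115), 1))  # prints 7.9
```

This is why a chain saw at 110 dB and an IED-scale blast differ far more in delivered energy than the raw numbers suggest.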
Researchers observed how the two types of hearing loss affected speech sound processing in the rats by recording the neuronal response in the auditory cortex a month after the noise exposure. The auditory cortex, one of the main areas that processes sounds in the brain, is organized on a scale, like a piano. Neurons at one end of the cortex respond to low-frequency sounds, while other neurons at the opposite end react to higher frequencies.
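The piano-like tonotopic map described above can be put in quantitative terms. One standard textbook approximation (the Greenwood function, used here purely as an illustration; it is not part of this study, and its parameter values are commonly quoted human fits) relates position along the cochlea to the frequency that excites it:

```python
def greenwood_frequency(x, a=165.4, alpha=2.1, k=0.88):
    """Greenwood function: approximate characteristic frequency (Hz) at
    relative position x along the human cochlea (0 = apex, 1 = base).

    The constants a, alpha, and k are the commonly quoted human fits;
    treat them as illustrative rather than exact.
    """
    return a * (10 ** (alpha * x) - k)

# The map spans roughly the range of human hearing:
print(round(greenwood_frequency(0.0)))  # apex: ~20 Hz (low frequencies)
print(round(greenwood_frequency(1.0)))  # base: ~20,700 Hz (high frequencies)
```

The auditory cortex inherits this layout, which is why damage at one end of the frequency range shows up as abnormal activity in a corresponding region of cortex.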
In the group with severe hearing loss, less than one-third of the tested auditory cortex sites that normally respond to sound reacted to stimulation. The sites that did respond showed unusual patterns of activity: the neurons reacted more slowly, the sounds had to be louder, and the neurons responded to narrower frequency ranges than normal. Additionally, the rats could no longer tell the speech sounds apart in a behavioral task they had successfully completed before the hearing loss.
In the group with moderate hearing loss, the area of the cortex responding to sounds didn’t change, but the neurons’ reaction did. A larger area of the auditory cortex responded to low-frequency sounds. Neurons reacting to high frequencies needed more intense sound stimulation and responded more slowly than those in normal-hearing animals. Despite these changes, the rats were still able to discriminate the speech sounds in a behavioral task.
“Although the ear is critical to hearing, it is just the first step of many processing stages needed to hold a conversation,” Kilgard said. “We are beginning to understand how hearing damage alters the brain and makes it hard to process speech, especially in noisy environments.”
Hearing protein required to convert sound into brain signals
A specific protein found in the bridge-like structures that make up part of the auditory machinery of the inner ear is essential for hearing. The absence of this protein or impairment of the gene that codes for this protein leads to profound deafness in mice and humans, respectively, reports a team of researchers in the journal EMBO Molecular Medicine.
“The goal of our study was to identify which isoform of protocadherin-15 forms the tip-links, the essential connections of the auditory mechanotransduction machinery within mature hair cells that are needed to convert sound into electrical signals,” remarks Christine Petit, the lead author of the study and Professor at the Institut Pasteur in Paris and at Collège de France.
Three types of protocadherin-15 are known to exist in auditory sensory cells of the inner ear but it was not clear which of these protein isoforms was essential for hearing. “Our work pinpoints the CD2 isoform of protocadherin-15 as an essential component of the tip-link and reveals that the absence of protocadherin-15 CD2 in mouse hair cells results in profound deafness.”
Within the hair bundle, the sensory antenna of auditory sensory cells, the tip-link is a bridge-like structure that when stretched can activate the ion channel responsible for generating electrical signals from sound. Tension in the tip-link created by sound stimulation opens this channel of unknown molecular composition thus generating electrical signals and, ultimately, the perception of sound.
The researchers engineered mice in which only the CD2 isoform of protocadherin-15 was missing, and only during adulthood. While the absence of this isoform led to profound deafness, the lack of the other protocadherin-15 isoforms did not affect the mice’s hearing.
Patients who carry a mutation in the gene encoding protocadherin-15 are affected by a rare devastating disorder, Usher syndrome, which is characterized by profound deafness, balance problems and gradual visual loss due to retinitis pigmentosa. In a separate approach, the scientists also sequenced the genes of 60 patients who had profound deafness without balance and visual impairment. Three of these patients were shown to have mutations specifically affecting protocadherin-15 CD2. “The demonstration of a requirement for protocadherin-15 CD2 for hearing not only in mice but also in humans constitutes a major step in the objective of deciphering the components of the auditory mechanotransduction machinery. This isoform can be used as a starting point to identify the other components of the auditory machinery. By focusing our attention on the CD2 isoform of protocadherin-15, we can now consider developing gene therapy strategies for deafness caused by defects in this gene,” says EMBO Member Christine Petit.
A precise rhythm of electrical impulses transmitted from cells in the inner ear coaches the brain how to hear, according to a new study led by researchers at the University of Pittsburgh School of Medicine. They report the first evidence of this developmental process today in the online version of Neuron.

The ear generates spontaneous electrical activity to trigger a response in the brain before hearing actually begins, said senior investigator Karl Kandler, Ph.D., professor of otolaryngology and neurobiology, Pitt School of Medicine. These patterned bursts start at inner hair cells in the cochlea, which is part of the inner ear, and travel along the auditory nerve to the brain.
"It’s long been speculated that these impulses are intended to ‘wire’ the brain auditory centers," he said. "Until now, however, no one has been able to provide experimental evidence to support this concept."
To map neural connectivity, Dr. Kandler’s team prepared sections of a mouse brain containing the auditory pathways in a chemical that is inert until UV light hits it. They then pulsed laser light at a neuron, activating the chemical and exciting the nerve cell to generate an electrical impulse. By tracking the spread of the impulse to adjacent cells, they could map the network one neuron at a time.
All mice are born unable to hear; the sense develops around two weeks after birth. But even before hearing starts, the ear produces rhythmic bursts of electrical activity that cause a broad reaction in the brain’s auditory processing centers. As the beat goes on, the brain organizes itself, pruning unneeded connections and strengthening others. To investigate whether the beat is indeed important for this reorganization, the team used genetically engineered mice that lack a key receptor on the inner hair cells, a deletion that changes the rhythm of the bursts.
"In normal mice, the wiring diagram of the brain gets sharper and more efficient over time and they begin to hear," Dr. Kandler said. "But this doesn’t happen when the inner ear beats in a different rhythm, which means the brain isn’t getting the instructions it needs to wire itself correctly. We have evidence that these mice can detect sound, but they have problems perceiving the pitch of sounds."
In humans, such subtle hearing deficits are associated with central auditory processing disorders (CAPD), difficulty processing the meaning of sound. About 2 to 3 percent of children are affected by CAPD, and these children often have speech and language disorders or delays, and learning disabilities such as dyslexia. In contrast to hearing impairments caused by deficits in the ear itself, the causes underlying CAPD have remained obscure.
"Our findings suggest that an abnormal rhythm of electrical impulses early in life may be an important contributing factor in the development of CAPD. More research is needed to find out whether this also holds true for humans, but our results point to a new direction that is worth following up," Dr. Kandler said.
(Source: eurekalert.org)

Listen to this: Research upends understanding of how humans perceive sound
A key piece of the scientific model used for the past 30 years to help explain how humans perceive sound is wrong, according to a new study by researchers at the Stanford University School of Medicine.
The long-held theory helped to explain a part of the hearing process called “adaptation,” or how humans can hear everything from the drop of a pin to a jet engine blast with high acuity, without pain or damage to the ear. Its overturning could have significant impact on future research for treating hearing loss, said Anthony Ricci, PhD, the Edward C. and Amy H. Sewall Professor of Otolaryngology and senior author of the study.
“I would argue that adaptation is probably the most important step in the hearing process, and this study shows we have no idea how it works,” Ricci said. “Hearing damage caused by noise and by aging can target this particular molecular process. We need to know how it works if we are going to be able to fix it.”
The study was published Nov. 20 in Neuron. The lead author is postdoctoral scholar Anthony Peng, PhD.
Deep inside the ear, specialized cells called hair cells detect vibrations caused by air pressure differences and convert them into electrochemical signals that the brain interprets as sound. Adaptation is the part of this process that enables these sensory hair cells to regulate the decibel range over which they operate. The process helps protect the ear against sounds that are too loud by adjusting the ears’ sensitivity to match the noise level of the environment.
The traditional explanation for how adaptation works, based on earlier research on frogs and turtles, is that it is controlled by at least two complex cellular mechanisms, both of which require calcium entry through a specific, mechanically sensitive ion channel in auditory hair cells. The new study, however, finds that calcium is not required for adaptation in mammalian auditory hair cells and posits that one of the two previously described mechanisms is absent in auditory cochlear hair cells.
Experimenting mostly on rats, the Stanford scientists used ultrafast mechanical stimulation to elicit responses from hair cells as well as high-speed, high-resolution imaging to track calcium signals quickly before they had time to diffuse. After manipulating intracellular calcium in various ways, the scientists were surprised to find that calcium was not necessary for adaptation to occur, thus challenging the 30-year-old hypothesis and opening the door to new models of mechanotransduction (the conversion of mechanical signals into electrical signals) and adaptation.
“This somewhat heretical finding suggests that at least some of the underlying molecular mechanisms for adaptation must be different in mammalian cochlear hair cells as compared to that of frog or turtle hair cells, where adaptation was first described,” Ricci said.
The study was conducted to better understand how the adaptation process works by studying the machinery of the inner ear that converts sound waves into electrical signals.
“To me this is really a landmark study,” said Ulrich Mueller, PhD, professor and chair of molecular and cellular neuroscience at the Scripps Research Institute in La Jolla, who was not involved with the study. “It really shifts our understanding. The hearing field has such precise models — models that everyone uses. When one of the models tumbles, it’s monumental.”
Humans are born with 30,000 cochlear and vestibular hair cells per ear. When a significant number of these cells are lost or damaged, hearing or balance disorders occur. Hair cell loss occurs for multiple reasons, including aging and damage to the ear from loud sounds. Damage or impairment to the process of adaptation may lead to the further loss of hair cells and, therefore, hearing. Unlike many other species, including birds, humans and other mammals are unable to spontaneously regenerate these hearing cells.
As the U.S. population has aged and noise pollution has grown more severe, health experts now estimate that one in three adults over the age of 65 has developed at least some degree of hearing disability because of the destruction of this limited number of hair cells.
“It’s by understanding just how the inner machinery of the ear works that scientists hope to eventually find ways to fix the parts that break,” Ricci said. “So when a key piece of the puzzle is shown to be wrong, it’s of extreme importance to scientists working to cure hearing loss.”
Single tone alerts brain to complete sound pattern
The processing of sound in the brain is more advanced than previously thought. When we hear a tone, our brain temporarily strengthens that tone but also any tones separated from it by one or more octaves. A research team from Utrecht and Nijmegen published an article on the subject in the journal PNAS on 2 September.
We hear with our brain. The cochlea picks up sound vibrations but the signals produced as a result are processed by the brain, using known patterns. If, for example, you briefly hear a weak tone, your hearing focuses on that tone and suppresses any frequencies around it. This makes it easier to notice any relevant sounds in your surroundings. The present research has shown that this ‘auditory attention filter’ is much more complex than believed until now: frequencies that have an octave relationship with the target tone are also heard better.
John van Opstal, professor of Biophysics at Radboud University: ‘This test proves that the brain prepares for a more extensive pattern of tones, even if the person just hears a single test tone or if he has a tone in mind. These extra tones in the pattern were not sounded during the experiment, but the brain complements the information received from the cochlea. This is scientifically interesting. Audiology, for example, at present places great emphasis on the cochlea.’
Octave relationship
The subjects undergoing the experiment did not have an easy time. For an hour they listened to unstructured noise containing very soft tones that they had to detect. Every few seconds they were presented with a tone of 1000 Hz, the cue. Then during one of two time intervals, a very quiet, short second tone was sounded. The subject had to indicate in which of the two intervals they had heard the second tone. It became apparent that tones having an octave relationship with the cue were all heard better, and those around the cue were heard less well. An octave is a well-known term in music, indicating the distance between two tones, the frequencies of which have a 2-to-1 relationship.
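Because an octave is a 2:1 frequency ratio, the tones sharing an octave relationship with the 1000 Hz cue form a simple doubling-and-halving series. A quick sketch (illustrative only; the function is not from the study):

```python
def octave_series(cue_hz, n_octaves=3):
    """Frequencies separated from the cue by whole octaves.

    An octave is a 2:1 frequency ratio, so the series is produced by
    repeatedly doubling and halving the cue frequency.
    """
    return sorted(cue_hz * 2.0 ** k for k in range(-n_octaves, n_octaves + 1))

# Tones the attention filter boosted around a 1000 Hz cue:
print(octave_series(1000))  # [125.0, 250.0, 500.0, 1000.0, 2000.0, 4000.0, 8000.0]
```

These octave-related frequencies were heard better than tones at nearby, non-octave frequencies, which were actively suppressed.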
Voice
Van Opstal: ‘We wanted to gather data on the auditory attention filter around the target tone. When we made the range larger than other researchers had done previously, more peaks suddenly appeared. This was a complete surprise to us. One possible explanation could be that the hearing system has evolved in order to hear sounds made by members of an animal’s own species (voices in the case of humans) in noisy surroundings. Vocalisations always consist of harmonic complexes of several simultaneous tones having an octave relationship with each other.’
Hearing aid
The researchers, who work at Utrecht University, the UMC Utrecht Brain Center and Radboud University Nijmegen, can easily think up applications for this fundamental research. If, for example, someone no longer hears high tones because of damage to the cochlear hair cells, the hearing aid can be adjusted in such a way that it converts those tones so they sound one or more octaves lower. Since the brain itself ‘fills in’ tones with an octave relationship, that person’s perception should then become more normal. It is also important for commercial sound producers to know how tones are perceived. That is why Philips Research is involved in this research in their department ‘Brain, Body and Behavior’.
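The hearing-aid idea above can be sketched as a simple transposition rule. This is only an illustration of the principle, not an actual hearing-aid algorithm; the function name and the cutoff frequency are hypothetical:

```python
def transpose_into_range(freq_hz, max_audible_hz):
    """Lower a tone by whole octaves (halving) until it falls within the
    listener's residual hearing range.

    Because octave shifts preserve the 2:1 relationships that the brain's
    attention filter tracks, the transposed tone should still fit the
    pattern the brain expects and can 'fill in'.
    """
    while freq_hz > max_audible_hz:
        freq_hz /= 2.0
    return freq_hz

# A 6000 Hz tone for a listener who hears nothing above 2000 Hz:
print(transpose_into_range(6000.0, 2000.0))  # 1500.0
```
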
Researchers create the inner ear from stem cells, opening potential for new treatments
Indiana University scientists have transformed mouse embryonic stem cells into key structures of the inner ear. The discovery provides new insights into the sensory organ’s developmental process and sets the stage for laboratory models of disease, drug discovery and potential treatments for hearing loss and balance disorders.
A research team led by Eri Hashino, Ph.D., Ruth C. Holton Professor of Otolaryngology at Indiana University School of Medicine, reported that by using a three-dimensional cell culture method, they were able to coax stem cells to develop into inner-ear sensory epithelia — containing hair cells, supporting cells and neurons — that detect sound, head movements and gravity. The research was reported online Wednesday in the journal Nature.
Previous attempts to “grow” inner-ear hair cells in standard cell culture systems have worked poorly in part because necessary cues to develop hair bundles — a hallmark of sensory hair cells and a structure critically important for detecting auditory or vestibular signals — are lacking in the flat cell-culture dish. But, Dr. Hashino said, the team determined that the cells needed to be suspended as aggregates in a specialized culture medium, which provided an environment more like that found in the body during early development.
The team mimicked the early development process with a precisely timed use of several small molecules that prompted the stem cells to differentiate, from one stage to the next, into precursors of the inner ear. But the three-dimensional suspension also provided important mechanical cues, such as the tension from the pull of cells on each other, said Karl R. Koehler, B.A., the paper’s first author and a graduate student in the medical neuroscience graduate program at the IU School of Medicine.
"The three-dimensional culture allows the cells to self-organize into complex tissues using mechanical cues that are found during embryonic development," Koehler said.
"We were surprised to see that once stem cells are guided to become inner-ear precursors and placed in 3-D culture, these cells behave as if they knew not only how to become different cell types in the inner ear, but also how to self-organize into a pattern remarkably similar to the native inner ear," Dr. Hashino said. "Our initial goal was to make inner-ear precursors in culture, but when we did testing we found thousands of hair cells in a culture dish."
Electrophysiology testing further proved that those hair cells generated from stem cells were functional, and were the type that sense gravity and motion. Moreover, neurons like those that normally link the inner-ear cells to the brain had also developed in the cell culture and were connected to the hair cells.
Additional research is needed to determine how inner-ear cells involved in auditory sensing might be developed, as well as how these processes can be applied to develop human inner-ear cells, the researchers said.
However, the work opens a door to better understanding of the inner-ear development process as well as creation of models for new drug development or cellular therapy to treat inner-ear disorders, they said.

Hearing loss from loud blasts may be treatable
Long-term hearing loss from loud explosions, such as blasts from roadside bombs, may not be as irreversible as previously thought, according to a new study by researchers at the Stanford University School of Medicine.
Using a mouse model, the study found that loud blasts actually cause hair-cell and nerve-cell damage, rather than structural damage, to the cochlea, which is the auditory portion of the inner ear. This could be good news for the millions of soldiers and civilians who, after surviving these often devastating bombs, suffer long-term hearing damage.
“It means we could potentially try to reduce this damage,” said John Oghalai, MD, associate professor of otolaryngology and senior author of the study, published July 1 in PLOS ONE. If the cochlea, an extremely delicate structure, had been shredded and ripped apart by a large blast, as earlier studies have asserted, the damage would be irreversible. (Researchers presume that the damage seen in these previous studies may have been due to the use of older, less sophisticated imaging techniques.)
“The most common issue we see veterans for is hearing loss,” said Oghalai, a scientist and clinician who treats patients at Stanford Hospital & Clinics and directs the hearing center at Lucile Packard Children’s Hospital.
The increasingly common use of improvised explosive devices, or IEDs, around the world provided the impetus for the new study, which was primarily funded by the U.S. Department of Defense. Among veterans with service-connected disabilities, tinnitus — a constant ringing in the ears — is the most prevalent condition. Hearing loss is the second-most-prevalent condition. But the results of the study would prove true for anyone who is exposed to loud blasts from other sources, such as jet engines, air bags or gunfire.
More than 60 percent of wounded-in-action service members have eardrum injuries, tinnitus or hearing loss, or some combination of these, the study says. Twenty-eight percent of all military personnel experience some degree of hearing loss post-deployment. The most devastating effect of blast injury to the ear is permanent hearing loss due to trauma to the cochlea. But exactly how this damage is caused has not been well understood.
The ears are extremely fragile instruments. Sound waves enter the ear, causing the eardrums to vibrate. These vibrations get sent to the cochlea in the inner ear, where fluid carries them to rows of hair cells, which in turn stimulate auditory nerve fibers. These impulses are then sent to the brain via the auditory nerve, where they get interpreted as sounds.
Permanent hearing loss from loud noise begins at about 85 decibels, typical of a hair dryer or a food blender. IEDs have noise levels approaching 170 decibels.
Damage to the eardrum is known to be common after large blasts, but this is easily detected during a clinical exam and usually can heal itself — or is surgically repairable — and is thus not typically the cause of long-term hearing loss.
In order to determine exactly what is causing the permanent hearing loss, Stanford researchers created a mouse model to study the effects of noise blasts on the ear.
After exposing anesthetized mice to loud blasts, researchers examined the inner workings of the mouse ear from the eardrum to the cochlea. The ears were examined from day one through three months. A micro-CT scanner was used to image the workings of the ear after dissection.
“When we looked inside the cochlea, we saw the hair-cell loss and auditory-nerve-cell loss,” Oghalai said.
“With one loud blast, you lose a huge number of these cells. What’s nice is that the hair cells and nerve cells are not immediately gone. The theory now is that if the ear could be treated with certain medications right after the blast, that might limit the damage.”
Previous studies on larger animals had found that the cochlea was torn apart and shredded after exposure to a loud blast. Stanford scientists did not find this in the mouse model and speculate that the use of older research techniques may have caused the damage.
“We found that the blast trauma is similar to what we see from lower-level noise exposure over time,” said Oghalai. “We lose the sensory hair cells that convert sound vibrations into electrical signals, and also the auditory nerve cells.”
Much of the resulting hearing loss after such blast damage to the ear is actually caused by the body’s immune response to the injured cells, Oghalai said. The creation of scar tissue to help heal the injury is a particular problem in the ear because the organ needs to vibrate to allow the hearing mechanism to work. Scar tissue damages that ability.
“There is going to be a window where we could stop whatever the body’s inflammatory response would be right after the blast,” Oghalai said. “We might be able to stop the damage. This will determine future research.”

A team of NIH-supported researchers is the first to show, in mice, an unexpected two-step process that happens during the growth and regeneration of inner ear tip links. Tip links are extracellular tethers that link stereocilia, the tiny sensory projections on inner ear hair cells that convert sound into electrical signals, and play a key role in hearing. The discovery offers a possible mechanism for potential interventions that could preserve hearing in people whose hearing loss is caused by genetic disorders related to tip link dysfunction. The work was supported by the National Institute on Deafness and Other Communication Disorders (NIDCD), a component of the National Institutes of Health.
The findings appear in the June 11, 2013, online edition of PLoS Biology. The senior author of the study is Gregory I. Frolenkov, an associate professor in the College of Medicine at the University of Kentucky, Lexington; his fellow, Artur A. Indzhykulian, Ph.D., is the lead author.
Stereocilia are bundles of bristly projections that extend from the tops of sensory cells, called hair cells, in the inner ear. Each stereocilia bundle is arranged in three neat rows that rise from lowest to highest like stair steps. Tip links are tiny thread-like strands that link the tip of a shorter stereocilium to the side of the taller one behind it. When sound vibrations enter the inner ear, the stereocilia, connected by the tip links, all lean to the same side and open special channels, called mechanotransduction channels. These pore-like openings allow potassium and calcium ions to enter the hair cell and kick off an electrical signal that eventually travels to the brain, where it is interpreted as sound.
The findings build on a number of recent discoveries in laboratories at the NIDCD and elsewhere that have carefully plotted the structure and function of tip links and the proteins that comprise them. Earlier studies had shown that tip links are made up of two proteins—cadherin-23 (CDH23) and protocadherin-15 (PCDH15)—that join to make the link, with PCDH15 at the bottom of the tip link at the site of the mechanotransduction channel, and CDH23 on the upper end. Scientists assumed that the assembly was static and stable once the two proteins bonded.
Tip links break easily with exposure to noise. But unlike hair cells, which can’t regenerate in humans, tip links repair themselves, mostly within a matter of hours. The breaking of tip links, and their regeneration, has been known for many years, and is seen as one of the causes of the temporary hearing loss you might experience after a loud blast of sound (or a loud concert). Once the tip links regenerate, hair cell function returns, usually to normal levels. What scientists didn’t know was how the tip link reassembled.
To study tip link assembly, the researchers treated young, postnatal (5-7 days) mouse sensory hair cells with BAPTA—a substance that, like loud noise, damages and disrupts tip links. To image the proteins, the group pioneered an improved scanning electron microscopy (SEM) technique of immunogold labeling that uses antibodies bound to gold particles that attach to the proteins. Then, using SEM, they imaged the cells at high resolution to determine the positions of the proteins before, during, and after BAPTA treatment.
What the researchers found was that after a tip link is chemically disrupted, a new tip link forms, but instead of the normal combination of CDH23 and PCDH15, the link is made up of PCDH15 proteins at both ends. Over the next 24 hours, the PCDH15 protein at the upper end is replaced by CDH23 and the tip link is back to normal.
Why tip links regenerate using a two-step instead of a neat one-step process is not known. For reasons that are still unclear, CDH23 disappears from stereocilia after noise damage while PCDH15 stays around. To regenerate quickly, the lower PCDH15 latches onto another PCDH15, forming a shorter and functionally slightly weaker tip link. Later, at some time during the 36 hours after the damage, when CDH23 returns, PCDH15 gives up its provisional partner and latches onto its much stronger mate, CDH23. In other words, PCDH15 prefers to pair with CDH23, but in a pinch it will bond weakly with another PCDH15 until CDH23 shows up.
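The two-step repair described above can be pictured as a simple sequence of states. The sketch below (purely illustrative, not from the study; all names are invented) walks a tip link from broken, to the weaker provisional PCDH15/PCDH15 bond, to the mature PCDH15/CDH23 link:

```python
# Illustrative model of the two-step tip-link repair process.
# States and protein pairings follow the article's description;
# this is a teaching sketch, not a biophysical simulation.

REPAIR_STEPS = [
    ("broken", None),                       # noise or BAPTA severs the link
    ("provisional", ("PCDH15", "PCDH15")),  # weaker all-PCDH15 bond forms within hours
    ("mature", ("PCDH15", "CDH23")),        # CDH23 returns and replaces the upper PCDH15
]

def repair_tip_link():
    """Yield each state of a tip link as it regenerates."""
    for state, composition in REPAIR_STEPS:
        yield state, composition

for state, composition in repair_tip_link():
    print(state, composition)
```

The key point the model captures is that the link is functional, but not fully normal, in the intermediate state.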
The researchers coupled the SEM observations with electrophysiology studies to show how the functional properties of the tip links changed throughout this two-step process. The temporary PCDH15/PCDH15 tip link has a slightly different functional response than the permanent PCDH15/CDH23 combination. The researchers were able to correlate the differences in function with the protein combinations that make up the tip link.
Additional experiments revealed that when hair cells develop, the tip links use the same two-step process.
Previous research has shown that both CDH23 and PCDH15 are required for normal hearing and vision. In fact, NIDCD scientists in earlier studies have shown that mutations in either of these genes can cause the hearing loss or deaf-blindness found in Usher syndrome types 1D and 1F.
“In the case of deaf individuals who are unable to make functional CDH23, knowledge of this new temporary alliance of PCDH15 proteins to form a weaker, but still functional, tip link could inform treatments that would encourage the double PCDH15 bond to become permanent and maintain at least limited hearing,” said Tom Friedman, Ph.D., chief of the Laboratory of Molecular Genetics at the NIDCD, where the research began.
Improved Hearing Anticipated for Implant Recipients
The cochlear implant is widely considered to be the most successful neural prosthetic on the market. The implant, which helps deaf individuals perceive sound, translates auditory information into electrical signals that go directly to the brain, bypassing the damaged cells that would normally do this job.
According to the National Institute on Deafness and Other Communication Disorders, approximately 188,000 people worldwide have received cochlear implants since these devices were introduced in the early 1980s, including roughly 41,500 adults and 25,500 children in the United States.
Despite their prevalence, cochlear implants have a long way to go before their performance is comparable to that of the intact human ear. Led by Pamela Bhatti, Ph.D., a team of researchers at the Georgia Institute of Technology has developed a new type of interface between the device and the brain that could dramatically improve the sound quality of the next generation of implants.
A normal ear processes sound the way a Rube Goldberg machine flips a light switch — via a perfectly timed chain reaction involving a number of pieces and parts. First, sound travels down the canal of the outer ear, striking the eardrum and causing it to vibrate. The vibration of the eardrum causes small bones in the middle ear to vibrate, which, in turn, creates movement in the fluid of the inner ear, or cochlea. This causes movement in tiny structures called hair cells, which translate the movement into electrical signals that travel to the brain via the auditory nerve.
Dysfunctional hair cells are the most common culprit in a type of hearing loss called sensorineural deafness, named for the resulting breakdown in communication between the ear and the brain. Sometimes the hair cells don’t function properly from birth, but severe trauma or a bad infection can cause irreparable damage to these delicate structures as well.
Contemporary cochlear implants
Traditional hearing aids, which work by amplifying sound, rely on the presence of some functioning hair cells. A cochlear implant, on the other hand, bypasses the hair cells completely. Rather than restoring their function, it works by translating sound vibrations captured by a microphone outside the ear into electrical signals. These signals travel along the auditory nerve to the brain, which interprets them as sound.
Cochlear implants are only recommended for individuals with severe to profound sensorineural hearing loss, meaning those who aren’t able to hear sounds below 70 decibels. (Conversational speech typically occurs between 20 and 60 decibels.)
The device itself consists of an external component that attaches via a magnetic disk to an internal component, implanted under the skin behind the ear. The external component detects sounds and selectively amplifies speech. The internal component converts this information into electrical impulses, which are sent to a bundle of thin wire electrodes threaded through the cochlea.
Improving the interface
As an electrical engineer, Bhatti sees the electrode configuration as a significant barrier to clear sound transmission in today's devices.
"In an intact ear, the hair cells are plentiful, and are in close contact with the nerves that transmit sound information to the brain," says Bhatti. "The challenge with the implant is getting efficient coupling between the electrodes and the nerves."
Contemporary implants contain between 12 and 22 wire electrodes, each of which conveys a signal for a different pitch. The idea is that the more electrodes, the clearer the message.
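One way to see why more electrodes mean a clearer message: the implant's processor must divide the audible frequency range into one band per electrode, so fewer electrodes force wider, coarser bands. The sketch below is a simplified illustration (not Bhatti's design, and real devices use more sophisticated filter banks); the 200 Hz–8 kHz range and log-spaced band edges are assumptions for demonstration:

```python
# Illustrative sketch: splitting an assumed audible range into one
# log-spaced frequency band per electrode. Real implant processors
# use more sophisticated filter banks and stimulation strategies.

def electrode_bands(n_electrodes, lo_hz=200.0, hi_hz=8000.0):
    """Return (low, high) band edges in Hz, one band per electrode."""
    edges = [lo_hz * (hi_hz / lo_hz) ** (i / n_electrodes)
             for i in range(n_electrodes + 1)]
    return list(zip(edges[:-1], edges[1:]))

# A 22-electrode array covers the range in 22 narrow bands;
# a 12-electrode array must make do with wider, coarser ones.
print(len(electrode_bands(22)))  # 22
print(len(electrode_bands(12)))  # 12
```

Each electrode then stimulates the region of the auditory nerve corresponding to its band, so narrower bands give the brain finer pitch information.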
So why not add more wire electrodes to the current design and call it a day?
Much like house-hunting in New York City, the problem comes down to a serious lack of available real estate. At its widest, the cochlea is 2 millimeters in diameter, or about the thickness of a nickel. As it coils, it tapers down to a mere 200 micrometers, about the width of a human hair.
"While we’d like to be able to increase the number of electrodes, the space issue is a major challenge from an engineering perspective," says Bhatti.
With funding from the National Science Foundation, Bhatti and her team have developed a new thin-film electrode array that is up to three times more sensitive than traditional wire electrodes, without adding bulk.
Unlike wire electrodes, the new array is also flexible, meaning it can get closer to the inner wall of the cochlea. The researchers believe this will create better coupling between the array and the nervous system, leading to a crisper signal.
According to Bhatti, one of the biggest challenges is actually implanting the device into the spiral-shaped cochlea:
"We could have created the best array in the world, but it wouldn’t have mattered if the surgeon couldn’t get it in the right spot," says Bhatti.
To combat this problem, the team has invented an insertion device that protects the array and serves as a guide for surgeons to ensure proper placement.
Before the new array can be approved for use in humans, it will need to undergo rigorous testing to ensure that it is both safe and effective; however, Bhatti is already thinking about what’s next. She envisions that one day, the electrodes won’t need to be attached to an array at all. Instead, they will be anchored directly to the cochlea with a biocompatible material that will allow them to more seamlessly integrate with the brain.
The most important thing, according to Bhatti, is not to lose sight of the big picture.
"We are always designing with the end-user in mind," says Bhatti. "The human component is the most important one to consider when we translate science into practice."