Neuroscience

Articles and news from the latest research reports.

Posts tagged deafness

Music to your ears?

Many people listen to loud music without realizing that it can damage their hearing. This damage can lead to difficulties in understanding speech during age-related hearing loss, which affects up to half of people over the age of 65.

New research led by the University of Leicester has examined the cellular mechanisms that underlie hearing loss and tinnitus triggered by exposure to loud sound.

It has demonstrated that physical changes in the myelin itself, the coating of the auditory nerve that carries sound signals to the brain, affect our ability to hear.

Dr Martine Hamann, Lecturer in Neurosciences at the University of Leicester, said: “People who suffer from hearing loss have difficulties in understanding speech, particularly when the environment is noisy and when other people are talking nearby.

“Understanding speech relies on fast transmission of auditory signals. Therefore it is important to understand how the speed of signal transmission gets decreased during hearing loss. Understanding these underlying phenomena means that it could be possible to find medicines to improve auditory perception, specifically in noisy backgrounds.”

The research, funded by Action on Hearing Loss and led by Leicester, was carried out in collaboration with Dr Angus Brown of the University of Nottingham. The paper, “Computational modelling of the effects of auditory nerve dysmyelination”, is published in Frontiers in Neuroanatomy.

Dr Ralph Holme, Head of Biomedical Research at Action on Hearing Loss, the only UK charity dedicated to funding research into hearing loss, said: “There is an urgent need for effective treatments to prevent hearing loss, a condition that affects 10 million people in the UK and all too often isolates people from friends and family. This research further increases our understanding of the biological consequences of exposure to loud noise, knowledge that we hope will lead to effective treatments for hearing loss within a generation.”

In previous work, the researchers showed that after exposure to loud sounds leading to hearing loss, the myelin coat surrounding the auditory nerve becomes thinner. An important property of auditory signal transmission is that electrical signals “jump” between the gaps separating one myelinated segment from the next. Those gaps, called nodes of Ranvier, become elongated after exposure to loud sound.

Dr Hamann said: “Although we showed that transmission of auditory signals (electrical signals transmitted along the auditory nerve) was slowed down after exposure to loud sound leading to hearing loss, the question remained: Is this due to the actual change of the physical properties of the myelin or is it due to the redistribution of channels occurring subsequent to those changes?

“This is a theoretical study in which we tested the hypothesis that myelin was the prime reason for the decreased signal transmission. We simulated how physical changes to the myelin and/or redistribution of ion channels influenced signal transmission along the auditory nerve. We found that the redistribution of channels had only a small effect on the conduction velocity, whereas physical changes to the myelin were primarily responsible for the effects.”
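Dr Hamann’s simulation idea can be sketched with a toy model of saltatory conduction, in which the signal’s speed depends on both the internodal (myelin) delay and the nodal (channel) delay. Every parameter below is an illustrative assumption, not a value from the paper:

```python
# Toy saltatory-conduction model: velocity over one myelinated segment.
# All delay constants are made up for illustration.

def conduction_velocity(internode_len_um=300.0, node_len_um=1.0,
                        myelin_factor=1.0, channel_factor=1.0):
    """Velocity (m/s) over one internode plus one node of Ranvier.

    myelin_factor  < 1 models a thinner myelin sheath (slower internode);
    channel_factor < 1 models sodium channels redistributed along an
    elongated node (slower nodal delay).
    """
    internode_delay_us = 0.02 * internode_len_um / myelin_factor
    node_delay_us = 1.0 / channel_factor
    seg_len_m = (internode_len_um + node_len_um) * 1e-6
    seg_time_s = (internode_delay_us + node_delay_us) * 1e-6
    return seg_len_m / seg_time_s

healthy = conduction_velocity()
thin_myelin = conduction_velocity(myelin_factor=0.5)
moved_channels = conduction_velocity(channel_factor=0.5)
```

In this toy setup, halving the myelin term slows conduction far more than halving the channel term, mirroring the paper’s conclusion that the physical myelin changes dominate.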

The research has shown for the first time a direct link between deficits in the myelin sheath surrounding the auditory nerve and hearing loss. “This research is innovative because data modelling (simulations) was applied to previous morphological data and established that physical changes to the myelin coat were the principal cause of the deficit,” said Dr Hamann.

“We have come closer to understanding the reasons behind deficits in auditory perception. This means that we can also get closer to targeting those deficits, for example by promoting myelin repair after acoustic trauma or during age-related hearing loss.”

Dr Hamann said the work will help with prevention as well as progress towards appropriate treatments for hearing loss, and possibly for the tinnitus that can develop from it.

“The sense of achievement comes from the fact that it could help ageing people to better understand their relatives on the phone,” said Dr Hamann.

The next step is to test drugs that could promote myelin repair and improve hearing after hearing loss.

(Source: www2.le.ac.uk)

Filed under hearing loss deafness myelin sheath auditory nerve aging neuroscience science

Researchers identify pattern of cognitive risks in some children with cochlear implants

Children with profound deafness who received a cochlear implant had up to five times the risk of delays in working memory, controlled attention, planning and conceptual learning as children with normal hearing, according to Indiana University research published May 22 in JAMA Otolaryngology-Head & Neck Surgery.

The authors evaluated 73 children implanted before age 7 and 78 children with normal hearing to determine the risk of deficits in executive functioning behaviors in everyday life.

Executive functioning, a set of mental processes involved in regulating and directing thinking and behavior, is important for focusing and attaining goals in daily life. All children in the study had average to above-average IQ scores. The results, reported in “Neurocognitive Risk in Children With Cochlear Implants,” are the first from a large-scale study to compare real-world executive functioning behavior in children with cochlear implants and those with normal hearing.

A cochlear implant device consists of an external component that processes sound into electrical signals that are sent to an internal receiver and electrodes that stimulate the auditory nerve. Although the device restores the ability to perceive many sounds to children who are born deaf, some details and nuances of hearing are lost in the process.

First author William Kronenberger, Ph.D., professor of clinical psychology in psychiatry at the IU School of Medicine and a specialist in neurocognitive and executive function testing, said that delays in executive functioning have been commonly reported by parents and others who work with children with cochlear implants. Based on these observations, his group sought to evaluate whether elevated risks of delays in executive functioning in children with cochlear implants exist, and what components of executive functioning were affected.

"In this study, about one-third to one-half of children with cochlear implants were found to be at-risk for delays in areas of parent-rated executive functioning such as concept formation, memory, controlled attention and planning. This rate was 2 to 5 times greater than that seen in normal-hearing children," reported Dr. Kronenberger, who also is co-chief of the ADHD-Disruptive Behavior Disorders Clinic and directs the psychology testing clinic at Riley Hospital for Children at IU Health.

"This is really innovative work," said co-author David B. Pisoni, Ph.D., director of the Speech Research Laboratory in the IU Department of Psychological and Brain Sciences. "Almost no one has looked at these issues in these children. Most audiologists, neuro-otologists, surgeons and speech-language pathologists — the people who work in this field — focus on the hearing deficit as a medical condition and have been less focused on the important discoveries in developmental science and cognitive neuroscience." Dr. Pisoni also is a Chancellors’ Professor of Psychological and Brain Sciences at IU Bloomington.

Richard Miyamoto, M.D., chair of the IU School of Medicine Department of Otolaryngology-Head and Neck Surgery and a pioneer in the field of cochlear implantation in children and adults, said this finding augments other research on interventions to help children with cochlear implants perform at a level similar to children without hearing deficits.

"The ultimate goal of our department’s research with cochlear implants has always been to influence higher-level neurocognitive functioning," Dr. Miyamoto said. "Much of the success we have seen to date clearly relates to the brain’s ability to process an incomplete signal. The current research will further assist in identifying gaps in our knowledge."

One possible answer may lie in earlier implantation, Dr. Miyamoto said. The age at which children are implanted has been steadily decreasing, which has produced significant improvement in spoken language outcomes. Research shows that earlier implantation is related to better outcomes in speech and understanding, and it is reasonable to believe that there may be less of a deficit in executive functioning with earlier implantation, said Dr. Miyamoto, who is the Arilla Spence DeVault Professor of Otolaryngology-Head and Neck Surgery and medical director of audiology and speech-language pathology at the IU School of Medicine.

Preschoolers in the IU study were implanted at an average age of 18 months, and they had fewer executive function delays than school-age children who were implanted 10 months later, at an average age of 28 months. 

Children in the study were divided into two age groups: preschool (3 to 5 years) and school-age (7 to 17 years). Using an established rating scale, parents rated executive function in everyday life for children with cochlear implants and for the control group with normal hearing.

"We compared parent ratings and looked at the percentage of children in each group who scored above a cut-off value that indicates at least a mild delay in executive functioning," Dr. Kronenberger said. "In the critical areas of controlled attention, working memory, planning and solving new problems, about 30 to 45 percent of the children with cochlear implants scored above the cut-off value, compared to about 15 percent or less of the children in the normal-hearing sample."
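The arithmetic behind those percentages is straightforward, and the roughly 15 percent control rate is what you would expect if the scale’s “mild delay” cut-off sits about one standard deviation above the population mean (an assumption for illustration; the article does not give the exact cut-off):

```python
# Sanity-check the quoted rates. The 1-SD cut-off is an assumption.
from statistics import NormalDist

# Fraction of a typical population scoring past a cut-off 1 SD above
# the mean; comes out near the ~15% seen in the normal-hearing sample.
control_rate = 1 - NormalDist().cdf(1.0)

# Relative risks implied by the quoted ranges:
risk_low = 0.30 / 0.15   # 2x: 30% of implanted children vs 15% of controls
risk_high = 0.45 / 0.09  # 5x: 45% vs controls at "15 percent or less"
```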

Dr. Kronenberger said the research also shows that many children develop average or better executive functioning skills after cochlear implantation.

"These results show that half or more of our group with cochlear implants did not have significant delays in executive functioning," Dr. Kronenberger said. "Cochlear implants produce remarkable gains in spoken language and other neurocognitive skills, but there is a certain amount of learning and catch-up that needs to take place with children who have experienced a hearing loss prior to cochlear implantation. So far, most of the interventions to help with this learning have focused on speech and language. Our findings show a need to identify and help some children in certain domains of executive functioning as well."

"We are now looking for early markers in children who are at risk before they get implants," Dr. Pisoni said. "It will be beneficial to identify as early as possible which children might be at risk for poor outcomes, and we need to understand the variability in the outcome and what can be done about it."

(Source: news.medicine.iu.edu)

Filed under cochlear implant deafness hearing loss working memory cognition children psychology neuroscience science

Brain Anatomy Differences Between Deaf, Hearing Depend on First Language Learned

In the first known study of its kind, researchers have shown that the language we learn as children affects brain structure, as does hearing status. The findings are reported in The Journal of Neuroscience.

While research has shown that deaf and hearing people differ in brain anatomy, these studies have been limited to individuals who are deaf and have used American Sign Language (ASL) from birth. But 95 percent of the deaf population in America is born to hearing parents and uses English or another spoken language as a first language, usually through lip-reading. Since language and audition are housed in nearby locations in the brain, understanding which differences are attributable to hearing and which to language is critical to understanding the mechanisms by which experience shapes the brain.

“What we’ve learned to date about differences in brain anatomy in hearing and deaf populations hasn’t taken into account the diverse language experiences among people who are deaf,” says senior author Guinevere Eden, DPhil, director for the Center for the Study of Learning at Georgetown University Medical Center (GUMC).

Eden and her colleagues report on a new structural brain imaging study showing that, in addition to deafness, early language experience (English versus ASL) shapes brain structure. Half of the hearing adults and half of the deaf participants in the study had learned ASL as children from their deaf parents, while the other half had grown up using English with their hearing parents.

“We found that our deaf and hearing participants, irrespective of language experience, differed in the volume of brain white matter in their auditory cortex. But, we also found differences in left hemisphere language areas, and these differences were specific to those whose native language was ASL,” Eden explains.

The research team, which includes Daniel S. Koo, PhD, and Carol J. LaSasso, PhD, of Gallaudet University in Washington, say their findings should impact studies of brain differences in deaf and hearing people going forward.

“Prior research studies comparing brain structure in individuals who are deaf and hearing attempted to control for language experience by only focusing on those who grew up using sign language,” explains Olumide Olulade, PhD, the study’s lead author and post-doctoral fellow at GUMC. “However, restricting the investigation to a small minority of the deaf population means the results can’t be applied to all deaf people.”

(Image: iStockphoto)

Filed under brain structure language hearing auditory cortex deafness neuroscience science

Improved Hearing Anticipated for Implant Recipients

The cochlear implant is widely considered to be the most successful neural prosthetic on the market. The implant, which helps deaf individuals perceive sound, translates auditory information into electrical signals that go directly to the brain, bypassing damaged cells that can no longer perform that job.

According to the National Institute on Deafness and Other Communication Disorders, approximately 188,000 people worldwide have received cochlear implants since these devices were introduced in the early 1980s, including roughly 41,500 adults and 25,500 children in the United States.

Despite their prevalence, cochlear implants have a long way to go before their performance is comparable to that of the intact human ear. Led by Pamela Bhatti, Ph.D., a team of researchers at the Georgia Institute of Technology has developed a new type of interface between the device and the brain that could dramatically improve the sound quality of the next generation of implants.

A normal ear processes sound the way a Rube Goldberg machine flips a light switch — via a perfectly-timed chain reaction involving a number of pieces and parts. First, sound travels down the canal of the outer ear, striking the eardrum and causing it to vibrate. The vibration of the eardrum causes small bones in the middle ear to vibrate, which in turn, creates movement in the fluid of the inner ear, or cochlea. This causes movement in tiny structures called hair cells, which translate the movement into electrical signals that travel to the brain via the auditory nerve.

Dysfunctional hair cells are the most common culprit in a type of hearing loss called sensorineural deafness, named for the resulting breakdown in communication between the ear and the brain. Sometimes the hair cells don’t function properly from birth, but severe trauma or a bad infection can cause irreparable damage to these delicate structures as well.

Contemporary cochlear implants

Traditional hearing aids, which work by amplifying sound, rely on the presence of some functioning hair cells. A cochlear implant, on the other hand, bypasses the hair cells completely. Rather than restoring function, it works by translating sound vibrations captured by a microphone outside the ear into electrical signals. These signals are transmitted to the brain by the auditory nerve, which interprets them as sound.

Cochlear implants are only recommended for individuals with severe to profound sensorineural hearing loss, meaning those who aren’t able to hear sounds below 70 decibels. (Conversational speech typically occurs between 20 and 60 decibels.)

The device itself consists of an external component that attaches via a magnetic disk to an internal component, implanted under the skin behind the ear. The external component detects sounds and selectively amplifies speech. The internal component converts this information into electrical impulses, which are sent to a bundle of thin wire electrodes threaded through the cochlea.
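That external-to-internal chain can be sketched as a crude filterbank: split the microphone signal into frequency bands and hand each band’s energy to one electrode. Real processors use far more sophisticated coding strategies; the plain DFT and equal-width bands below are simplifying assumptions for illustration:

```python
# Crude sketch of a cochlear-implant processor: per-band energies,
# one band per electrode. Equal-width DFT-bin bands are an assumption.
import math

def band_energies(samples, n_electrodes=12):
    """Group DFT-bin powers into n_electrodes equal-width bands."""
    n = len(samples)
    powers = []
    for k in range(n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        powers.append(re * re + im * im)
    width = max(1, len(powers) // n_electrodes)
    return [sum(powers[b * width:(b + 1) * width]) for b in range(n_electrodes)]

# A pure 1 kHz tone (sampled at 16 kHz) should drive mostly one low band:
tone = [math.sin(2 * math.pi * 1000 * t / 16000) for t in range(256)]
levels = band_energies(tone)
```

Each entry of `levels` would set the stimulation strength of one electrode, which is the sense in which every electrode “conveys a signal for a different pitch.”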

Improving the interface

As an electrical engineer, Bhatti sees the current electrode configuration as a significant barrier to clear sound transmission in the current device.

"In an intact ear, the hair cells are plentiful, and are in close contact with the nerves that transmit sound information to the brain," says Bhatti. "The challenge with the implant is getting efficient coupling between the electrodes and the nerves."

Contemporary implants contain between 12 and 22 wire electrodes, each of which conveys a signal for a different pitch. The idea is the more electrodes, the clearer the message.

So why not add more wire electrodes to the current design and call it a day?

Much like house-hunting in New York City, the problem comes down to a serious lack of available real estate. At its widest, the cochlea is 2 millimeters in diameter, or about the thickness of a nickel. As it coils, it tapers down to a mere 200 micrometers, about the width of a human hair.
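Those dimensions also determine which pitch each electrode ends up representing. The Greenwood function, a standard fit for the human cochlea, maps position along the roughly 35 mm basilar membrane to characteristic frequency; the 22-electrode spacing below is an illustrative assumption, since real arrays are inserted from the base and stop well short of the apex:

```python
# Greenwood frequency-position map, using the standard human constants.

def greenwood_hz(x):
    """Characteristic frequency (Hz) at fractional distance x from the apex."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

# Hypothetical 22-electrode array covering the basal two-thirds of the duct:
electrodes_hz = [greenwood_hz(1 / 3 + (2 / 3) * e / 21) for e in range(22)]
```

With this assumed spacing the deepest electrode sits near 700 Hz and the basal-most near 20 kHz, which hints at why very low pitches are among the hardest for implants to convey.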

"While we’d like to be able to increase the number of electrodes, the space issue is a major challenge from an engineering perspective," says Bhatti.

With funding from the National Science Foundation, Bhatti and her team have developed a new, thin-film, electrode array that is up to three times more sensitive than traditional wire electrodes, without adding bulk.

Unlike wire electrodes, the new array is also flexible, meaning it can get closer to the inner wall of the cochlea. The researchers believe this will create better coupling between the array and the nervous system, leading to a crisper signal.

According to Bhatti, one of the biggest challenges is actually implanting the device into the spiral-shaped cochlea:

"We could have created the best array in the world, but it wouldn’t have mattered if the surgeon couldn’t get it in the right spot," says Bhatti.

To combat this problem, the team has invented an insertion device that protects the array and serves as a guide for surgeons to ensure proper placement.

Before the new array is approved for use in humans, it will need to undergo rigorous testing to ensure that it is both safe and effective; however, Bhatti is already thinking about what’s next. She envisions that one day the electrodes won’t need to be attached to an array at all. Instead, they will be anchored directly to the cochlea with a biocompatible material that will allow them to integrate more seamlessly with the brain.

The most important thing, according to Bhatti, is not to lose sight of the big picture.

"We are always designing with the end-user in mind," says Bhatti. "The human component is the most important one to consider when we translate science into practice."

Filed under cochlear implants prosthetics auditory nerve hair cells deafness neuroscience science

How Neuroscience Will Fight Five Age-Old Afflictions
SEIZURES
A device delivers targeted drugs to calm overactive neurons
For years, large clinical trials have treated people with epilepsy using so-called deep-brain stimulation: surgically implanted electrodes that can detect a seizure and stop it with an electrical jolt. The technology leads to a 69 percent reduction in seizures after five years, according to the latest results.
Tracy Cui, a biomedical engineer at the University of Pittsburgh, hopes to improve upon that statistic. Her group has designed an electrode that would deliver both an electrical pulse and antiseizure medication. “We know where we want to apply the drug,” Cui says, “so you would not need a lot of it.”
To build the device, Cui’s team immersed a metal electrode in a solution containing two key ingredients: a molecule called a monomer and the drug CNQX. Zapping the solution with electricity causes the monomers to link together and form a long chain called a polymer. Because the polymer is positively charged, it attracts the negatively charged CNQX, leaving the engineers with their target product: an electrode coated in a film that’s infused with the drug.
The researchers then placed the electrodes in a petri dish with rat neurons. Another zap of electricity disrupted the electrostatic attraction in the film, causing the polymer to release its pharmacological payload—and nearby cells to quiet their erratic firing patterns. Cui says her team has successfully repeated the experiment in living rats. Next, she’d like to test the electrodes in epileptic rats and then begin the long process of regulatory approval for human use.
The blood-brain barrier protects the brain from everything but the smallest molecules, rendering most drugs ineffective. Because the coated electrode releases its payload exactly where it is needed, the same drug-delivery mechanism could treat other brain disorders, Cui says. The electrodes can be loaded with any kind of small drug, such as dopamine or painkillers, making the approach useful for treating Parkinson’s disease, chronic pain, or even drug addiction.
DEMENTIA
Electrode arrays stimulate mental processing
Dementia is one of the most well-known and frustrating brain afflictions. It damages many of the fundamental cognitive functions that make us human: working memory, decision-making, language, and logical reasoning. Alzheimer’s, Huntington’s, and Parkinson’s diseases all lead to dementia, and it’s also sometimes associated with multiple sclerosis, AIDS, and the normal process of aging.
Theodore Berger, a biomedical engineer at the University of Southern California, hopes to help people stave off the symptoms of dementia with a device implanted in the brain’s prefrontal cortex, a region crucial for sophisticated cognition. He and colleagues at Wake Forest Baptist Medical Center tested the device in a study involving five monkeys and a memory game.
First the team implanted an electrode array so that it could record from layers 2/3 and 5 of the prefrontal cortex and stimulate layer 5. The neural signals that jet back and forth between these areas relate to attention and decision-making. The team then trained the monkeys to play a computer game in which they saw a cartoon picture—such as a truck, lion, or paint palette—and had to select the same image from a panel of pictures 90 seconds later.
The scientists initially analyzed the electrical signals sent between the two cortical layers when the monkeys made a correct match. In later experiments, the team caused the array to emit the same signal just before the monkey made its decision. The animals’ accuracy improved by about 10 percent. That effect may be even more profound in an impaired brain. When the monkeys played the same game after receiving a hit of cocaine, their performance dropped by about 20 percent. But electrical stimulation restored their accuracy to normal levels.
Dementia involves far more complicated circuitry than these two layers of the brain. But once scientists better understand exactly how dementia works, it may be possible to combine several implants, each targeting a specific region.
BLINDNESS
Gene therapy converts cells into photoreceptors, restoring eyesight
Millions of people lose their eyesight when disease damages the photoreceptor cells in their retinas. These cells, called rods and cones, play a pivotal role in vision: They convert incoming light into electrical impulses that the brain interprets as an image.
In recent years, a handful of companies have developed electrode-array implants that bypass the damaged cells. A microprocessor translates information from a video camera into electric pulses that stimulate the retina; as a result, blind subjects in clinical trials have been able to distinguish objects and even read very large type. But the implanted arrays have one big drawback: They stimulate only a small number of retinal cells—about 60 out of 100,000—which ultimately limits a person’s visual resolution.
A gene therapy being developed by Michigan-based RetroSense could replace thousands of damaged retinal cells. The company’s technology targets the layer of the retina containing ganglion cells. Normally, ganglion cells transmit the electric signal from the rods and cones to the brain. But RetroSense inserts a gene that makes the ganglion cells sensitive to light; they take over the job of the photoreceptors. So far, scientists have successfully tested the technology on rodents and monkeys. In rat studies, the gene therapy allowed the animals to see well enough to detect the edge of a platform as they neared it.
The company plans to launch the first clinical trial of the technology next year, with nine subjects blinded by a disease called retinitis pigmentosa. Unlike the surgeries to implant electrode arrays, the procedure to inject gene therapy will take just minutes and requires only local anesthesia. “The visual signal that comes from the ganglion cells may not be encoded in exactly the fashion that they’re used to,” says Peter Francis, chief medical officer of RetroSense. “But what is likely to happen is that their brain is going to adapt.”
PARALYSIS
A brain-machine interface controls limbs while sensing what they touch
Last year, clinical trials involving brain implants gave great hope to people with severe spinal cord injuries. Two paralyzed subjects imagined picking up a cup of coffee. Electrode arrays decoded those neural instructions in real time and sent them to a robotic arm, which brought the coffee to their lips.
But to move limbs with any real precision, the brain also requires tactile feedback. Miguel Nicolelis, a biomedical engineer at Duke University, has now demonstrated that brain-machine interfaces can simultaneously control motion and relay a sense of touch—at least in virtual reality.
For the experiment, Nicolelis’s team inserted electrodes in two brain areas in monkeys: the motor cortex, which controls movement, and the nearby somatosensory cortex, which interprets touch signals from the outside world. Then the monkeys played a computer game in which they controlled a virtual arm—first by using a joystick and eventually by simply imagining the movement. The arm could touch three identical-looking gray circles. But each circle had a different virtual “texture” that sent a distinct electrical pattern to the monkeys’ somatosensory cortex. The monkeys learned to select the texture that produced a treat, proving that the implant was both sending and receiving neural messages.
This year, a study in Brazil will test the ability of 10 to 20 patients with spinal cord injuries to control an exoskeleton using the implant. Nicolelis, an ardent fan of Brazilian soccer, has set a strict timetable for his team: A nonprofit consortium he created, the Walk Again Project, plans to outfit a paraplegic man with a robotic exoskeleton and take him to the 2014 World Cup in São Paulo, where he will deliver the opening kick.
DEAFNESS
Stem cells repair a damaged auditory nerve, improving hearing
Over the past 25 years, more than 30,000 people with hearing loss have received an electronic implant that replaces the cochlea, the snail-shaped organ in the inner ear whose cells transform sound waves into electrical signals. The device acts as a microphone, picking up sounds from the environment and transmitting them to the auditory nerve, which carries them on to the brain.
But a cochlear implant won’t help the 10 percent of people whose profound hearing loss is caused by damage to the auditory nerve. Fortunately for this group, a team of British scientists has found a way to restore that nerve using stem cells.
The researchers exposed human embryonic stem cells to growth factors, substances that cause them to differentiate into the precursors of auditory neurons. Then they injected some 50,000 of these cells into the cochleas of gerbils whose auditory nerves had been damaged. (Gerbils are often used as models of deafness because their range of hearing is similar to that of people.) Three months after the transplant, about one third of the original number of auditory neurons had been restored; some appeared to form projections that connected to the brain stem. The animals’ hearing improved, on average, by 46 percent.
It will be years before the technique is tested in humans. Once it is, researchers say, it has the potential to help not only those with nerve damage but also people with more widespread impairment whose auditory nerve must be repaired in order to receive a cochlear implant.

How Neuroscience Will Fight Five Age-Old Afflictions

SEIZURES

A device delivers targeted drugs to calm overactive neurons

For years, large clinical trials have treated people with epilepsy using so-called deep-brain stimulation: surgically implanted electrodes that can detect a seizure and stop it with an electrical jolt. The technology leads to a 69 percent reduction in seizures after five years, according to the latest results.

Tracy Cui, a biomedical engineer at the University of Pittsburgh, hopes to improve upon that statistic. Her group has designed an electrode that would deliver both an electrical pulse and antiseizure medication. “We know where we want to apply the drug,” Cui says, “so you would not need a lot of it.”

To build the device, Cui’s team immersed a metal electrode in a solution containing two key ingredients: a molecule called a monomer and the drug CNQX. Zapping the solution with electricity causes the monomers to link together and form a long chain called a polymer. Because the polymer is positively charged, it attracts the negatively charged CNQX, leaving the engineers with their target product: an electrode coated in a film that’s infused with the drug.

The researchers then placed the electrodes in a petri dish with rat neurons. Another zap of electricity disrupted the electrostatic attraction in the film, causing the polymer to release its pharmacological payload—and nearby cells to quiet their erratic firing patterns. Cui says her team has successfully repeated the experiment in living rats. Next, she’d like to test the electrodes in epileptic rats and then begin the long process of regulatory approval for human use.

The blood-brain barrier protects the organ from everything but the smallest molecules, rendering most drugs ineffective. A local drug-delivery mechanism like this one could therefore treat other brain disorders, Cui says. The electrodes can be loaded with any kind of small drug—like dopamine or painkillers—making them useful for treating Parkinson’s disease, chronic pain, or even drug addiction.

DEMENTIA

Electrode arrays stimulate mental processing

Dementia is one of the best-known and most frustrating brain afflictions. It damages many of the fundamental cognitive functions that make us human: working memory, decision-making, language, and logical reasoning. Alzheimer’s, Huntington’s, and Parkinson’s diseases can all lead to dementia, and it is also sometimes associated with multiple sclerosis, AIDS, and the normal process of aging.

Theodore Berger, a biomedical engineer at the University of Southern California, hopes to help people stave off the symptoms of dementia with a device implanted in the brain’s prefrontal cortex, a region crucial for sophisticated cognition. He and colleagues at Wake Forest Baptist Medical Center tested the device in a study involving five monkeys and a memory game.

First the team implanted an electrode array so that it could record from layers 2/3 and 5 of the prefrontal cortex and stimulate layer 5. The neural signals that jet back and forth between these areas relate to attention and decision-making. The team then trained the monkeys to play a computer game in which they saw a cartoon picture—such as a truck, lion, or paint palette—and had to select the same image from a panel of pictures 90 seconds later.

The scientists initially analyzed the electrical signals sent between the two cortical layers when the monkeys made a correct match. In later experiments, the team caused the array to emit the same signal just before the monkey made its decision. The animals’ accuracy improved by about 10 percent. That effect may be even more profound in an impaired brain. When the monkeys played the same game after receiving a hit of cocaine, their performance dropped by about 20 percent. But electrical stimulation restored their accuracy to normal levels.

Dementia involves far more complicated circuitry than these two cortical layers. But once scientists better understand exactly how dementia works, it may be possible to combine several implants, each targeting a specific region.

BLINDNESS

Gene therapy converts cells into photoreceptors, restoring eyesight

Millions of people lose their eyesight when disease damages the photoreceptor cells in their retinas. These cells, called rods and cones, play a pivotal role in vision: They convert incoming light into electrical impulses that the brain interprets as an image.

In recent years, a handful of companies have developed electrode-array implants that bypass the damaged cells. A microprocessor translates information from a video camera into electric pulses that stimulate the retina; as a result, blind subjects in clinical trials have been able to distinguish objects and even read very large type. But the implanted arrays have one big drawback: They stimulate only a small number of retinal cells—about 60 out of 100,000—which ultimately limits a person’s visual resolution.
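The resolution limit follows directly from the article’s own numbers. A back-of-envelope sketch (assuming, purely for illustration, that the cells and the electrodes form square grids):

```python
import math

RETINAL_CELLS = 100_000   # cells in the stimulated region (figure from the article)
ELECTRODES = 60           # electrodes in current implant arrays (figure from the article)

# Each electrode drives a large patch of cells at once, so any spatial
# detail finer than the patch is lost.
cells_per_electrode = RETINAL_CELLS // ELECTRODES   # ~1,666 cells share one "pixel"

# If both were square grids, the linear resolution would drop accordingly:
full_side = math.isqrt(RETINAL_CELLS)   # ~316 "pixels" per side with healthy cells
implant_side = math.isqrt(ELECTRODES)   # ~7 per side with the implant
```

On those assumptions the implant offers roughly a 7-by-7 image where the retina once delivered hundreds of pixels per side, which is why trial subjects can distinguish objects and very large type but little more.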

A gene therapy being developed by Michigan-based RetroSense could replace thousands of damaged retinal cells. The company’s technology targets the layer of the retina containing ganglion cells. Normally, ganglion cells transmit the electric signal from the rods and cones to the brain. But RetroSense inserts a gene that makes the ganglion cells sensitive to light; they take over the job of the photoreceptors. So far, scientists have successfully tested the technology on rodents and monkeys. In rat studies, the gene therapy allowed the animals to see well enough to detect the edge of a platform as they neared it.

The company plans to launch the first clinical trial of the technology next year, with nine subjects blinded by a disease called retinitis pigmentosa. Unlike the surgeries to implant electrode arrays, the procedure to inject gene therapy will take just minutes and requires only local anesthesia. “The visual signal that comes from the ganglion cells may not be encoded in exactly the fashion that they’re used to,” says Peter Francis, chief medical officer of RetroSense. “But what is likely to happen is that their brain is going to adapt.”

PARALYSIS

A brain-machine interface controls limbs while sensing what they touch

Last year, clinical trials involving brain implants gave great hope to people with severe spinal cord injuries. Two paralyzed subjects imagined picking up a cup of coffee. Electrode arrays decoded those neural instructions in real time and sent them to a robotic arm, which brought the coffee to their lips.

But to move limbs with any real precision, the brain also requires tactile feedback. Miguel Nicolelis, a biomedical engineer at Duke University, has now demonstrated that brain-machine interfaces can simultaneously control motion and relay a sense of touch—at least in virtual reality.

For the experiment, Nicolelis’s team inserted electrodes in two brain areas in monkeys: the motor cortex, which controls movement, and the nearby somatosensory cortex, which interprets touch signals from the outside world. Then the monkeys played a computer game in which they controlled a virtual arm—first by using a joystick and eventually by simply imagining the movement. The arm could touch three identical-looking gray circles. But each circle had a different virtual “texture” that sent a distinct electrical pattern to the monkeys’ somatosensory cortex. The monkeys learned to select the texture that produced a treat, proving that the implant was both sending and receiving neural messages.

This year, a study in Brazil will test the ability of 10 to 20 patients with spinal cord injuries to control an exoskeleton using the implant. Nicolelis, an ardent fan of Brazilian soccer, has set a strict timetable for his team: A nonprofit consortium he created, the Walk Again Project, plans to outfit a paraplegic man with a robotic exoskeleton and take him to the 2014 World Cup in São Paulo, where he will deliver the opening kick.

DEAFNESS

Stem cells repair a damaged auditory nerve, improving hearing

Over the past 25 years, more than 30,000 people with hearing loss have received an electronic implant that takes over the job of the cochlea, the snail-shaped organ in the inner ear whose cells transform sound waves into electrical signals. The device picks up sounds from the environment with a microphone and converts them into electrical pulses delivered to the auditory nerve, which carries them on to the brain.
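As a rough illustration of the principle (not the actual processing chain of any commercial device), the sketch below splits a sound snippet into a few frequency bands and maps each band’s energy to a compressed stimulation level. The band centers, the Goertzel filterbank, and the logarithmic compression are all simplifying assumptions:

```python
import math

def goertzel_power(samples, sample_rate, freq):
    """Power of one frequency component, via the Goertzel algorithm."""
    n = len(samples)
    k = int(0.5 + n * freq / sample_rate)      # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def stimulation_levels(samples, sample_rate, band_centers):
    """Map per-band acoustic power to compressed levels in [0, 1]."""
    powers = [goertzel_power(samples, sample_rate, f) for f in band_centers]
    peak = max(powers) or 1.0
    # Logarithmic compression, loosely mimicking loudness perception.
    return [math.log10(1 + 9 * p / peak) for p in powers]

# A pure 1 kHz tone should drive mainly the 1 kHz channel.
rate = 16000
tone = [math.sin(2 * math.pi * 1000 * t / rate) for t in range(512)]
levels = stimulation_levels(tone, rate, [500, 1000, 2000, 4000])
```

Running this on the test tone yields a strong level on the 1 kHz channel and near-zero levels elsewhere, which is the essential trick: each electrode along the cochlea is driven by the energy in its own slice of the spectrum.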

But a cochlear implant won’t help the 10 percent of people whose profound hearing loss is caused by damage to the auditory nerve. Fortunately for this group, a team of British scientists has found a way to restore that nerve using stem cells.

The researchers exposed human embryonic stem cells to growth factors, substances that cause them to differentiate into the precursors of auditory neurons. Then they injected some 50,000 of these cells into the cochleas of gerbils whose auditory nerves had been damaged. (Gerbils are often used as models of deafness because their range of hearing is similar to that of people.) Three months after the transplant, about one third of the original number of auditory neurons had been restored; some appeared to form projections that connected to the brain stem. The animals’ hearing improved, on average, by 46 percent.
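Hearing gains like this are typically measured as a drop in auditory brainstem response (ABR) thresholds, and a percentage can be read as the fraction of the lost sensitivity that was won back. A minimal sketch of that calculation; the threshold values here are hypothetical, chosen only to reproduce the reported 46 percent:

```python
def fractional_recovery(normal_db, damaged_db, treated_db):
    """Share of the threshold shift that treatment reversed (0 = none, 1 = full)."""
    shift = damaged_db - normal_db       # sensitivity lost to the damage
    regained = damaged_db - treated_db   # sensitivity won back by treatment
    return regained / shift

# Hypothetical ABR thresholds in decibels; lower threshold = better hearing.
recovery = fractional_recovery(normal_db=20, damaged_db=70, treated_db=47)
```

On these invented numbers the animals recover 23 dB of a 50 dB loss, i.e. 46 percent; the actual study’s thresholds are not given in this article.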

It will be years before the technique is tested in humans. Once it is, researchers say, it has the potential to help not only those with nerve damage but also people with more widespread impairment whose auditory nerve must be repaired in order to receive a cochlear implant.

Filed under seizures dementia blindness paralysis deafness neuroscience medicine science

91 notes

FDA Approves Clinical Trial of Auditory Brainstem Implant Procedure for Children in the U.S.
L.A.-based House Research Institute and Children’s Hospital Los Angeles announced today that the United States Food and Drug Administration (FDA) has given final approval to begin a clinical trial of an Auditory Brainstem Implant (ABI) procedure for children. The trial is a surgical collaboration sponsored by the House Research Institute in partnership with Children’s Hospital Los Angeles and Vittorio Colletti, MD of the University of Verona Hospital, Verona, Italy.
The ABI was developed at the House Research Institute and is the world’s first successful prosthetic hearing device to stimulate neurons directly at the human brainstem, bypassing the inner ear and hearing nerve entirely. Since the procedure began, more than 1,000 adults worldwide have received the ABI, with surgeons at the House Clinic leading the way.
“This will be the first FDA-approved trial of its kind, and represents a major step forward to bring a sense of hearing to deaf children in the U.S. who are born without a hearing nerve or cochlea (hearing organ) and therefore are unable to benefit from hearing aids or cochlear implants,” said Neil Segil, Ph.D., executive vice president for research, House Research Institute. “Since its development at the House Research Institute in 1979 by Drs. William House and William Hitselberger, the ABI has been successful in providing a sense of sound to many adults in the U.S.; however, it has never been approved by the FDA for treating deafness in children. This study has the potential to expand the use of this remarkable device, which represents the only effective sensory prosthetic for direct brain stimulation in use today.”
The Pediatric ABI team includes physicians and researchers from the House Research Institute, including Eric Wilkinson, MD, Laurie Eisenberg, Ph.D., Robert Shannon, Ph.D.; Marc Schwartz, MD; Laurel Fisher, Ph.D.; Steve Otto, M.A., and Margaret Winter, M.S., as well as Children’s Hospital Los Angeles’ Mark Krieger, MD and Gordon McComb, MD; and Verona Hospital’s Vittorio Colletti, MD; Marco Carner, MD; and Liliana Colletti, Ph.D.
“We’re excited to have reached this milestone and look forward to being able to offer this amazing technology to children in the United States who currently have no other option for hearing rehabilitation,” said Eric Wilkinson, MD, co-principal investigator and lead physician for the clinical trial.

Filed under brain implants Auditory Brainstem Implant prosthetics hearing device deafness science

175 notes

Diary of becoming an NHS-funded cyborg
From the day I was born, my brain developed according to the stimuli it received. My senses of vision, touch, taste, smell were all slightly heightened in compensation for the lack of input from my ears, helping me to create a world I could understand.
My mother worked full time with me, playing a set of activities she called “the game”. I was a child, and didn’t understand the real reason for playing the game — but it taught me to read, write, lipread, and speak, if not to hear in the traditional sense of the word. What I do hear is filtered through digital hearing aids that amplify what little sound I can hear.
A month ago, for the first time, I made the change from external technology to internal technology. I became a full time cyborg, free of charge on the NHS.
They cut away a flap of skin behind my left ear, drilled a tiny hole into my skull between the two main nerves of the face that control taste and facial movement, and inserted an electrode into my cochlea, connected to a small magnet and circuit board under the skin.
They’re going to switch me on in a few days — and if it’s all working as it should, my auditory cortex will be bombarded by a range of electronic noises. Over time, I may come to understand these sounds as consonants, music, even the spoken word.
This is what it will sound like, apparently.
Even if I can make sense of those sounds, it won’t be “hearing” in the normal sense of the word. My ears have had the same level of input for the last 30 years of my life — and now I’ve physically rewired one of them to receive a completely different signal.
In all the recent blue sky thinking on Wired.co.uk and elsewhere about the future of the human race — coprocessors for the brain, enhanced spectrum bionic eyes, artificial legs, even the possibility of interfacing with computers directly — people forget one thing. What it feels like, what it’s like to live with it every day, whether it makes you feel more, or less, yourself.
I’m also wary of augmentation and body enhancement becoming the norm. We have a fluid definition of what a disability is, and what isn’t. If certain people with access to this technology start engineering themselves to have greater physical or mental abilities, then where does that leave ordinary people? Differently abled? Disabled? Or in fact more abled? In giving up perfectly usable eyes, the end result of millions of years of evolution, to install digital eyes that can project images onto the retina, are we really putting ourselves at an advantage?
If I’d been born into a deaf family, all of us signing, my brain developing to become fluent in sign language and developing a deaf identity so strong and complete that I saw deafness as “normal” and hearing as “abnormal” — I wouldn’t have had this implant.
The cochlear implant, in crossing the line from external wearable technology to permanent fixture, becomes a technology that is potentially in conflict with human values, rather than a testament to them. Many deaf people see the cochlear implant as a symbol of medical intervention, to oppress and ultimately eradicate the deaf community and deaf culture, by fixing them one implant at a time — this includes implanting children at an early age so that they’ll be able to acquire spoken language rather than sign.

Filed under auditory cortex cochlear implant hearing loss deafness neuroscience science

164 notes





Scripps Research Institute Scientists Identify Molecules in the Ear that Convert Sound into Brain Signals
For scientists who study the genetics of hearing and deafness, finding the exact genetic machinery in the inner ear that responds to sound waves and converts them into electrical impulses, the language of the brain, has been something of a holy grail.
Now this quest has come to fruition. Scientists at The Scripps Research Institute (TSRI) in La Jolla, CA, have identified a critical component of this ear-to-brain conversion—a protein called TMHS. This protein is a component of the so-called mechanotransduction channels in the ear, which convert the signals from mechanical sound waves into electrical impulses transmitted to the nervous system.
“Scientists have been trying for decades to identify the proteins that form mechanotransduction channels,” said Ulrich Mueller, PhD, a professor in the Department of Cell Biology and director of the Dorris Neuroscience Center at TSRI who led the new study, described in the December 7, 2012 issue of the journal Cell.
Not only have the scientists finally found a key protein in this process, but the work also suggests a promising new approach to gene therapy. In the laboratory, the scientists were able to place functional TMHS into the sound-sensing cells of newborn deaf mice, restoring the cells’ function. “In some forms of human deafness, there may be a way to stick these genes back in and fix the cells after birth,” said Mueller.
TMHS appears to be the direct link between the spring-like mechanism in the inner ear that responds to sound and the machinery that shoots electrical signals to the brain. When the protein is missing in mice, these signals are not sent to their brains and they cannot perceive sound.
Mutant forms of this protein have previously been found in people with common inherited forms of deafness, and this discovery appears to be the first explanation of how those genetic variations cause hearing loss.

Filed under hearing loss deafness sound waves electrical impulses inner ear hair cells neuroscience science

68 notes


Gene That Causes a Form of Deafness Discovered
Researchers at the University of Cincinnati and Cincinnati Children’s Hospital Medical Center have found a new genetic mutation responsible for deafness and hearing loss associated with Usher syndrome type 1.
These findings, published in the Sept. 30 advance online edition of the journal Nature Genetics, could help researchers develop new therapeutic targets for those at risk for this syndrome.
Partners in the study included the National Institute on Deafness and other Communication Disorders (NIDCD), Baylor College of Medicine and the University of Kentucky.
Usher syndrome is a genetic defect that causes deafness, night-blindness and a loss of peripheral vision through the progressive degeneration of the retina.

(Image credit: GETTY)

Filed under brain hearing hearing loss deafness genetics neuroscience science

38 notes

Deaf girl fitted with bionic ear speaks her first word
Evie was born profoundly deaf but it was not until she was 16 months old that tests revealed she had no hearing nerves, meaning an auditory brainstem implant - or bionic ear - was her only chance of ever hearing.
The 23-month-old has Oculo-Auriculo-Vertebral Syndrome (OAV), a very rare condition with no known cause, which affects the eyes, ears and spine.

Filed under OAV bionic ear deafness hearing implants neuroscience auditory brainstem implantation science
