Posts tagged auditory cortex

Researchers Discover Link Between Fear and Sound Perception
Anyone who’s ever heard a Beethoven sonata or a Beatles song knows how powerfully sound can affect our emotions. But it can work the other way as well – our emotions can actually affect how we hear and process sound. When certain types of sounds become associated in our brains with strong emotions, hearing similar sounds can evoke those same feelings, even far removed from their original context. It’s a phenomenon commonly seen in combat veterans suffering from posttraumatic stress disorder (PTSD), in whom harrowing memories of the battlefield can be triggered by something as common as the sound of thunder. But the brain mechanisms responsible for creating those troubling associations remain unknown. Now, a pair of researchers from the Perelman School of Medicine at the University of Pennsylvania has discovered how fear can actually increase or decrease the ability to discriminate among sounds depending on context, providing new insight into the distorted perceptions of victims of PTSD. Their study is published in Nature Neuroscience.
“Emotions are closely linked to perception and very often our emotional response really helps us deal with reality,” says senior study author Maria N. Geffen, PhD, assistant professor of Otorhinolaryngology: Head and Neck Surgery and Neuroscience at Penn. “For example, a fear response helps you escape potentially dangerous situations and react quickly. But there are also situations where things can go wrong in the way the fear response develops. That’s what happens in anxiety and also in PTSD — the emotional response to the events is generalized to the point where the fear response starts getting developed to a very broad range of stimuli.”
Geffen and the study’s first author, Mark Aizenberg, PhD, a postdoctoral researcher in her laboratory, used emotional conditioning in mice to investigate how hearing acuity (the ability to distinguish between tones of different frequencies) can change following a traumatic event, a process known as emotional learning. In these experiments, which are based on classical (Pavlovian) conditioning, animals learn to distinguish between potentially dangerous and safe sounds — called “emotional discrimination learning.” This type of conditioning tends to produce relatively imprecise learning, so Aizenberg and Geffen designed a series of learning tasks of varying difficulty, intended to create progressively greater emotional discrimination in the mice. What really interested them was how different levels of emotional discrimination would affect hearing acuity — in other words, how emotional responses affect the perception and discrimination of sounds. The study establishes a link between emotional learning and perception of the world that had not previously been demonstrated.
The researchers found that, as expected, fine emotional learning tasks produced greater learning specificity than tests in which the tones were farther apart in frequency. As Geffen explains, “The animals presented with sounds that were very far apart generalize the fear that they developed to the danger tone over a whole range of frequencies, whereas the animals presented with the two sounds that were very similar exhibited specialization of their emotional response. Following the fine conditioning task, they figured out that it’s a very narrow range of pitches that are potentially dangerous.”
When pitch discrimination abilities were measured in the animals, the mice with more specific responses displayed much finer auditory acuity than the mice that were frightened by a broader range of frequencies. “There was a relationship between how much their emotional response generalized and how well they could tell different tones apart,” says Geffen. “In the animals that specialized their emotional response, pitch discrimination actually became sharper. They could discriminate two tones that they previously could not tell apart.”
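The relationship between generalization and acuity can be pictured with a toy model (our illustration with invented numbers, not the study’s analysis): if fear follows a Gaussian generalization gradient around the danger tone, a narrower gradient leaves a larger response difference between nearby tones — that is, finer discrimination.

```python
import math

def fear_response(octaves_from_danger_tone, sigma):
    """Gaussian generalization gradient: fear peaks at the conditioned
    (danger) tone and falls off with frequency distance, measured in
    octaves; sigma sets how broadly the fear generalizes."""
    return math.exp(-(octaves_from_danger_tone ** 2) / (2 * sigma ** 2))

# Hypothetical gradient widths: a "specialized" animal vs. a "generalized" one.
for label, sigma in [("specialized", 0.2), ("generalized", 1.0)]:
    at_danger = fear_response(0.0, sigma)   # the danger tone itself
    at_probe = fear_response(0.5, sigma)    # a probe tone 0.5 octaves away
    # A larger response difference means the tones are easier to tell apart.
    print(f"{label}: difference = {at_danger - at_probe:.2f}")
```

The numbers and the 0.5-octave probe are made up; the sketch only illustrates why a specialized fear response and sharper pitch discrimination go hand in hand.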
Another interesting finding of this study is that the effects of emotional learning on hearing perception were mediated by a specific brain region, the auditory cortex, which has long been known to be important for auditory plasticity. Surprisingly, Aizenberg and Geffen found that the auditory cortex did not play a role in the emotional learning itself; the specificity of emotional learning is likely controlled by the amygdala and subcortical auditory areas. “We know the auditory cortex is involved, we know that the emotional response is important so the amygdala is involved, but how do the amygdala and cortex interact together?” says Geffen. “Our hypothesis is that the amygdala and cortex are modifying subcortical auditory processing areas. The sensory cortex is responsible for the changes in frequency discrimination, but it’s not necessary for developing specialized or generalized emotional responses. So it’s kind of a puzzle.”
Solving that puzzle promises new insight into the causes and possible treatment of PTSD, and the question of why some individuals develop it and others subjected to the same events do not. “We think there’s a strong link between mechanisms that control emotional learning, including fear generalization, and the brain mechanisms responsible for PTSD, where generalization of fear is abnormal,” Geffen notes. Future research will focus on defining and studying that link.
Why Music Makes Our Brain Sing
MUSIC is not tangible. You can’t eat it, drink it or mate with it. It doesn’t protect against the rain, wind or cold. It doesn’t vanquish predators or mend broken bones. And yet humans have always prized music — or well beyond prized, loved it.
In the modern age we spend great sums of money to attend concerts, download music files, play instruments and listen to our favorite artists whether we’re in a subway or salon. But even in Paleolithic times, people invested significant time and effort to create music, as the discovery of flutes carved from animal bones would suggest.
So why does this thingless “thing” — at its core, a mere sequence of sounds — hold such potentially enormous intrinsic value?
The quick and easy explanation is that music brings a unique pleasure to humans. Of course, that still leaves the question of why. But for that, neuroscience is starting to provide some answers.
More than a decade ago, our research team used brain imaging to show that music that people described as highly emotional engaged the reward system deep in their brains — activating subcortical nuclei known to be important in reward, motivation and emotion. Subsequently we found that listening to what might be called “peak emotional moments” in music — that moment when you feel a “chill” of pleasure to a musical passage — causes the release of the neurotransmitter dopamine, an essential signaling molecule in the brain.
When pleasurable music is heard, dopamine is released in the striatum — an ancient part of the brain found in other vertebrates as well — which is known to respond to naturally rewarding stimuli like food and sex and which is artificially targeted by drugs like cocaine and amphetamine.
But what may be most interesting here is when this neurotransmitter is released: not only when the music rises to a peak emotional moment, but also several seconds before, during what we might call the anticipation phase.
The idea that reward is partly related to anticipation (or the prediction of a desired outcome) has a long history in neuroscience. Making good predictions about the outcome of one’s actions would seem to be essential in the context of survival, after all. And dopamine neurons, both in humans and other animals, play a role in recording which of our predictions turn out to be correct.
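This anticipation-based view of reward has a standard computational form: learning driven by prediction error. As a sketch (a textbook Rescorla-Wagner-style update, not a model taken from this article), the expectation moves toward the actual reward on each trial, and the shrinking error term is the quantity dopamine neurons are thought to track.

```python
def update_prediction(prediction, reward, learning_rate=0.3):
    """Move the reward prediction toward the actual reward by a fraction
    of the prediction error (reward minus current expectation)."""
    error = reward - prediction
    return prediction + learning_rate * error, error

# Repeatedly pairing a cue with a reward: the prediction climbs toward
# the true reward value while the error signal shrinks.
prediction = 0.0
for trial in range(5):
    prediction, error = update_prediction(prediction, reward=1.0)
    print(f"trial {trial}: prediction={prediction:.3f}, error={error:.3f}")
```

The learning rate and reward value are arbitrary; the point is that a prediction which converges on the outcome leaves less and less "surprise" to signal.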
To dig deeper into how music engages the brain’s reward system, we designed a study to mimic online music purchasing. Our goal was to determine what goes on in the brain when someone hears a new piece of music and decides he likes it enough to buy it.
We used music-recommendation programs to customize the selections to our listeners’ preferences, which turned out to be indie and electronic music, matching Montreal’s hip music scene. And we found that neural activity within the striatum — the reward-related structure — was directly proportional to the amount of money people were willing to spend.
But more interesting still was the cross talk between this structure and the auditory cortex, which also increased for songs that were ultimately purchased compared with those that were not.
Why the auditory cortex? Some 50 years ago, Wilder Penfield, the famed neurosurgeon and the founder of the Montreal Neurological Institute, reported that when neurosurgical patients received electrical stimulation to the auditory cortex while they were awake, they would sometimes report hearing music. Dr. Penfield’s observations, along with those of many others, suggest that musical information is likely to be represented in these brain regions.
The auditory cortex is also active when we imagine a tune: think of the first four notes of Beethoven’s Fifth Symphony — your cortex is abuzz! This ability allows us not only to experience music even when it’s physically absent, but also to invent new compositions and to reimagine how a piece might sound with a different tempo or instrumentation.
We also know that these areas of the brain encode the abstract relationships between sounds — for instance, the particular sound pattern that makes a major chord major, regardless of the key or instrument. Other studies show distinctive neural responses from similar regions when there is an unexpected break in a repetitive pattern of sounds, or in a chord progression. This is akin to what happens if you hear someone play a wrong note — easily noticeable even in an unfamiliar piece of music.
These cortical circuits allow us to make predictions about coming events on the basis of past events. They are thought to accumulate musical information over our lifetime, creating templates of the statistical regularities that are present in the music of our culture and enabling us to understand the music we hear in relation to our stored mental representations of the music we’ve heard.
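One simple way to picture such “templates of statistical regularities” (a toy sketch, not the authors’ model) is a transition table over notes: melodies heard in the past determine how expected, or surprising, the next note is.

```python
from collections import defaultdict

# A toy "template" of musical regularities: count note-to-note transitions
# heard in past melodies, then score how expected a new continuation is.
transitions = defaultdict(lambda: defaultdict(int))

def learn(melody):
    """Accumulate transition counts from a sequence of note names."""
    for prev, nxt in zip(melody, melody[1:]):
        transitions[prev][nxt] += 1

def expectancy(prev, nxt):
    """Probability of hearing `nxt` after `prev`, given past listening."""
    total = sum(transitions[prev].values())
    return transitions[prev][nxt] / total if total else 0.0

# A "lifetime" of listening, shrunk to two tiny invented melodies.
learn(["C", "D", "E", "C", "D", "E", "F", "E", "D", "C"])
learn(["C", "D", "E", "D", "C"])

print(expectancy("C", "D"))  # a familiar continuation: fully expected
print(expectancy("C", "F"))  # never heard after C: a surprising "wrong note"
```

A real listener’s template spans far richer statistics than note bigrams, but the same logic explains why a wrong note is noticeable even in an unfamiliar piece: it violates transition probabilities learned from a lifetime of music.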
So each act of listening to music may be thought of as both recapitulating the past and predicting the future. When we listen to music, these brain networks actively create expectations based on our stored knowledge.
Composers and performers intuitively understand this: they manipulate these prediction mechanisms to give us what we want — or to surprise us, perhaps even with something better.
In the cross talk between our cortical systems, which analyze patterns and yield expectations, and our ancient reward and motivational systems, may lie the answer to the question: does a particular piece of music move us?
When that answer is yes, there is little — in those moments of listening, at least — that we value more.
Help at hand for schizophrenics
Researchers from the Bergen fMRI Group at the University of Bergen (UiB) are working on ways to help schizophrenics who hear voices. They do this by studying people who also hear voices but who do not suffer from a mental illness. Over a five-year period, the group is studying the brain processes that cause people to hear voices. A recent report published in Frontiers in Human Neuroscience presents some of the group’s startling results.
– We have found that the primary auditory cortex of healthy people who hear voices responds less to outside stimuli than the corresponding area of the brain in people who don’t hear voices, says postdoctoral researcher Kristiina Kompus.
Kompus, who works at UiB’s Department of Biological and Medical Psychology, is lead author of the newly published study.
The primary auditory cortex is the region of the brain that processes sound. Kompus’ study shows that healthy people who hear voices share some attributes with schizophrenics, as this cortical region reacts less to outside stimuli in both groups.
However, there is an important difference between the two groups of voice-hearers. Whilst those with schizophrenia have a reduced ability to regulate the primary auditory cortex using cognitive control, those who hear voices but are healthy retain this ability.
– Because of this cognitive control, healthy people who hear voices are able to direct their attention outwards. This sets them apart from schizophrenics, who tend to direct their attention inwards because of their decreased ability to regulate the primary auditory cortex, says Kompus, before adding:
– These discoveries have brought us one step closer to understanding the hallucinations of schizophrenics, and why the voices become a problem for some people but not for others.
So what is the next step for Kompus and her fellow researchers?
– We will do further research on the brain structure of people with auditory hallucinations. In particular, we wish to look at the brain networks that process outside voices, to establish whether hallucinated voices and outside voices are processed in the same parts of the brain. We also wish to establish whether hearing voices is a genetic trait, she says.
According to the researchers, approximately five per cent of us hear voices in our heads, even if we are otherwise healthy. This figure is based on research and surveys from several countries. For their own research, Kompus and her team used local media in Bergen to call for people who hear voices. The response was overwhelming, with around 30 people getting in touch with the researchers to register for the study.
Brain uses internal ‘average voice’ prototype to identify who is talking
The human brain is able to identify individuals’ voices by comparing them against an internal ‘average voice’ prototype, according to neuroscientists.
A study carried out by researchers at the University of Glasgow and reported in the journal Current Biology demonstrates that voice identity is coded in the brain by reference to two internal voice prototypes – one male, one female.
Voices that have the greatest difference from the prototype are perceived as more distinctive and produce greater neural activity than voices deemed very similar.
The researchers in the Institute of Neuroscience & Psychology conducted the study by generating a voice prototype through morphing 32 same-gender voices together, resulting in a smooth, idealised voice with few irregularities.
They then generated different voices by altering the ‘distance-to-mean’ of the prototype voice – for example, changing the tone and pitch or morphing two or more voices together.
Using functional Magnetic Resonance Imaging (fMRI), the researchers were able to see increased neural activity the further from the prototype the voices were.
Professor Pascal Belin said: “Like faces, voices can be used to identify a person, yet the neural basis of this ability remains poorly understood. Here we provide the first evidence of a norm-based coding mechanism the brain uses to identify a speaker.
“The research indicates this is a similar process for the identification of faces, where the brain also uses an average face to compare against other faces it encounters in order to establish identity.
“So, rather than having to remember each single voice it hears every day for a lifetime, the brain facilitates the task of identification by remembering only the differences from the prototype it stores.
“It leads to a range of interesting and important questions, such as whether the prototypes are innate, stored templates or whether they are subject to environmental and cultural influences. Could the prototype consist of an average of all the voices experienced during one’s life?”
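The norm-based coding idea itself can be sketched in a few lines. In this toy illustration (invented feature values, not the study’s model), a voice is a point in a hypothetical acoustic feature space, and the predicted neural response simply grows with its distance from the stored prototype:

```python
def distance(voice, prototype):
    """Euclidean distance between two voices in a hypothetical acoustic
    feature space (e.g. dimensions for pitch, timbre, speaking rate)."""
    return sum((v - p) ** 2 for v, p in zip(voice, prototype)) ** 0.5

def predicted_response(voice, prototype, gain=1.0):
    """Norm-based code: neural activity proportional to the voice's
    distance from the stored 'average voice' prototype."""
    return gain * distance(voice, prototype)

prototype = [0.0, 0.0, 0.0]            # the morphed average voice
typical_voice = [0.1, -0.1, 0.0]       # close to the average
distinctive_voice = [1.5, 0.8, -1.2]   # far from the average

print(predicted_response(typical_voice, prototype))      # small response
print(predicted_response(distinctive_voice, prototype))  # large response
```

The appeal of such a code, as the article notes, is economy: only the deviation from the norm needs to be represented, not every voice in full.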
When a pedestrian hears the screech of a car’s brakes, she has to decide whether, and if so, how, to move in response. Is the action taking place blocks away, or 20 feet to the left?
One of the truly primal mechanisms that we depend on every day of our lives — acting on the basis of information gathered by our sense of hearing — is yielding its secrets to modern neuroscience. A team of researchers from Cold Spring Harbor Laboratory (CSHL) today publishes experimental results in the journal Nature that they describe as surprising. The results fill in a key piece of the puzzle of how mammals act on the basis of sound cues.
It’s well known that sounds detected by the ears wind up in a part of the brain called the auditory cortex, where they are translated – transduced – into information that scientists call representations. These representations, in turn, form the informational basis upon which other parts of the brain can make decisions and issue commands for specific actions. What scientists have not understood is what happens between the auditory cortex and portions of the brain that ultimately issue commands, say, for muscles to move in response to the sound of that car’s screeching brakes.
To find out, CSHL Professor Anthony Zador and Dr. Petr Znamenskiy trained rats to listen to sounds and to make decisions based on those sounds. When a high-frequency sound is played, the animals are rewarded if they move to the left. When the sound is low-pitched, the reward is given if the animal moves right.

To the striatum
On the simplest level, says Zador, “we know that sound is coming into the ear; and we know what’s coming out in the end – a decision,” in the form of a muscle movement. The surprise, he says, is the destination of the information used by the animal to perform this task of discriminating between sounds of high and low frequency, as revealed in his team’s experiments.
“It turns out the information passes through a particular subset of neurons in the auditory cortex whose axons wind up in another part of the brain, called the striatum,” says Zador. The classic series of experiments that provided inspiration and a model for this work, performed at Stanford University by William Newsome and colleagues, involved the visual system of primates; by analogy, it had led Zador to expect that representations formed in the auditory cortex would be routed to other locations within the cortex.
These experiments in rats have implications for how neural circuits make decisions, according to Zador. Even though many neurons in auditory cortex are “tuned” to low or high frequencies, most do not transmit their information directly to the striatum. Rather, their information is transmitted by a much smaller number of neurons in their vicinity, which convey their “votes” directly to the striatum.
“This is like the difference between a direct democracy and a representative democracy, of the type we have in the United States,” Zador explains. “In a direct democracy model of how the auditory cortex conveys information to the rest of the brain, every neuron activated by a low- or high-pitched sound would have a ‘vote.’ Since there is noise in every perception, some minority of neurons will indicate ‘low’ when the sound is in fact ‘high,’ and vice-versa. In the direct democracy model, the information sent to the striatum for further action would be the equivalent of a simple sum of all these votes.
“In contrast – and this is what we found to be the case – the neurons registering ‘high’ and ‘low’ are represented by a specialized subset of neurons in their local area, which we might liken to members of Congress or the Electoral College: these in turn transmit the votes of the larger population to the place — in this case the auditory striatum — in which decisions are made and actions are taken.”
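Zador’s two voting schemes can be sketched computationally. In this toy simulation (our illustration, not the study’s analysis), each neuron casts a noisy high/low “vote”; the direct scheme pools every vote, while the representative scheme first lets each local group elect a representative and pools only those:

```python
import random

def neuron_votes(true_pitch, n, noise=0.2, seed=0):
    """Each of n neurons 'votes' 1 (high) or 0 (low); a fraction of
    votes flip to the wrong answer to model perceptual noise."""
    rng = random.Random(seed)
    return [true_pitch if rng.random() > noise else 1 - true_pitch
            for _ in range(n)]

def direct_democracy(votes):
    """The striatum receives every neuron's vote and takes the majority."""
    return int(sum(votes) > len(votes) / 2)

def representative_democracy(votes, group_size=10):
    """Only one 'representative' per local group projects to the striatum,
    conveying the majority opinion of its neighbours."""
    reps = []
    for i in range(0, len(votes), group_size):
        group = votes[i:i + group_size]
        reps.append(int(sum(group) > len(group) / 2))
    return int(sum(reps) > len(reps) / 2)

votes = neuron_votes(true_pitch=1, n=200)   # a noisy population hears "high"
print(direct_democracy(votes))              # decision from all votes
print(representative_democracy(votes))      # decision from representatives
```

With modest noise, both schemes usually reach the same verdict; the representative scheme simply gets there with far fewer axons projecting to the striatum, which is the anatomical point of the finding.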
(Source: cshl.edu)
Congenital amusia is a disorder characterized by impaired musical skills, which can extend to an inability to recognize very familiar tunes. The neural bases of this deficit are now being deciphered. According to a study conducted by researchers from CNRS and Inserm at the Centre de Recherche en Neurosciences de Lyon (CNRS / Inserm / Université Claude Bernard Lyon 1), amusics exhibit altered processing of musical information in two regions of the brain: the auditory cortex and the frontal cortex, particularly in the right cerebral hemisphere. These alterations seem to be linked to anatomical anomalies in these same cortices. This work, published in May in the journal Brain, adds invaluable information to our understanding of amusia and, more generally, of the “musical brain”, in other words the cerebral networks involved in the processing of music.

Congenital amusia, which affects between 2 and 4% of the population, can manifest itself in various ways: by difficulty in hearing a “wrong note”, by singing “out of tune” and sometimes by an aversion to music. For some of these individuals, music is like a foreign language or a simple noise. Amusia is not due to any auditory or psychological problem and does not seem to be linked to other neurological disorders. Research on the neural bases of this impairment only began a decade ago with the work of the Canadian neuropsychologist Isabelle Peretz.
Two teams from the Centre de Recherche en Neurosciences de Lyon (CNRS / Inserm / Université Claude Bernard Lyon 1) have studied the encoding of musical information and the short-term memorization of notes. According to previous work, amusical individuals experience particular difficulty in hearing the pitch of notes (low or high) and, although they remember sequences of words normally, they have difficulty in memorizing sequences of notes.
In a bid to determine the regions of the brain involved in these memorization difficulties, the researchers used magnetoencephalography (a technique that measures, at the surface of the head, the very weak magnetic fields produced by neural activity) on a group of amusics while they performed a musical task. The task consisted of listening to two tunes separated by a two-second gap; the volunteers were asked to determine whether the tunes were identical or different.
The scientists observed that, when hearing and memorizing notes, amusics exhibited altered sound processing in two regions of the brain: the auditory cortex and the frontal cortex, essentially in the right hemisphere. Compared to non-amusics, their neural activity was delayed and impaired in these specific areas when encoding musical notes. These anomalies occurred 100 milliseconds after the start of a note.
These results agree with an anatomical observation that the researchers confirmed using MRI: amusical individuals have an excess of grey matter in the inferior frontal cortex, accompanied by a deficit in white matter, one of whose essential constituents is myelin. Myelin surrounds and protects the axons of neurons, helping nerve signals to propagate rapidly. The researchers also observed anatomical anomalies in the auditory cortex. These data lend weight to the hypothesis that amusia is due to insufficient communication between the auditory cortex and the frontal cortex.
Amusia thus stems from impaired neural processing at the very first steps of sound analysis in the auditory nervous system. This work makes it possible to envisage a program to remedy these musical difficulties by targeting the early steps of sound processing and memorization.
(Source: www2.cnrs.fr)
New study shows what happens in the brain to make music rewarding
A new study reveals what happens in our brain when we decide to purchase a piece of music when we hear it for the first time. The study, conducted at the Montreal Neurological Institute and Hospital – The Neuro, McGill University and published in the journal Science on April 12, pinpoints the specific brain activity that makes new music rewarding and predicts the decision to purchase music.
Participants in the study listened to 60 previously unheard music excerpts while undergoing functional magnetic resonance imaging (fMRI) scanning, placing bids on how much they were willing to spend for each item in an auction paradigm. “When people listen to a piece of music they have never heard before, activity in one brain region can reliably and consistently predict whether they will like it enough to buy it. This is the nucleus accumbens, which is involved in forming expectations that may be rewarding,” says lead investigator Dr. Valorie Salimpoor, who conducted the research in Dr. Robert Zatorre’s lab at The Neuro and is now at Baycrest Health Sciences’ Rotman Research Institute. “What makes music so emotionally powerful is the creation of expectations. Activity in the nucleus accumbens is an indicator that expectations were met or surpassed, and in our study we found that the more activity we see in this brain area while people are listening to music, the more money they are willing to spend.”
The second important finding is that the nucleus accumbens doesn’t work alone, but interacts with the auditory cortex, an area of the brain that stores information about the sounds and music we have been exposed to. The more a given piece was rewarding, the greater the cross-talk between these regions. Similar interactions were also seen between the nucleus accumbens and other brain areas, involved in high-level sequencing, complex pattern recognition and areas involved in assigning emotional and reward value to stimuli.
In other words, the brain assigns value to music through the interaction of ancient dopaminergic reward circuitry, involved in reinforcing behaviours that are absolutely necessary for our survival such as eating and sex, with some of the most evolved regions of the brain, involved in advanced cognitive processes that are unique to humans.
“This is interesting because music consists of a series of sounds that, when considered alone, have no inherent value, but when arranged together through patterns over time can act as a reward,” says Dr. Robert Zatorre, researcher at The Neuro and co-director of the International Laboratory for Brain, Music and Sound Research. “The integrated activity of brain circuits involved in pattern recognition, prediction, and emotion allows us to experience music as an aesthetic or intellectual reward.”
“The brain activity in each participant was the same when they were listening to music that they ended up purchasing, although the pieces they chose to buy were all different,” adds Dr. Salimpoor. “These results help us to see why people like different music – each person has their own uniquely shaped auditory cortex, which is formed based on all the sounds and music heard throughout our lives. Also, the sound templates we store are likely to have previous emotional associations.”
An innovative aspect of this study is how closely it mimics real-life music-listening experiences. Researchers used an interface and prices similar to those of iTunes. To replicate a real-life scenario as closely as possible, and to assess reward value objectively, individuals could purchase music with their own money, as an indication that they wanted to hear it again. Since musical preferences are influenced by past associations, only novel music excerpts were selected (to minimize explicit predictions), using music-recommendation software (such as Pandora and Last.fm) to reflect individual preferences.
The interactions between the nucleus accumbens and the auditory cortex suggest that we create expectations of how musical sounds should unfold based on what is learned and stored in our auditory cortex, and that our emotions result from the violation or fulfillment of these expectations. We are constantly making reward-related predictions to survive, and this study provides neurobiological evidence that we also make predictions when listening to an abstract stimulus — music — even if we have never heard it before. Through pattern recognition and prediction, an otherwise simple set of stimuli, arranged together over time, becomes powerful enough to make us happy or bring us to tears, and to communicate some of our most intense and complex emotions and thoughts.

How do we hear? More specifically, how does the auditory center of the brain discern important sounds – such as communication from members of the same species – from relatively irrelevant background noise? The answer depends on the regulation of sound by specific neurons in the auditory cortex of the brain, but the precise mechanisms of those neurons have remained unclear. Now, a new study from the Perelman School of Medicine at the University of Pennsylvania has isolated how neurons in the rat’s primary auditory cortex (A1) preferentially respond to natural vocalizations from other rats over intentionally modified vocalizations (background sounds). A computational model developed by the study authors, which successfully predicted neuronal responses to other new sounds, explained the basis for this preference. The research is published in the Journal of Neurophysiology.
Rats communicate with each other mostly through ultrasonic vocalizations (USVs) beyond the range of human hearing. Although the existence of these USV conversations has been known for decades, “the acoustic richness of them has only been discovered in the last few years,” said senior study author Maria N. Geffen, PhD, assistant professor of Otorhinolaryngology: Head and Neck Surgery at Penn. That acoustical complexity raises questions as to how the animal brain recognizes and responds to the USVs. “We set out to characterize the responses of neurons to USVs and to come up with a model that would explain the mechanism that makes these neurons preferentially responsive to these relevant sounds.”
Geffen and her colleagues obtained recordings of USVs from two rats kept together in a cage, then played the recordings to a separate group of male rats while recording their neuronal responses. The researchers also used USV recordings that were modified in several ways — for instance, with background sounds filtered out, or played backwards and at different speeds — to mimic unimportant background noise. “We found that neurons in the auditory cortex respond strongly and selectively to the original ultrasonic vocalizations and not the transformed versions we created,” says Geffen.
Using the data collected on the responses of A1 neurons to various USVs, the researchers developed a computational model that could predict the activity of an individual neuron based on the pitch and duration of the USV. Geffen observes that “the details of their responses could be predicted with high accuracy.” It was possible to determine which aspects of the acoustic input best drove individual neurons. Remarkably, it turned out that the acoustic parameters that worked best in driving the neuronal responses corresponded to the statistics of the natural vocalizations rats produce.
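A minimal sketch of what such a model might look like (a generic tuning-curve illustration with invented parameters, not the published model): a neuron’s predicted response peaks for calls near its preferred pitch and duration and falls off for “transformed” calls.

```python
import math

def predicted_firing(pitch_khz, duration_ms,
                     best_pitch=60.0, pitch_bw=10.0,
                     best_duration=50.0, duration_bw=30.0):
    """Generic Gaussian tuning curve (hypothetical parameters): response
    is maximal for a call at the neuron's preferred pitch and duration."""
    pitch_term = math.exp(-((pitch_khz - best_pitch) / pitch_bw) ** 2)
    duration_term = math.exp(-((duration_ms - best_duration) / duration_bw) ** 2)
    return pitch_term * duration_term

# Rat USVs are ultrasonic (roughly 20-100 kHz); a natural-like call close
# to this neuron's preferences drives it far more than a mismatched one.
print(predicted_firing(60.0, 50.0))    # preferred call: maximal response
print(predicted_firing(25.0, 200.0))   # mismatched call: near zero
```

The study’s point, echoed in this sketch, is that the parameters that best drive real A1 neurons line up with the statistics of natural vocalizations.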
The work makes clear for the first time, says Geffen, “the mechanisms of how the auditory system picks out behaviorally relevant sounds, such as same species communication signals, and processes them more effectively than less relevant sounds. This information is fundamental in understanding how sound perception helps animals survive. We conclude that neurons in the auditory cortex are specialized for processing and efficiently responding to natural and behaviorally relevant sounds.”
Diary of becoming an NHS-funded cyborg
From the day I was born, my brain developed according to the stimuli it received. My senses of vision, touch, taste, smell were all slightly heightened in compensation for the lack of input from my ears, helping me to create a world I could understand.
My mother worked full time with me, playing a set of activities she called “the game”. I was a child, and didn’t understand the real reason for playing the game — but it taught me to read, write, lipread, and speak, if not to hear in the traditional sense of the word. What I do hear is filtered through digital hearing aids that amplify what little sound I can hear.
A month ago, for the first time, I made the change from external technology to internal technology. I became a full-time cyborg, free of charge on the NHS.
They cut away a flap of skin behind my left ear, drilled a tiny hole into my skull between the two main nerves of the face that control taste and facial movement, and inserted an electrode into my cochlea, connected to a small magnet and circuit board under the skin.
They’re going to switch me on in a few days — and if it’s all working as it should, my auditory cortex will be bombarded by a range of electronic noises. Over time, I may come to understand these sounds as consonants, music, even the spoken word.
This is what it will sound like, apparently.
Even if I can make sense of those sounds, it won’t be “hearing” in the normal sense of the word. My ears have had the same level of input for the last 30 years of my life — and now I’ve physically rewired one of them to receive a completely different signal.
In all the recent blue-sky thinking on Wired.co.uk and elsewhere about the future of the human race — coprocessors for the brain, enhanced-spectrum bionic eyes, artificial legs, even the possibility of interfacing with computers directly — people forget one thing: what it feels like, what it’s like to live with it every day, whether it makes you feel more, or less, yourself.
I’m also wary of augmentation and body enhancement becoming the norm. We have a fluid definition of what is a disability and what isn’t. If certain people with access to this technology start engineering themselves to have greater physical or mental abilities, where does that leave ordinary people? Differently abled? Disabled? Or in fact more abled? In giving up perfectly usable eyes, the end result of millions of years of evolution, to install digital eyes that can project images onto the retina, are we really putting ourselves at an advantage?
If I’d been born into a deaf family, all of us signing, my brain developing to become fluent in sign language and developing a deaf identity so strong and complete that I saw deafness as “normal” and hearing as “abnormal” — I wouldn’t have had this implant.
The cochlear implant, in crossing the line from external wearable technology to permanent fixture, becomes a technology that is potentially in conflict with human values, rather than a testament to them. Many deaf people see the cochlear implant as a symbol of medical intervention intended to oppress and ultimately eradicate the deaf community and deaf culture by “fixing” them one implant at a time — including implanting children at an early age so that they acquire spoken language rather than sign.
AHRF Researcher Describes Cochlear Amplification Using Novel Optical Technique
It has long been known that the inner ear actively amplifies sounds it receives, and that this amplification can be attributed to forces generated by outer hair cells in the cochlea. How the ear actually accomplishes this, however, has remained somewhat of a mystery. Now, Jonathan A. N. Fisher, PhD, and colleagues at The Rockefeller University, in New York, describe how the cochlea actively self-amplifies sound it receives to help increase the range of sounds that can be heard.
Fisher and colleagues used a new optical technique that inactivates prestin, a motor protein that drives the movement of the outer hair cells. The outer hair cells, together with the inner hair cells — the true sensory cells of the inner ear — make up the cochlea’s hair cell population. The main body of each hair cell sits on the basilar membrane, the tissue that runs along the interior of the bony cochlea. The “hair” part of these cells, the stereocilia, sticks up into the fluid-filled space of the cochlea, where sound waves traveling through the fluid push against them.
The sound waves traveling down the cochlea produce actual waves that can be observed along the basilar membrane, as visualized in the animation (from the Howard Hughes Medical Institute). The cochlea picks up different sound frequencies along its length, with higher-frequency sounds detected at the base of the cochlea — the part closest to the eardrum — and lower-frequency sounds detected toward the apex, at the center of the “snail.”
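This frequency-to-place mapping (tonotopy) is often described with Greenwood’s classic frequency–position function. A minimal sketch, using the standard constants for the human cochlea (the study itself concerns rodents, so this is purely illustrative):

```python
def greenwood_frequency(x: float) -> float:
    """Characteristic frequency (Hz) at fractional distance x along
    the cochlea, from the apex (x=0, the center of the spiral) to
    the base (x=1, nearest the middle ear), using Greenwood's
    frequency-position function with the human-fit constants.
    """
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

low = greenwood_frequency(0.0)   # apex: low frequencies (~20 Hz)
high = greenwood_frequency(1.0)  # base: high frequencies (~20 kHz)
```

The function makes the gradient concrete: characteristic frequency rises roughly exponentially from the apex at the spiral’s center to the base near the eardrum.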
The outer hair cells have long been known to amplify the sound waves picked up by the inner hair cells by actively changing their shape to increase the waves’ amplitude. They can do this because the membrane protein prestin contracts and elongates, causing the stereocilia to be deflected against the overlying tectorial membrane.
Fisher and colleagues developed a light-sensitive drug that, when illuminated by an ultraviolet laser, inactivates prestin in select locations within the cochlea. Using this novel technique, the researchers were able to disrupt prestin at very specific positions along the basilar membrane.
The researchers found that inactivating prestin at these specific locations reshaped and reduced the amplitude of the sound-evoked waves that carry mechanical signals to the sensory hair cells — indicating that without prestin, amplification is dampened compared with what the researchers observed when prestin was allowed to function normally.