Posts tagged sound processing

Improving Babies’ Language Skills Before They’re Even Old Enough to Speak
In the first months of life, when babies begin to distinguish sounds that make up language from all the other sounds in the world, they can be trained to more effectively recognize which sounds “might” be language, accelerating the development of the brain maps which are critical to language acquisition and processing, according to new Rutgers research.
The study by April Benasich and colleagues of Rutgers University-Newark is published in the October 1 issue of the Journal of Neuroscience.
The researchers found that when 4-month-old babies learned to pay attention to increasingly complex non-language audio patterns and were rewarded for correctly shifting their eyes to a video reward when the sound changed slightly, their brain scans at 7 months old showed they were faster and more accurate at detecting other sounds important to language than babies who had not been exposed to the sound patterns.
“Young babies are constantly scanning the environment to identify sounds that might be language,” says Benasich, who directs the Infancy Studies Laboratory at the University’s Center for Molecular and Behavioral Neuroscience. “This is one of their key jobs – as between 4 and 7 months of age they are setting up their pre-linguistic acoustic maps. We gently guided the babies’ brains to focus on the sensory inputs which are most meaningful to the formation of these maps.”
Acoustic maps are pools of interconnected brain cells that an infant brain constructs to allow it to decode language both quickly and automatically – and well-formed maps allow faster and more accurate processing of language, a function that is critical to optimal cognitive functioning. Benasich says babies of this particular age may be ideal for this kind of training.
“If you shape something while the baby is actually building it,” she says, “it allows each infant to build the best possible auditory network for his or her particular brain. This provides a stronger foundation for any language (or languages) the infant will be learning. Compare the baby’s reactions to language cues to an adult driving a car. You don’t think about specifics like stepping on the gas or using the turn signal. You just perform them. We want the babies’ recognition of any language-specific sounds they hear to be just that automatic.”
Benasich says she was able to accelerate and optimize the construction of babies’ acoustic maps, as compared to those of infants who either passively listened or received no training, by rewarding the babies with a brief colorful video when they responded to changes in the rapidly varying sound patterns. The sound changes could take just tens of milliseconds, and became more complex as the training progressed.
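To make the training procedure concrete, here is a minimal, self-contained sketch of an adaptive oddball loop of the kind described: mostly-repeating sound patterns, occasional deviants, a reward for a correct gaze shift, and changes that become briefer as the infant succeeds. The simulated gaze-detection model, the 3-up staircase rule, and every number here are illustrative assumptions, not the lab's actual protocol.

```python
import random

def simulated_gaze_shift(is_deviant, change_ms):
    """Stand-in for the eye tracker: longer acoustic changes are easier to detect."""
    p_detect = min(0.95, change_ms / 400) if is_deviant else 0.05
    return random.random() < p_detect

def run_training_session(n_trials=60, deviant_prob=0.25):
    change_ms = 300            # duration of the acoustic change; shrinks with success
    correct_streak = 0
    for trial in range(n_trials):
        is_deviant = random.random() < deviant_prob
        looked = simulated_gaze_shift(is_deviant, change_ms)
        if is_deviant and looked:
            # reward step: in the study, a brief colorful video plays here
            correct_streak += 1
            if correct_streak >= 3:                        # simple 3-up staircase
                change_ms = max(40, int(change_ms * 0.8))  # down to tens of milliseconds
                correct_streak = 0
        elif is_deviant:
            correct_streak = 0                             # missed deviant: hold difficulty
    return change_ms

if __name__ == "__main__":
    print("final change duration:", run_training_session(), "ms")
```

The staircase is the key idea: difficulty only increases after consistent success, so each infant ends up working near the limits of his or her own detection ability.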
Looking for lasting improvement in language skills
“While playing this fun game we can convey to the baby, ‘Pay attention to this. This is important. Now pay attention to this. This is important,’” says Benasich. “This process helps the baby to focus tightly on sounds in the environment that ‘may’ have critical information about the language they are learning. Previous research has shown that accurate processing of these tens-of-milliseconds differences in infancy is highly predictive of the child’s language skills at 3, 4 and 5 years.”
The experiment has the potential to provide lasting benefits. The EEG (electroencephalogram) scans showed the babies’ brains processed sound patterns with increasing efficiency at 7 months of age after six weekly training sessions. The research team will follow these infants through 18 months of age to see whether they retain and build upon these abilities with no further training. That outcome would suggest to Benasich that once the child’s earliest acoustic maps are formed in the most optimal way, the benefits will endure.
Benasich says this training has the potential to advance the development of typically developing babies as well as children at higher risk for developmental language difficulties. For parents who think this might turn their babies into geniuses, the answer is – not necessarily. Benasich compares the process of enhancing acoustic maps to some people’s wishes to be taller. “There’s a genetic range to how tall you become – perhaps you have the capacity to be 5’6” to 5’9”,” she explains. “If you get the right amounts and types of food, the right environment, the right exercise, you might get to 5’9” but you wouldn’t be 6 feet. The same principle applies here.”
Benasich says it’s very likely that one day parents at home will be able to use an interactive toy-like device – now under development – to mirror what she accomplished in the baby lab and maximize their babies’ potential. For the 8 to 15 percent of infants at highest risk for poor acoustic processing and subsequent delayed language, this baby-friendly behavioral intervention could have far-reaching implications and may offer the promise of improving or perhaps preventing language difficulties.
New Mapping Approach Lets Scientists Zoom In And Out As The Brain Processes Sound
Researchers at Johns Hopkins have mapped the sound-processing part of the mouse brain in a way that keeps both the proverbial forest and the trees in view. Their imaging technique allows zooming in and out on views of brain activity within mice, and it enabled the team to watch brain cells light up as mice “called” to each other. The results, which represent a step toward better understanding how our own brains process language, appear online July 31 in the journal Neuron.
In the past, researchers often studied sound processing in various animal brains by poking tiny electrodes into the auditory cortex, the part of the brain that processes sound. They then played tones and observed the response of nearby neurons, laboriously repeating the process over a gridlike pattern to figure out where the active neurons were. The neurons seemed to be laid out in neatly organized bands, each responding to a different tone. More recently, a technique called two-photon microscopy has allowed researchers to focus in on minute slices of the live mouse brain, observing activity in unprecedented detail. This newer approach has suggested that the well-manicured arrangement of bands might be an illusion. But, says David Yue, M.D., Ph.D., a professor of biomedical engineering and neuroscience at the Johns Hopkins University School of Medicine, “You could lose your way within the zoomed-in views afforded by two-photon microscopy and not know exactly where you are in the brain.” Yue led the study along with Eric Young, Ph.D., also a professor of biomedical engineering and a researcher in Johns Hopkins’ Institute for Basic Biomedical Sciences.
To get the bigger picture, John Issa, a graduate student in Yue’s lab, used a mouse genetically engineered to produce a molecule that glows green in the presence of calcium. Since calcium levels rise in neurons when they become active, neurons in the mouse’s auditory cortex glow green when activated by various sounds. Issa used a two-photon microscope to peer into the brains of live mice as they listened to sounds and saw which neurons lit up in response, piecing together a global map of a given mouse’s auditory cortex. “With these mice, we were able to both monitor the activity of individual populations of neurons and zoom out to see how those populations fit into a larger organizational picture,” he says.
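As a rough illustration of how such a map is assembled, the sketch below assigns each imaged cell a “best frequency”: the tone that evoked its largest calcium response. The response matrix here is simulated for illustration; a real pipeline would first segment cells from the two-photon movie and extract their fluorescence (dF/F) traces.

```python
import numpy as np

rng = np.random.default_rng(0)

tone_freqs_khz = np.array([4, 8, 16, 32, 64])   # tones presented to the mouse
n_cells = 200                                     # imaged neurons in one field of view

# responses[c, f] = mean dF/F of cell c to tone f (simulated: each cell is
# tuned to one frequency, plus measurement noise)
true_best = rng.integers(0, len(tone_freqs_khz), size=n_cells)
responses = rng.normal(0.05, 0.02, size=(n_cells, len(tone_freqs_khz)))
responses[np.arange(n_cells), true_best] += 0.3

# Assign each cell the frequency that drove it hardest -> best-frequency map
best_freq_khz = tone_freqs_khz[np.argmax(responses, axis=1)]

# Zooming out, the spatial layout of these best frequencies across the cortex
# is what reveals (or blurs) the tone bands discussed above.
for f in tone_freqs_khz:
    print(f"{f:>2} kHz: {np.sum(best_freq_khz == f)} cells")
```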
With these advances, Issa and the rest of the research team were able to see the tidy tone bands identified in earlier electrode studies. In addition, the new imaging platform quickly revealed more sophisticated properties of the auditory cortex, particularly as mice listened to the chirps they use to communicate with each other. “Understanding how sound representation is organized in the brain is ultimately very important for better treating hearing deficits,” Yue says. “We hope that mouse experiments like this can provide a basis for figuring out how our own brains process language and, eventually, how to help people with cochlear implants and similar interventions hear better.”
Yue notes that the same approach could also be used to understand other parts of the brain as they react to outside stimuli, such as the visual cortex and the parts of the brain responsible for processing stimuli from limbs.
Nanopores underlie our ability to tune in to a single voice
Inner-ear membrane uses tiny pores to mechanically separate sounds, researchers find.
Even in a crowded room full of background noise, the human ear is remarkably adept at tuning in to a single voice — a feat that has proved remarkably difficult for computers to match. A new analysis of the underlying mechanisms, conducted by researchers at MIT, has provided insights that could ultimately lead to better machine hearing, and perhaps to better hearing aids as well.
Our ears’ selectivity, it turns out, arises from evolution’s precise tuning of a tiny membrane, inside the inner ear, called the tectorial membrane. The viscosity of this membrane — its firmness, or lack thereof — depends on the size and distribution of tiny pores, just a few tens of nanometers wide. This, in turn, provides mechanical filtering that helps to sort out specific sounds.
The new findings are reported in the Biophysical Journal by a team led by MIT graduate student Jonathan Sellon, and including research scientist Roozbeh Ghaffari, former graduate student Shirin Farrahi, and professor of electrical engineering Dennis Freeman. The team collaborated with biologist Guy Richardson of the University of Sussex.
Elusive understanding
In discriminating among competing sounds, the human ear is “extraordinary compared to conventional speech- and sound-recognition technologies,” Freeman says. The exact reasons have remained elusive — but the importance of the tectorial membrane, located inside the cochlea, or inner ear, has become clear in recent years, largely through the work of Freeman and his colleagues. Now it seems that a flawed assumption contributed to the longstanding difficulty in understanding the importance of this membrane.
Much of our ability to differentiate among sounds is frequency-based, Freeman says — so researchers had “assumed that the better we could resolve frequency, the better we could hear.” But this assumption turns out not always to be true.
In fact, Freeman and his co-authors previously found that tectorial membranes with a certain genetic defect are actually highly sensitive to variations in frequency — and the result is worse hearing, not better.
The MIT team found “a fundamental tradeoff between how well you can resolve different frequencies and how long it takes to do it,” Freeman explains. That makes the finer frequency discrimination too slow to be useful in real-world sound selectivity.
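That tradeoff is the familiar one from filter design: a more sharply tuned (narrower-band) resonant filter necessarily rings for longer, so its output settles more slowly. The small sketch below uses a generic second-order resonator, not any cochlear model, to show that the bandwidth-times-response-time product stays roughly constant as tuning sharpens; the center frequency and Q values are arbitrary illustrations.

```python
import numpy as np

# For a second-order resonator with quality factor Q at center frequency f0:
#   3 dB bandwidth          = f0 / Q
#   impulse-response decay  ~ Q / (pi * f0)
# so sharper frequency resolution directly costs response time.

f0 = 4000.0                                   # center frequency, Hz (illustrative)
for Q in (2, 10, 50):
    bandwidth_hz = f0 / Q
    ring_time_s = Q / (np.pi * f0)
    print(f"Q={Q:>2}: bandwidth {bandwidth_hz:7.1f} Hz, "
          f"response time {ring_time_s * 1000:5.2f} ms, "
          f"product {bandwidth_hz * ring_time_s:.2f}")
```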
Too fast for neurons
Previous work by Freeman and colleagues has shown that the tectorial membrane plays a fundamental role in sound discrimination by carrying waves that stimulate a particular kind of sensory receptor. This process is essential in deciphering competing sounds, but it takes place too quickly for neural processes to keep pace. Nature, over the course of evolution, appears to have produced a very effective electromechanical system, Freeman says, that can keep up with the speed of these sound waves.
The new work explains how the membrane’s structure determines how well it filters sound. The team studied two genetic variants that cause nanopores within the tectorial membrane to be smaller or larger than normal. The pore size affects the viscosity of the membrane and its sensitivity to different frequencies.
The tectorial membrane is spongelike, riddled with tiny pores. By studying how its viscosity varies with pore size, the team was able to determine that the typical pore size observed in mice — about 40 nanometers across — represents an optimal size for combining frequency discrimination with overall sensitivity. Pores that are larger or smaller impair hearing.
“It really changes the way we think about this structure,” Ghaffari says. The new findings show that fluid viscosity and pores are actually essential to its performance. Changing the sizes of tectorial membrane nanopores, via biochemical manipulation or other means, can provide unique ways to alter hearing sensitivity and frequency discrimination.
William Brownell, a professor of otolaryngology at Baylor College of Medicine, says, “This is the first study to suggest that porosity may affect cochlear tuning.” This work, he adds, “could provide insight” into the development of specific hearing problems.
Remember that sound bite you heard on the radio this morning? The grocery items your spouse asked you to pick up? Chances are, you won’t.
Researchers at the University of Iowa have found that when it comes to memory, we don’t remember things we hear nearly as well as things we see or touch.
“As it turns out, there is merit to the Chinese proverb ‘I hear, and I forget; I see, and I remember,’” says James Bigelow, lead author of the study and a UI graduate student.
“We tend to think that the parts of our brain wired for memory are integrated. But our findings indicate our brain may use separate pathways to process information. Even more, our study suggests the brain may process auditory information differently than visual and tactile information, and alternative strategies—such as increased mental repetition—may be needed when trying to improve memory,” says Amy Poremba, associate professor in the UI Department of Psychology and corresponding author on the paper, published this week in the journal PLoS One.
Bigelow and Poremba discovered that when more than 100 UI undergraduate students were exposed to a variety of sounds, visuals, and things that could be felt, the students were least apt to remember the sounds they had heard.
In an experiment testing short-term memory, participants were asked to listen to pure tones through headphones, look at various shades of red squares, and feel low-intensity vibrations by gripping an aluminum bar. Each set of tones, squares and vibrations was separated by time delays ranging from one to 32 seconds.
Although students’ memory declined across the board when time delays grew longer, the decline was much greater for sounds, and began as early as four to eight seconds after being exposed to them.
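The analysis behind that observation amounts to tabulating accuracy by modality and retention delay. The sketch below does that on simulated trial data whose decay rates are simply chosen to mimic the reported pattern (auditory memory falling off fastest); none of these numbers come from the study itself.

```python
import numpy as np

rng = np.random.default_rng(1)
delays_s = np.array([1, 2, 4, 8, 16, 32])        # retention delays used in the task

# Invented accuracy curves: a starting accuracy minus a drop per doubling of delay
base_accuracy = {"auditory": 0.92, "visual": 0.93, "tactile": 0.93}
drop_per_doubling = {"auditory": 0.035, "visual": 0.010, "tactile": 0.011}

for modality in base_accuracy:
    p_correct = base_accuracy[modality] - drop_per_doubling[modality] * np.log2(delays_s)
    hits = rng.binomial(n=2000, p=p_correct)     # 2000 simulated trials per delay
    print(f"{modality:>8}:", np.round(hits / 2000, 3))
```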
While this seems like a short time span, it’s akin to forgetting a phone number that wasn’t written down, notes Poremba. “If someone gives you a number, and you dial it right away, you are usually fine. But do anything in between, and the odds are you will have forgotten it,” she says.
In a second experiment, Bigelow and Poremba tested participants’ memory using things they might encounter on an everyday basis. Students listened to audio recordings of dogs barking, watched silent videos of a basketball game, and touched and held common objects blocked from view, such as a coffee mug. The researchers found that between an hour and a week later, students were worse at remembering the sounds they had heard, but their memory for visual scenes and tactile objects was about the same.
Both experiments suggest that the way your mind processes and stores sound may be different from the way it processes and stores other types of memories. And that could have big implications for educators, design engineers, and advertisers alike.
“As teachers, we want to assume students will remember everything we say. But if you really want something to be memorable you may need to include a visual or hands-on experience, in addition to auditory information,” says Poremba.
Previous research has suggested that humans may have superior visual memory, and that hearing words associated with sounds—rather than hearing the sounds alone—may aid memory. Bigelow and Poremba’s study builds upon those findings by confirming that, indeed, we remember less of what we hear, regardless of whether sounds are linked to words.
The study also is the first to show that our ability to remember what we touch is roughly equal to our ability to remember what we see. The finding is important, because experiments with nonhuman primates such as monkeys and chimpanzees have shown that they similarly excel at visual and tactile memory tasks, but struggle with auditory tasks. Based on these observations, the authors believe humans’ weakness for remembering sounds likely has its roots in the evolution of the primate brain.