Posts tagged sound

Remember that sound bite you heard on the radio this morning? The grocery items your spouse asked you to pick up? Chances are, you won’t.
Researchers at the University of Iowa have found that when it comes to memory, we don’t remember things we hear nearly as well as things we see or touch.
“As it turns out, there is merit to the Chinese proverb ‘I hear, and I forget; I see, and I remember,’” says James Bigelow, lead author of the study and a UI graduate student.
“We tend to think that the parts of our brain wired for memory are integrated. But our findings indicate our brain may use separate pathways to process information. Even more, our study suggests the brain may process auditory information differently than visual and tactile information, and alternative strategies—such as increased mental repetition—may be needed when trying to improve memory,” says Amy Poremba, associate professor in the UI Department of Psychology and corresponding author on the paper, published this week in the journal PLoS One.
Bigelow and Poremba discovered that when more than 100 UI undergraduate students were exposed to a variety of sounds, visuals, and things that could be felt, the students were least apt to remember the sounds they had heard.
In an experiment testing short-term memory, participants listened to pure tones through headphones, looked at various shades of red squares, and felt low-intensity vibrations by gripping an aluminum bar. Each set of tones, squares, and vibrations was separated by time delays ranging from one to 32 seconds.
Although students’ memory declined across the board as the delays grew longer, the decline was much steeper for sounds, beginning as early as four to eight seconds after exposure.
While this seems like a short time span, it’s akin to forgetting a phone number that wasn’t written down, notes Poremba. “If someone gives you a number, and you dial it right away, you are usually fine. But do anything in between, and the odds are you will have forgotten it,” she says.
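To see how such a difference might look, here is a toy Python sketch of forgetting curves over the delays used in the experiment. The decay rates are invented purely for illustration; they are not fitted to the study’s data.

```python
import math

# Toy model: recall probability decays exponentially with delay.
# Decay rates are illustrative guesses -- NOT fitted to the study's data.
DECAY_RATE = {"auditory": 0.10, "visual": 0.03, "tactile": 0.03}  # per second

def recall_probability(modality: str, delay_s: float) -> float:
    """Chance of correctly recognizing the remembered item after a delay."""
    return math.exp(-DECAY_RATE[modality] * delay_s)

for delay in (1, 4, 8, 16, 32):  # delays used in the experiment
    scores = ", ".join(f"{m} {recall_probability(m, delay):.2f}"
                       for m in DECAY_RATE)
    print(f"{delay:>2} s -> {scores}")
```

In this toy model, auditory recall drops noticeably within the first four to eight seconds while visual and tactile recall decline more slowly, mirroring the qualitative pattern the researchers report.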
In a second experiment, Bigelow and Poremba tested participants’ memory using things they might encounter on an everyday basis. Students listened to audio recordings of dogs barking, watched silent videos of a basketball game, and touched and held common objects blocked from view, such as a coffee mug. The researchers found that between an hour and a week later, students were worse at remembering the sounds they had heard, but their memory for visual scenes and tactile objects was about the same.
Both experiments suggest that the way your mind processes and stores sound may be different from the way it processes and stores other types of memories. And that could have big implications for educators, design engineers, and advertisers alike.
“As teachers, we want to assume students will remember everything we say. But if you really want something to be memorable you may need to include a visual or hands-on experience, in addition to auditory information,” says Poremba.
Previous research has suggested that humans may have superior visual memory, and that hearing words associated with sounds—rather than hearing the sounds alone—may aid memory. Bigelow and Poremba’s study builds upon those findings by confirming that, indeed, we remember less of what we hear, regardless of whether sounds are linked to words.
The study also is the first to show that our ability to remember what we touch is roughly equal to our ability to remember what we see. The finding is important, because experiments with nonhuman primates such as monkeys and chimpanzees have shown that they similarly excel at visual and tactile memory tasks, but struggle with auditory tasks. Based on these observations, the authors believe humans’ weakness for remembering sounds likely has its roots in the evolution of the primate brain.
The ability to localize the source of a sound is important for navigating the world and for listening in noisy environments like restaurants, a task that is particularly difficult for elderly or hearing-impaired people. Having two ears allows animals to localize where a sound comes from. For example, barn owls can snatch their prey in complete darkness by relying on sound alone. It has long been known that this ability depends on tiny differences between the sounds arriving at each ear, including differences in the time of arrival: in humans, for example, sound arrives at the ear closer to the source up to half a millisecond earlier than at the other ear. These differences are called interaural time differences. However, how the brain processes this information to figure out where a sound came from has been the source of much debate.
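To make the half-millisecond figure concrete, here is a minimal Python sketch using the simplest two-receiver model, ITD = (d / c) · sin(azimuth). The head width, the speed of sound, and the neglect of diffraction around the head are all simplifying assumptions; more careful models, such as Woodworth’s, add a correction term for the path around the skull.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature
HEAD_WIDTH = 0.175      # m, rough ear-to-ear distance (an assumption)

def interaural_time_difference(azimuth_deg: float) -> float:
    """Approximate ITD in seconds for a distant source at the given azimuth
    (0 = straight ahead, 90 = directly to one side)."""
    return (HEAD_WIDTH / SPEED_OF_SOUND) * math.sin(math.radians(azimuth_deg))

for az in (0, 15, 45, 90):
    itd_ms = interaural_time_difference(az) * 1e3
    print(f"azimuth {az:>2} deg -> ITD = {itd_ms:.3f} ms")
```

With these numbers, a source directly to one side produces an ITD of about 0.51 ms, consistent with the “up to half a millisecond” figure quoted above.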

A recent paper by Mass. Eye and Ear/Harvard Medical School researchers, in collaboration with researchers at the Ecole Normale Superieure, France, challenges the two dominant theories of how people localize sounds, explains why neuronal responses to sounds are so diverse, and shows how sound can be localized even in the absence of one half of the brain. Their research is described online in the journal eLife.
“Progress has been made in laboratory settings to understand how sound localization works, but in the real world people hear a wide range of sounds with background noise and reflections,” said Dan F. M. Goodman, lead author and post-doctoral fellow in the Eaton-Peabody Laboratories at Mass. Eye and Ear, Harvard Medical School. “Theories based on more realistic environments are important. The theme of the paper is that previous theories about this have been too idealized, and if you use more realistic data, you come to an entirely different conclusion.”
“Two theories have come to dominate our understanding of how the brain localizes sounds: the peak coding theory (which says that only the most strongly responding brain cells are needed) and the hemispheric coding theory (which says that only the average response of the cells in each hemisphere of the brain is needed),” Goodman said. “What we’ve shown in this study is that neither of these theories can be right, and that the evidence they presented only works because their experiments used unnatural/idealized sounds. If you use more realistic, natural sounds, then they both do very badly at explaining the data.”
The researchers showed that to do well with realistic sounds, the brain needs to use the whole pattern of neural responses, not just the strongest or the average response. They also demonstrated two other key points. First, it has long been known that the responses of different auditory neurons are very diverse, but the hemispheric coding theory made no use of this diversity.
“We showed that the diversity is essential to the brain’s ability to localize sounds; if you make all the responses similar, there isn’t enough information. This was not appreciated before, because with unnatural/idealized sounds you don’t see the difference,” Goodman said.
Second, previous theories are inconsistent with the well-known fact that people who lose one half of the brain can still localize sounds, but only sounds on the opposite side (i.e., someone who loses the left half of the brain can still localize sounds coming from the right), he added.
“We can explain why this is the case with our new theory,” Goodman said.
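As a toy illustration of the three readout strategies, and emphatically not the models or data from the eLife paper, the Python sketch below simulates a diverse population of ITD-tuned neurons and decodes the same noisy response three ways: by the peak (best neuron), by the hemispheric average, and by matching the full response pattern. All tuning parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# A diverse toy population: Gaussian ITD tuning curves with random
# preferred ITDs and widths (in seconds). Purely illustrative numbers.
N = 200
ITD_MAX = 0.5e-3  # +/- 0.5 ms, roughly the human range
preferred = rng.uniform(-ITD_MAX, ITD_MAX, N)
width = rng.uniform(0.1e-3, 0.4e-3, N)

def population_response(itd, noise=0.1):
    clean = np.exp(-0.5 * ((itd - preferred) / width) ** 2)
    return clean + noise * rng.standard_normal(N)

def decode_peak(r):
    # "Peak coding": trust only the most strongly responding neuron.
    return preferred[np.argmax(r)]

def decode_hemispheric(r):
    # "Hemispheric coding": compare the average activity of left- vs
    # right-preferring cells and map that one number onto the ITD axis.
    left, right = r[preferred < 0].mean(), r[preferred >= 0].mean()
    return ITD_MAX * (right - left) / (right + left)

def decode_full_pattern(r):
    # Full-pattern readout: match the whole response vector against
    # noise-free templates for a grid of candidate ITDs.
    grid = np.linspace(-ITD_MAX, ITD_MAX, 201)
    templates = np.exp(-0.5 * ((grid[:, None] - preferred) / width) ** 2)
    return grid[np.argmin(((templates - r) ** 2).sum(axis=1))]

true_itd = 0.2e-3
r = population_response(true_itd)
for name, decode in (("peak", decode_peak),
                     ("hemispheric", decode_hemispheric),
                     ("full pattern", decode_full_pattern)):
    print(f"{name:>12}: {decode(r) * 1e3:+.3f} ms "
          f"(true {true_itd * 1e3:+.3f} ms)")
```

With noisy responses and diverse tuning, the full-pattern readout tends to land closest to the true ITD, which is the intuition behind the authors’ argument.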
(Source: masseyeandear.org)

We live in a world of sounds, full of beautiful music, birds chirping, and the voices of our friends. It’s a rich cacophony, with blaring beeps, accented alarms, and knock-knock jokes. The sound of a door opening can alert us to a friend’s arrival, and a door slamming can alert us to an impending argument.
HEARBO (HEAR-ing roBOt) is a robot developed at Honda Research Institute–Japan (HRI-JP), and its job is to understand this world of sound, in a field called Computational Auditory Scene Analysis.
Super-sensory hearing?
The discovery of a previously unidentified hearing organ in the ears of South American bushcrickets could pave the way for technological advances in bio-inspired acoustic sensor research, including medical imaging and hearing aid development.
Researchers from the University of Bristol and the University of Lincoln discovered the missing piece of the jigsaw in understanding how energy is transformed in the ‘unconventional’ ears of bushcrickets (or katydids).
Bushcrickets have four tympana (or eardrums), two on each foreleg, but until now it has been unknown how the various organs connect in order for the insect to hear. As the tympana (membranes that vibrate in reaction to sound) do not directly connect with the mechanoreceptors (sensory receptors), it was a mystery how sound was transmitted from the air to the mechano-sensory cells.
Sponsored by the Human Frontiers Science Program (HFSP), the research was developed in the lab of Professor Daniel Robert, a Royal Society Fellow at the University of Bristol. Dr Fernando Montealegre-Z, now at the University of Lincoln’s School of Life Sciences, discovered the previously unidentified organ while investigating how the tubing system in the bushcricket ear transports sound. The research focused on the bushcricket Copiphora gorgonensis, a neotropical species from the National Park Gorgona in Colombia, an island in the Pacific. Results suggest that the bushcricket ear operates in a manner analogous to that of mammals. A paper detailing this remarkable breakthrough is published in the journal Science.
Activating the ‘mind’s eye’ — sounds, instead of eyesight, can be alternative vision
Common wisdom has it that if the visual cortex in the brain is deprived of visual information in early infancy, it may never properly develop its functional specialization, making sight restoration later in life almost impossible.
Scientists at the Hebrew University of Jerusalem and in France have now shown that blind people – using specialized photographic and sound equipment – can actually “see” and describe objects and even identify letters and words.
The new study, by a team of researchers led by Prof. Amir Amedi of the Edmond and Lily Safra Center for Brain Sciences and the Institute for Medical Research Israel-Canada at the Hebrew University and Ph.D. candidate Ella Striem-Amit, demonstrates how this is possible through a unique training paradigm that uses sensory substitution devices (SSDs).
SSDs are non-invasive sensory aids that provide visual information to the blind via their existing senses. For example, using a visual-to-auditory SSD in a clinical or everyday setting, users wear a miniature camera connected to a small computer (or smart phone) and stereo headphones.
The images are converted into “soundscapes” using a predictable algorithm, allowing the user to listen to and then interpret the visual information coming from the camera. Blind participants using this device reached a level of visual acuity that technically surpasses the World Health Organization (WHO) criterion for blindness, as published in a previous study by the same group.
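The group’s SSD implements its own specific algorithm, but the general flavor of a visual-to-auditory mapping can be sketched in a few lines of Python. The toy version below follows the column-scan scheme popularized by devices such as The vOICe: image columns play out from left to right, row height maps to pitch, and pixel brightness maps to loudness. All constants here are illustrative assumptions, not the parameters of the device used in the study.

```python
import numpy as np

SAMPLE_RATE = 22050               # Hz (assumed)
SWEEP_SECONDS = 1.0               # one left-to-right scan of the image (assumed)
FREQ_LO, FREQ_HI = 500.0, 5000.0  # pitch range of the mapping (assumed)

def image_to_soundscape(image: np.ndarray) -> np.ndarray:
    """Turn a grayscale image (rows x cols, values in 0..1) into audio:
    columns play left to right, row height maps to pitch,
    and pixel brightness maps to loudness."""
    rows, cols = image.shape
    freqs = np.geomspace(FREQ_HI, FREQ_LO, rows)  # top row = highest pitch
    samples_per_col = int(SAMPLE_RATE * SWEEP_SECONDS / cols)
    t = np.arange(samples_per_col) / SAMPLE_RATE
    chunks = []
    for c in range(cols):
        # Each bright pixel in the column contributes a sine at its row's pitch.
        tones = image[:, c, None] * np.sin(2 * np.pi * freqs[:, None] * t)
        chunks.append(tones.sum(axis=0) / rows)  # mix and normalize the column
    return np.concatenate(chunks)

# A bright diagonal line becomes a falling pitch sweep under this mapping.
img = np.eye(16)
audio = image_to_soundscape(img)
print(f"{audio.size} samples, peak amplitude {np.abs(audio).max():.3f}")
```

With training, users learn to decode such sweeps and textures back into shapes, which is the core idea behind “seeing” with sound.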
Music in Our Ears: The Biological Bases of Musical Timbre Perception
Timbre is the attribute of sound that allows humans and other animals to distinguish among different sound sources. Studies based on psychophysical judgments of musical timbre, ecological analyses of sound’s physical characteristics as well as machine learning approaches have all suggested that timbre is a multifaceted attribute that invokes both spectral and temporal sound features. Here, we explored the neural underpinnings of musical timbre. We used a neuro-computational framework based on spectro-temporal receptive fields, recorded from over a thousand neurons in the mammalian primary auditory cortex as well as from simulated cortical neurons, augmented with a nonlinear classifier. The model was able to perform robust instrument classification irrespective of pitch and playing style, with an accuracy of 98.7%. Using the same front end, the model was also able to reproduce perceptual distance judgments between timbres as perceived by human listeners. The study demonstrates that joint spectro-temporal features, such as those observed in the mammalian primary auditory cortex, are critical to provide the rich-enough representation necessary to account for perceptual judgments of timbre by human listeners, as well as recognition of musical instruments.
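The model described above uses receptive fields recorded from real cortical neurons, which obviously cannot be reproduced here. As a loose, hypothetical stand-in, the Python sketch below approximates spectro-temporal receptive fields with 2-D Gabor filters applied to a spectrogram and feeds the pooled filter outputs to an off-the-shelf classifier. The random data and the SVM are placeholders, not the paper’s method or results.

```python
import numpy as np
from scipy.signal import convolve2d
from sklearn.svm import SVC

def gabor_strf(rate: float, scale: float) -> np.ndarray:
    """A 2-D Gabor patch standing in for a spectro-temporal receptive field:
    `scale` sets spectral modulation, `rate` sets temporal modulation."""
    f, t = np.mgrid[-1:1:16j, -1:1:16j]  # (frequency, time) axes
    envelope = np.exp(-(f ** 2 + t ** 2) / 0.5)
    return envelope * np.cos(2 * np.pi * (scale * f + rate * t))

# A small filter bank covering a few rate/scale combinations.
FILTERS = [gabor_strf(r, s) for r in (0.5, 1.0, 2.0) for s in (0.5, 1.0, 2.0)]

def strf_features(spectrogram: np.ndarray) -> np.ndarray:
    """One feature per filter: mean rectified filter output over the input."""
    return np.array([np.abs(convolve2d(spectrogram, k, mode="same")).mean()
                     for k in FILTERS])

# Placeholder data: random "spectrograms" and labels, just to show the
# pipeline's shape. A real experiment would use instrument recordings.
rng = np.random.default_rng(1)
spectrograms = [rng.random((32, 64)) for _ in range(20)]  # freq x time
labels = rng.integers(0, 2, 20)

X = np.stack([strf_features(s) for s in spectrograms])
clf = SVC(kernel="rbf").fit(X, labels)  # stand-in for the paper's classifier
print("training accuracy on toy data:", clf.score(X, labels))
```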
Why crying babies are so hard to ignore: Study suggests the sound of a baby crying activates primitive parts of the brain involved in fight-or-flight responses
Ever wondered why it is so difficult to ignore the sound of a crying baby when you are trapped aboard a train or aeroplane? Scientists have found that our brains are hard-wired to respond strongly to the sound, making us more attentive and priming our bodies to help whenever we hear it – even if we’re not the baby’s parents.
"The sound of a baby cry captures your attention in a way that few other sounds in the environment generally do," said Katie Young of the University of Oxford, who led the study looking at how the brain processes a baby’s cries.
She scanned the brains of 28 people while they listened to the sounds of babies and adults crying, as well as animal distress sounds, including cats meowing and dogs whining.
Using a very fast scanning technique called magnetoencephalography, Young found an early burst of activity in the brain in response to the sound of a baby’s cry, followed by an intense reaction after about 100 milliseconds. The reaction to other sounds was not as intense. “This was primarily in two regions of the brain,” said Young. “One is the middle temporal gyrus, an area previously implicated in emotional processing and speech; the other is the orbitofrontal cortex, an area well known for its role in reward and emotion processing.”
Young and her colleague, Christine Parsons, presented their findings this week at the annual meeting of the Society for Neuroscience in New Orleans.
(Source: exploratorium.edu)