Posts tagged acoustics

Researchers uncover why there is a mapping between pitch and elevation
Have you ever wondered why natural languages almost invariably use the same spatial attributes – high versus low – to describe auditory pitch? Or why, throughout the history of musical notation, high notes have been placed high on the staff? According to a team of neuroscientists from Bielefeld University, the Max Planck Institute for Biological Cybernetics in Tübingen and the Bernstein Center Tübingen, high-pitched sounds feel ‘high’ because, in our daily lives, sounds coming from high elevations are indeed more likely to be higher in pitch. The study has just appeared in the journal PNAS.
Dr. Cesare Parise and colleagues set out to investigate the origins of the mapping between sound frequency and spatial elevation by combining three separate lines of evidence. First, they recorded and analyzed a large sample of sounds from the natural environment and found that high-frequency sounds are more likely to originate from high positions in space. Next, they analyzed the filtering of the human outer ear – the pinna – and found that, due to its convoluted shape, sounds coming from high positions in space are filtered in such a way that more energy remains at higher frequencies. Finally, in a behavioural experiment, they asked listeners to localize sounds of different frequencies and found that high-frequency sounds were systematically perceived as coming from higher positions in space.
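The first line of evidence, a statistical link between a sound's frequency and the elevation of its source, can be sketched in a few lines. The (elevation, frequency) pairs below are hypothetical placeholders for illustration only, not the study's recordings; a real analysis would extract them from a large corpus of field recordings.

```python
# Sketch: do sounds from higher elevations tend to have higher frequencies?
# The data points are made up for illustration; they are not from the study.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical source elevations (degrees) and dominant frequencies (Hz)
elevations = [-30, -15, 0, 10, 20, 35, 45, 60]
frequencies = [220, 310, 450, 500, 700, 950, 1200, 1600]

r = pearson(elevations, frequencies)
print(f"frequency-elevation correlation r = {r:.2f}")
```

A strongly positive r in real recordings would mean exactly what the researchers report: the higher the source, the higher the typical pitch.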
The results from these three lines of evidence were highly convergent, suggesting that phenomena as diverse as the acoustics of the human ear, the universal use of spatial terms to describe pitch, and the placement of high notes higher in musical notation ultimately reflect the adaptation of human hearing to the statistics of natural auditory scenes. ‘These results are especially fascinating, because they do not just explain the origin of the mapping between frequency and elevation,’ says Parise, ‘they also suggest that the very shape of the human ear might have evolved to mirror the acoustic properties of the natural environment. What is more, these findings are highly applicable and provide valuable guidelines for using pitch to develop more effective 3D audio technologies, such as sonification-based sensory substitution devices, sensory prostheses, and more immersive virtual auditory environments.’
The mapping between pitch and elevation has often been considered to be metaphorical, and cross-sensory correspondences have been theorized to be the basis for language development. The present findings demonstrate that, at least in the case of the mapping between pitch and elevation, such a metaphorical mapping is indeed embodied and based on the statistics of the environment, hence raising the intriguing hypothesis that language itself might have been influenced by a set of statistical mappings between naturally occurring sensory signals.
Besides the mapping between pitch and elevation, human perception, cognition, and action are laced with seemingly arbitrary correspondences, such as the association of yellowish-red colors with warm temperatures, or of sour foods with a sharp taste. This study suggests that many of these seemingly arbitrary mappings might in fact reflect statistical regularities found in the natural environment.
The Science Behind ‘Beatboxing’
Acoustical analysis reveals the anatomy behind the fascinating array of sounds people can make.
Using the mouth, lips, tongue and voice to generate sounds that one might never expect to come from the human body is the specialty of the artists known as beatboxers. Now scientists have used scanners to peer into a beatboxer as he performed his craft to reveal the secrets of this mysterious art.
The human voice has long been used to generate percussion effects in many cultures, including North American scat singing, Celtic lilting and diddling, and Chinese kouji performances. In southern Indian classical music, konnakol is the percussive speech of the solkattu rhythmic form. In contemporary pop music, the relatively young vocal art form of beatboxing is an element of hip-hop culture.
Until now, the phonetics of these percussion effects had not been examined in detail. For instance, it was unknown to what extent beatboxers produce sounds already used within human language.
To learn more about beatboxing, scientists used real-time MRI to study a 27-year-old male beatboxer as he performed. This gave researchers “an opportunity to study the sounds people produce in much greater detail than has previously been possible,” said Shrikanth Narayanan, a speech and audio engineer at the University of Southern California in Los Angeles. “The overarching goals of our work drive at larger questions related to the nature of sound production and mental processing in human communication, and a study like this is a small part of the larger puzzle.”
The investigators made 40 recordings, each lasting 20-40 seconds, as the beatboxer produced all the effects in his repertoire: individual sounds, composite beats, rapped lyrics, sung lyrics and freestyle combinations of these elements. He categorized 17 distinct percussion sounds into five instrumental classes — kick drums, rim shots, snare drums, hi-hats, and cymbals. The artist demonstrated his repertoire at several tempos, ranging from roughly 88 to 104 beats per minute.
"We were astonished by the complex elegance of the vocal movements and the sounds being created in beatboxing, which in itself is an amazing artistic display," Narayanan said. "This incredible vocal instrument and its many capabilities continue to amaze us, from the intricate choreography of the ‘dance of the tongue’ to the complex aerodynamics that work together to create a rich tapestry of sounds that encode not only meaning but also a wide range of emotions."
"It is absolutely amazing that a person can make these sounds — that a person has such control over the timing of various parts of the speech apparatus," said phonetician Donna Erickson at the Showa University of Music and Sophia University, both in Japan, who did not participate in this study. "It is very exciting to see how far technology has come — that we can see these movements in real time. It gives us a much better understanding of how the various parts of our speech anatomy work."