Posts tagged music

Finnish and Danish researchers have developed a new method that performs decoding, or brain-reading, during continuous listening to real music. Based on recorded brain responses, the method predicts how certain features related to tone color and rhythm of the music change over time, and recognizes which piece of music is being listened to. The method also allows pinpointing the areas in the brain that are most crucial for the processing of music. The study was published in the journal NeuroImage.

Using functional magnetic resonance imaging (fMRI), the research team at the Finnish Centre of Excellence in Interdisciplinary Music Research at the Universities of Jyväskylä and Helsinki, and at the Center for Functionally Integrative Neuroscience at Aarhus University, Denmark, recorded the brain responses of participants while they listened to a 16-minute excerpt of the Beatles' album Abbey Road. They then used computational algorithms to extract a collection of musical features from the recording, and employed machine-learning methods to train a computer model that predicts how those features change over time. Finally, they developed a classifier that predicts which part of the music the participant was listening to at each moment.
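The pipeline described above (extracting features, training a model to predict feature time courses from brain responses, then identifying the passage being heard) can be sketched on synthetic data. The voxel count, ridge regression, and nearest-neighbour matching below are illustrative assumptions, not the study's actual methods:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (not the study's data): 400 fMRI time points,
# 50 voxels, 3 musical features (e.g. brightness, pulse clarity).
n_time, n_vox, n_feat = 400, 50, 3
W_true = rng.normal(size=(n_vox, n_feat))
brain = rng.normal(size=(n_time, n_vox))
feats = brain @ W_true + 0.5 * rng.normal(size=(n_time, n_feat))

train, test = slice(0, 300), slice(300, 400)

# Step 1: ridge regression predicting feature time courses from brain data.
alpha = 1.0
X, Y = brain[train], feats[train]
W = np.linalg.solve(X.T @ X + alpha * np.eye(n_vox), X.T @ Y)
pred = brain[test] @ W

# Step 2: identify which moment of the piece was "heard" by matching each
# predicted feature vector to the nearest true one (1-nearest neighbour).
d = np.linalg.norm(pred[:, None, :] - feats[test][None, :, :], axis=2)
accuracy = (d.argmin(axis=1) == np.arange(100)).mean()
print(f"identification accuracy: {accuracy:.2f}")
```

In the real study the features came from the audio and the predictors from recorded fMRI; here both are simulated so the script runs standalone.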
The researchers found that most of the musical features included in the study could be reliably predicted from the brain data, and that the piece being listened to could be identified significantly better than chance, although prediction accuracy varied considerably between participants. An interesting finding was that areas outside the auditory cortex, including motor, limbic, and frontal areas, had to be included in the models to obtain reliable predictions, thus providing evidence for the important role of these areas in the processing of musical features.
"We believe that decoding provides a method that complements other existing methods to obtain more reliable information about the complex processing of music in the brain", says Professor Petri Toiviainen from the University of Jyväskylä. "Our results provide additional evidence for the important involvement of emotional and motor areas in music processing."
(Source: jyu.fi)
Children get plenty of benefits from music lessons. Learning to play instruments can fuel their creativity, and practicing can teach much-needed focus and discipline. And the payoff, whether in learning a new song or just mastering a chord, often boosts self-esteem.
But Harvard researchers now say that one oft-cited benefit — that studying music improves intelligence — is a myth.
Though it has been embraced by everyone from advocates for arts education to parents hoping to encourage their kids to stick with piano lessons, a pair of studies conducted by Samuel Mehr, a Harvard Graduate School of Education (HGSE) doctoral student working in the lab of Elizabeth Spelke, the Marshall L. Berkman Professor of Psychology, found that music training had no effect on the cognitive abilities of young children. The studies are described in a Dec. 11 paper published in the open-access journal PLoS One.
“More than 80 percent of American adults think that music improves children’s grades or intelligence,” Mehr said. “Even in the scientific community, there’s a general belief that music is important for these extrinsic reasons. But there is very little evidence supporting the idea that music classes enhance children’s cognitive development.”
The notion that music training can make someone smarter, Mehr said, can largely be traced to a single study published in Nature. In it, researchers identified what they called the “Mozart effect.” After listening to music, test subjects performed better on spatial tasks.
Though the study was later debunked, the notion that simply listening to music could make someone smarter became firmly embedded in the public imagination, and spurred a host of follow-up studies, including several that focused on the cognitive benefits of music lessons.
Though dozens of studies have explored whether and how music and cognitive skills might be connected, when Mehr and colleagues reviewed the literature they found only five studies that used randomized trials, the gold standard for determining causal effects of educational interventions on child development. Of the five, only one showed an unambiguously positive effect, and it was so small — just a 2.7 point increase in IQ after a year of music lessons — that it was barely enough to be statistically significant.
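The reason a 2.7-point IQ gain can be only marginally significant is sample size: IQ scores have a standard deviation of roughly 15, so the standard error of a difference between group means is large unless the groups are big. A back-of-the-envelope sketch (the group sizes below are hypothetical, not those of the reviewed trial):

```python
import math

# Illustrative only: how large a sample is needed before a 2.7-point IQ
# difference (IQ sd ~ 15) becomes statistically detectable.
effect, sd = 2.7, 15.0

for n_per_group in (30, 100, 300, 1000):
    se = sd * math.sqrt(2.0 / n_per_group)   # SE of the difference in means
    t = effect / se
    # |t| > ~1.96 is roughly the 5% two-sided threshold for large samples
    verdict = "significant" if abs(t) > 1.96 else "not significant"
    print(f"n={n_per_group:4d}: t = {t:.2f} -> {verdict}")
```

With a few dozen children per group, as in many music-training studies, a 2.7-point difference falls well below the detection threshold.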
“The experimental work on this question is very much in its infancy, but the few published studies on the topic show little evidence for ‘music makes you smarter,’” Mehr said.
To explore the connection between music and cognition, Mehr and his colleagues recruited 29 parents and their 4-year-old children from the Cambridge area. After initial vocabulary tests for the children and music aptitude tests for the parents, each pair was randomly assigned to one of two classes: one offering music training, the other focused on visual arts.
“We wanted to test the effects of the type of music education that actually happens in the real world, and we wanted to study the effect in young children, so we implemented a parent-child music enrichment program with preschoolers,” Mehr said. “The goal is to encourage musical play between parents and children in a classroom environment, which gives parents a strong repertoire of musical activities they can continue to use at home with their kids.”
Among the key changes Mehr and his colleagues made from earlier studies were controlling for the effect of different teachers — Mehr taught both the music and visual arts classes — and using assessment tools designed to test four specific areas of cognition: vocabulary, mathematics, and two spatial tasks.
“Instead of using something general, like an IQ test, we tested four specific domains of cognition,” Mehr said. “If there really is an effect of music training on children’s cognition, we should be able to better detect it here than in previous studies, because these tests are more sensitive than tests of general intelligence.”
The study’s results, however, showed no evidence for cognitive benefits of music training.
While the groups performed comparably on vocabulary and number-estimation tasks, the assessments showed that children who received music training performed slightly better at one spatial task, while those who received visual arts training performed better at the other.
“Study One was very small. We only had 15 children in the music group, and 14 in the visual arts,” Mehr said. “The effects were tiny, and their statistical significance was marginal at best. So we attempted to replicate the study, something that hasn’t been done in any of the previous work.”
To replicate the effect, Mehr and colleagues designed a second study that recruited 45 parents and children, half of whom received music training, and half of whom received no training.
Just as in the first study, Mehr said, there was no evidence that music training offered any cognitive benefit. Even when the results of both studies were pooled to allow researchers to compare the effect of music training, visual arts training, and no training, there was no sign that any group outperformed the others.
“There were slight differences in performance between the groups, but none were large enough to be statistically significant,” Mehr said. “Even when we used the finest-grained statistical analyses available to us, the effects just weren’t there.”
While the results suggest studying music may not be a shortcut to educational success, Mehr said there is still substantial value in music education.
“There’s a compelling case to be made for teaching music that has nothing to do with extrinsic benefits,” he said. “We don’t teach kids Shakespeare because we think it will help them do better on the SATs. We do it because we believe Shakespeare is important.
“Music is an ancient, uniquely human activity. The oldest flutes that have been dug up are 40,000 years old, and human song long preceded that,” he said. “Every single culture in the world has music, including music for children. Music says something about what it means to be human, and it would be crazy not to teach this to our children.”

Music brings memories back to the brain-injured
In the first study of its kind, two researchers have used popular music to help severely brain-injured patients recall personal memories. Amee Baird and Séverine Samson outline the results and conclusions of their pioneering research in the recent issue of the journal Neuropsychological Rehabilitation.
Although their study covered a small number of cases, it’s the very first to examine ‘music-evoked autobiographical memories’ (MEAMs) in patients with acquired brain injuries (ABIs), rather than those who are healthy or suffer from Alzheimer’s Disease.
In their study, Baird and Samson played extracts from ‘Billboard Hot 100’ number-one songs in random order to five patients. The songs, taken from the whole of the patient’s lifespan from age five, were also played to five control subjects with no brain injury. All were asked to record how familiar they were with a given song, whether they liked it, and what memories it invoked.
Baird and Samson found that the frequency of recorded MEAMs was similar for patients (38%–71%) and controls (48%–71%). Only one of the five ABI patients recorded no MEAMs; in fact, the highest number of MEAMs in the whole group was recorded by one of the ABI patients. Across all those studied, the majority of MEAMs were of a person, people or a life period, and were typically positive. Songs that evoked a memory were rated as more familiar and more liked than those that did not.
Pointing to music's potential as a tool for helping patients regain their memories, Baird and Samson conclude: “Music was more efficient at evoking autobiographical memories than verbal prompts of the Autobiographical Memory Interview (AMI) across each life period, with a higher percentage of MEAMs for each life period compared with AMI scores.”
“The findings suggest that music is an effective stimulus for eliciting autobiographical memories and may be beneficial in the rehabilitation of autobiographical amnesia, but only in patients without a fundamental deficit in autobiographical recall memory and intact pitch perception.”
The authors hope that their ground-breaking work will encourage others to carry out further studies on MEAMs in larger ABI populations. They also call for further studies of both healthy people and those with other neurological conditions to learn more about the clear relationship between memory, music and emotion; they hope that one day we might truly “understand the mechanisms underlying the unique memory enhancing effect of music”.
New findings show that extensive musical training affects the structure and function of different brain regions, how those regions communicate during the creation of music, and how the brain interprets and integrates sensory information. The findings were presented at Neuroscience 2013, the annual meeting of the Society for Neuroscience and the world’s largest source of emerging news about brain science and health.
These insights suggest potential new roles for musical training including fostering plasticity in the brain, an alternative tool in education, and treating a range of learning disabilities.
Today’s new findings show that:
Some of the brain changes that occur with musical training reflect the automation of task performance (much as one would recite a multiplication table) and the acquisition of highly specific sensorimotor and cognitive skills required for various aspects of musical expertise.
“Playing a musical instrument is a multisensory and motor experience that creates emotions and motions — from finger tapping to dancing — and engages pleasure and reward systems in the brain. It has the potential to change brain function and structure when done over a long period of time,” said press conference moderator Gottfried Schlaug, MD, PhD, of Harvard Medical School/Beth Israel Deaconess Medical Center, an expert on music, neuroimaging and brain plasticity. “As today’s findings show, intense musical training generates new processes within the brain, at different stages of life, and with a range of impacts on creativity, cognition, and learning.”
Monkeys “understand” rules underlying language musicality
Many of us have mixed feelings when remembering painful lessons in German or Latin grammar in school. Languages feature a large number of complex rules and patterns: using them correctly makes the difference between something which “sounds good”, and something which does not. However, cognitive biologists at the University of Vienna have shown that sensitivity to very simple structural and melodic patterns does not require much learning, or even being human: South American squirrel monkeys can do it, too.
Language and music are structured systems, featuring particular relationships between syllables, words and musical notes. For instance, implicit knowledge of the musical and grammatical patterns of our language makes us notice right away whether a speaker is native or not. Similarly, the perceived musicality of some languages results from dependency relations between vowels within a word. In Turkish, for example, the last syllable in words like “kaplanlar” or “güller” must “harmonize” with the previous vowels. (Try it yourself: “güllar” requires more movement and does not sound as good as “güller”.)
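The vowel-harmony dependency can be made concrete with a toy checker. The two vowel classes below follow the standard Turkish front/back split, but the rule is deliberately simplified for illustration (real Turkish harmony also involves vowel rounding):

```python
# A toy checker for Turkish-style front/back vowel harmony: the vowels of
# a suffix must agree in backness with the last vowel of the stem.
FRONT = set("eiöü")
BACK = set("aıou")

def harmonizes(stem: str, suffix: str) -> bool:
    stem_vowels = [c for c in stem.lower() if c in FRONT | BACK]
    if not stem_vowels:
        return True
    stem_class = FRONT if stem_vowels[-1] in FRONT else BACK
    return all(c in stem_class for c in suffix.lower() if c in FRONT | BACK)

print(harmonizes("gül", "ler"))      # True: front-vowel stem, front suffix
print(harmonizes("gül", "lar"))      # False: back-vowel suffix clashes
print(harmonizes("kaplan", "lar"))   # True: back-vowel stem, back suffix
```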
Similar “dependencies” between words, syllables or musical notes can be found in languages and musical cultures around the world. The biological question is whether the ability to process dependencies evolved in human cognition along with human language, or is rather a more general skill, also present in other animal species who lack language.
Andrea Ravignani, a PhD candidate at the Department of Cognitive Biology at the University of Vienna, and his colleagues looked for this “dependency detection” ability in squirrel monkeys, small arboreal primates living in Central and South America. Inspired by the monkeys’ natural calls and hearing predispositions, the researchers designed a sort of “musical system” for monkeys. These “musical patterns” had overall acoustic features similar to monkeys’ calls, while their structural features mimicked syntactic or phonological patterns like those found in Turkish and many human languages.
Monkeys were first presented with “phrases” containing structural dependencies, and later tested using stimuli either with or without dependencies. Their reactions were measured using the “violation of expectations” paradigm. “Show up at work in your pyjamas, people will turn around and stare at you, while at a slumber party nobody will notice”, explains Ravignani: In other words, one looks longer at something that breaks the “standard” pattern. “This is not about absolute perception, rather how something is categorized and contrasted within a broader system.” Using this paradigm, the scientists found that monkeys reacted more to the “ungrammatical” patterns, demonstrating perception of dependencies. “This kind of experiment is usually done by presenting monkeys with human speech: Designing species-specific, music-like stimuli may have helped the squirrel monkeys’ perception”, argues primatologist and co-author Ruth Sonnweber.
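The logic of the paradigm can be sketched as a tiny stimulus generator plus a dependency check. The token classes and the specific opening/closing dependency are illustrative assumptions, not the study's actual stimuli:

```python
import random

# Sketch of a structural dependency: the final tone of a pattern must come
# from the class matching the opening tone, whatever fillers lie between.
random.seed(1)
CLASS_A = ["a1", "a2", "a3"]   # e.g. low-pitched, call-like tones
CLASS_B = ["b1", "b2", "b3"]   # e.g. high-pitched tones
FILLER = ["x1", "x2", "x3"]

def make_pattern(grammatical: bool) -> list:
    opener = random.choice(CLASS_A)
    middle = random.choices(FILLER, k=random.randint(1, 3))
    closer_pool = CLASS_A if grammatical else CLASS_B
    return [opener, *middle, random.choice(closer_pool)]

def violates_dependency(pattern: list) -> bool:
    # The "expectation": patterns opening with an A-tone must close with one.
    return pattern[0] in CLASS_A and pattern[-1] not in CLASS_A

ok = make_pattern(grammatical=True)
bad = make_pattern(grammatical=False)
print(ok, violates_dependency(ok))    # ... False
print(bad, violates_dependency(bad))  # ... True
```

In the experiment, "looking longer" at patterns flagged by a check like `violates_dependency` is the behavioural evidence that the monkeys encode the rule.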
"Our ancestors may have already acquired this simple dependency-detection ability some 30 million years ago, and modern humans would thus share it with many other living primates. Mastering basic phonological patterns and syntactic rules is not an issue for squirrel monkeys: the bar for human uniqueness has to be raised", says Ravignani: "This is only a tiny step: we will keep working hard to unveil the evolutionary origins and potential connections between language and music".

Repetition in Music Pulls Us In and Pulls Us Together
In On Repeat: How Music Plays the Mind, Elizabeth Hellmuth Margulis of the University of Arkansas explores the psychology of repetition in music, across time, style and cultures. Hers is the first in-depth study of repetitiveness in music, which she calls “at once entirely ordinary and entirely mysterious” and “so common as to seem almost invisible.”
Repetition in music can be a motif repeated throughout a composition or a favorite song played again and again. It can be the annoying earworm burrowed into the brain that just won’t go away.
Music, she writes, “is a fundamentally human capacity, present in all known cultures, and important to intellectual, emotional and social experience.” And repetition is a key element in music, one that both pulls us into the experience and pulls us together as people.
In her research, Margulis drew on a range of disciplines, including music theory, psycholinguistics, neuroscience and cognitive psychology, to examine how listeners perceive and respond to repetition. She worked with ethnomusicologists to understand the place of music and its repetitive features in cultures around the world.
On Repeat is published by Oxford University Press. The Kindle version is available already, and the hardback publication will ship on Nov. 11, 2013.
A repeated musical motif can build pleasurable expectations in the listener, pulling them into the experience of the piece of music.
“Repetition makes it possible for us to experience a sense of expanded present, characterized not by the explicit knowledge that x will occur at time point y, but rather a déjà-vu-like sense of orientation and involvement,” Margulis writes.
Through repeated playing, a work of music develops an important social and biological role in creating cohesion between individuals and groups. Margulis points to children in nursery school singing a cleanup song each day or adults singing Auld Lang Syne at midnight on New Year’s Eve.
“Repeatability is how songs come to be the property of a group or a community instead of an individual,” she writes, “how they come to belong to a tradition, rather than to a moment.”
On Repeat offers new insights into the relationship between music and language, the nature of musical pleasure and the cognitive science of repetition in music. While the book will be useful to scholars and students, it is written for specialist and non-specialist alike.
Just a few years of early musical training benefits the brain later in life
Older adults who took music lessons as children but haven’t actively played an instrument in decades have a faster brain response to a speech sound than individuals who never played an instrument, according to a study appearing November 6 in the Journal of Neuroscience. The finding suggests early musical training has a lasting, positive effect on how the brain processes sound.
As people grow older, they often experience changes in the brain that compromise hearing. For instance, the brains of older adults show a slower response to fast-changing sounds, which is important for interpreting speech. However, previous studies show such age-related declines are not inevitable: recent studies of musicians suggest lifelong musical training may offset these and other cognitive declines.
In the current study, Nina Kraus, PhD, and others at Northwestern University explored whether limited musical training early in life is associated with changes in the way the brain responds to sound decades later. They found that the more years study participants spent playing instruments as youth, the faster their brains responded to a speech sound.
"This study suggests the importance of music education for children today and for healthy aging decades from now," Kraus said. "The fact that musical training in childhood affected the timing of the response to speech in older adults in our study is especially telling because neural timing is the first to go in the aging adult," she added.
For the study, 44 healthy adults, ages 55-76, listened to a synthesized speech syllable (“da”) while researchers measured electrical activity in the auditory brainstem. This region of the brain processes sound and is a hub for cognitive, sensory, and reward information. The researchers discovered that, despite none of the study participants having played an instrument in nearly 40 years, the participants who completed 4-14 years of music training early in life had the fastest response to the speech sound (on the order of a millisecond faster than those without music training).
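How a latency difference of a millisecond can be measured at all deserves a sketch: the delay of an evoked response can be estimated as the lag at which the recording best cross-correlates with the stimulus. The click-like waveform below is a synthetic stand-in, not the study's "da" syllable:

```python
import numpy as np

rng = np.random.default_rng(42)
fs = 10_000                        # 10 kHz sampling: 0.1 ms per sample

# A brief broadband "click" standing in for the syllable onset.
stimulus = np.zeros(500)
stimulus[:50] = rng.normal(size=50)

# Simulated evoked response: the stimulus delayed by 6.8 ms, plus noise.
delay_samples = 68
response = np.zeros_like(stimulus)
response[delay_samples:] = stimulus[:-delay_samples]
response += 0.2 * rng.normal(size=response.size)

# Cross-correlate and read off the lag of the peak.
xcorr = np.correlate(response, stimulus, mode="full")
lag = int(xcorr.argmax()) - (len(stimulus) - 1)
print(f"estimated latency: {lag / fs * 1000:.1f} ms")
```

A one-millisecond group difference corresponds to shifting this peak by just ten samples at this sampling rate, which is why averaging over many trials is essential in real recordings.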
"Being a millisecond faster may not seem like much, but the brain is very sensitive to timing and a millisecond compounded over millions of neurons can make a real difference in the lives of older adults," explained Michael Kilgard, PhD, who studies how the brain processes sound at the University of Texas at Dallas and was not involved in this study. "These findings confirm that the investments that we make in our brains early in life continue to pay dividends years later," he added.
How and when the auditory system registers complex auditory-visual synchrony
Imagine the brain’s delight when experiencing the sounds of Beethoven’s “Moonlight Sonata” while simultaneously taking in a light show produced by a visualizer.
A new Northwestern University study did much more than that.
To understand how the brain responds to highly complex auditory-visual stimuli like music and moving images, the study tracked parts of the auditory system involved in the perceptual processing of “Moonlight Sonata” while it was synchronized with the light show made by the iTunes Jelly visualizer.
The study shows how and when the auditory system encodes auditory-visual synchrony between complex and changing sounds and images.
Much of the related research looks at how the brain processes simple sounds and images. Locating a woodpecker in a tree, for example, is made easier when your brain combines the auditory (pecking) and visual (movement of the bird) streams and judges that they are synchronous. If they are, the brain decides that the two sensory inputs probably came from a single source.
While that research is important, Julia Mossbridge, lead author of the study and research associate in psychology at Northwestern, said it also is critical to expand investigations to highly complex stimuli like music and movies.
“These kinds of things are closer to what the brain actually has to manage to process in every moment of the day,” she said. “Further, it’s important to determine how and when sensory systems choose to combine stimuli across their boundaries.
“If someone’s brain is mis-wired, sensory information could combine when it’s not appropriate,” she said. “For example, when that person is listening to a teacher talk while looking out a window at kids playing, and the auditory and visual streams are integrated instead of separated, this could result in confusion and misunderstanding about which sensory inputs go with what experience.”
It was already known that the left auditory cortex is specialized to process sounds with precise, complex and rapid timing; this gift for auditory timing may be one reason that in most people, the left auditory cortex is used to process speech, for which timing is critical. The results of this study show that this specialization for timing applies not just to sounds, but to the timing of complex and dynamic sounds and images.
Previous research indicates that there are multi-sensory areas in the brain that link sounds and images when they change in similar ways, but much of this research is focused particularly on speech signals (e.g., lips moving as vowels and consonants are heard). Consequently, it hasn’t been clear what areas of the brain process more general auditory-visual synchrony or how this processing differs when sounds and images should not be combined.
“It appears that the brain is exploiting the left auditory cortex’s gift at processing auditory timing, and is using similar mechanisms to encode auditory-visual synchrony, but only in certain situations; seemingly only when combining the sounds and images is appropriate,” Mossbridge said.
Musicians have sharper minds and are able to pick up mistakes and fix them more quickly than the rest of us, according to new research.
The study, by researchers at the University of St Andrews, suggests that musical activity could protect against decline in mental abilities through age or illness.
The work, published in the journal Neuropsychologia, extends previous findings that mental abilities are positively related to musical skills. The researchers say that the latest findings demonstrate the potential for ‘far reaching benefits’ of musical activity on mental and physical well-being.
The study was led by St Andrews psychologist Dr Ines Jentzsch, who compared the cognitive ability of amateur musicians versus non-musicians in performing simple mental tasks.
The most striking difference she found lay in the musicians’ ability to recognise and correct mistakes. Not only that, but they responded faster than those with little or no musical training, with no loss in accuracy. This is perhaps not surprising since musicians learn to be constantly aware of their performance, but to not be overly affected by mistakes.
Dr Jentzsch, a Reader in the University’s School of Psychology and Neuroscience, commented, “Our study shows that even moderate levels of musical activity can benefit brain functioning.
“Our findings could have important implications as the processes involved are amongst the first to be affected by aging, as well as a number of mental illnesses such as depression. The research suggests that musical activity could be used as an effective intervention to slow, stop or even reverse age- or illness-related decline in mental functioning.”
The study compared groups of amateur musicians with varying amounts of instrumental practice to a non-musician control group, measuring each group’s behavioural and brain responses to simple mental tests.
The results showed that playing a musical instrument, even at moderate levels, improves the ability to monitor our behavior for errors and adjust subsequent responses more effectively when needed.
Dr Jentzsch, herself a keen pianist, continued, “Musical activity can not only immensely enrich our lives, but the associated benefits for our physical and mental functioning could be even more far-reaching than proposed in our and previous research.
“Music plays an important role in virtually all societies. Nevertheless, in times of economic hardship, funds for music education are often amongst the first to be cut.
“We strongly encourage political decision makers to reconsider funding cuts for arts education and to increase public spending for music tuition.
“In addition, adults who have never played an instrument or felt too old to learn should be encouraged to take up music - it’s never too late.”
Stanford scientists build a ‘brain stethoscope’ to turn seizures into music
When Chris Chafe and Josef Parvizi began transforming recordings of brain activity into music, they did so with artistic aspirations. The professors soon realized, though, that the work could lead to a powerful biofeedback tool for identifying brain patterns associated with seizures.
Josef Parvizi was enjoying a performance by the Kronos Quartet when the idea struck. The musical troupe was midway through a piece in which the melodies were based on radio signals from outer space, and Parvizi, a neurologist at Stanford Medical Center, began wondering what the brain’s electrical activity might sound like set to music.
He didn’t have to look far for help. Chris Chafe, a professor of music research at Stanford, is one of the world’s foremost experts in “musification,” the process of converting natural signals into music. One of his previous works involved measuring the changing carbon dioxide levels near ripening tomatoes and converting those changing levels into electronic performances.
Parvizi, an associate professor, specializes in treating patients suffering from intractable seizures. To locate the source of a seizure, he places electrodes in patients’ brains to create electroencephalogram (EEG) recordings of both normal brain activity and a seizure state.
He shared a consenting patient’s EEG data with Chafe, who began setting the electrical spikes of the rapidly firing neurons to music. Chafe used a tone close to a human’s voice, in hopes of giving the listener an empathetic and intuitive understanding of the neural activity.
Upon a first listen, the duo realized they had done more than create an interesting piece of music. [Listen to the audio here]
"My initial interest was an artistic one at heart, but, surprisingly, we could instantly differentiate seizure activity from non-seizure states with just our ears," Chafe said. "It was like turning a radio dial from a static-filled station to a clear one."
If they could achieve the same result with real-time brain activity data, they might be able to develop a tool to allow caregivers for people with epilepsy to quickly listen to the patient’s brain waves to hear whether an undetected seizure might be occurring.
Parvizi and Chafe dubbed the device a “brain stethoscope.”
The sound of a seizure
The EEGs Parvizi conducts register brain activity from more than 100 electrodes placed inside the brain; Chafe selects certain electrode/neuron pairings and allows them to modulate notes sung by a female singer. As the electrode captures increased activity, it changes the pitch and inflection of the singer’s voice.
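A minimal sonification in the spirit of this description can be sketched in a few lines. The signal, pitch range, and mapping below are toy assumptions, not Chafe's actual method:

```python
import numpy as np

fs = 8_000                         # audio sample rate
n = 16_000                         # 2 seconds of audio

rng = np.random.default_rng(0)
# Stand-in for one electrode's activity: a smoothed random walk, scaled to [0, 1].
eeg = np.cumsum(rng.normal(size=n))
eeg = np.convolve(eeg, np.ones(400) / 400, mode="same")
eeg = (eeg - eeg.min()) / (eeg.max() - eeg.min())

# Map activity to pitch in a roughly vocal range, then integrate
# frequency to phase so the pitch glides without clicks.
freq = 200.0 + 400.0 * eeg                 # Hz
phase = 2 * np.pi * np.cumsum(freq) / fs
audio = 0.5 * np.sin(phase)                # the synthetic "singer"

print(audio.shape, freq.min(), freq.max())
```

Writing `audio` to a WAV file at 8 kHz would let you hear the pitch glide; increased "electrode activity" pushes the voice higher, as in the description above.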
Before the seizure begins – during the so-called pre-ictal stage – the peeps and pops from each “singer” almost synchronize and fall into a clear rhythm, as if they’re following a conductor, Chafe said.
Stanford scientists build a ‘brain stethoscope’ to turn seizures into music
When Chris Chafe and Josef Parvizi began transforming recordings of brain activity into music, they did so with artistic aspirations. The professors soon realized, though, that the work could lead to a powerful biofeedback tool for identifying brain patterns associated with seizures.
Josef Parvizi was enjoying a performance by the Kronos Quartet when the idea struck. The musical troupe was midway through a piece in which the melodies were based on radio signals from outer space, and Parvizi, a neurologist at Stanford Medical Center, began wondering what the brain’s electrical activity might sound like set to music.
He didn’t have to look far for help. Chris Chafe, a professor of music research at Stanford, is one of the world’s foremost experts in “musification,” the process of converting natural signals into music. One of his previous works involved measuring the changing carbon dioxide levels near ripening tomatoes and converting those changing levels into electronic performances.
Parvizi, an associate professor, specializes in treating patients suffering from intractable seizures. To locate the source of a seizure, he places electrodes in patients’ brains to create electroencephalogram (EEG) recordings of both normal brain activity and a seizure state.
He shared a consenting patient’s EEG data with Chafe, who began setting the electrical spikes of the rapidly firing neurons to music. Chafe used a tone close to a human’s voice, in hopes of giving the listener an empathetic and intuitive understanding of the neural activity.
Upon a first listen, the duo realized they had done more than create an interesting piece of music. [Listen to the audio here]
"My initial interest was an artistic one at heart, but, surprisingly, we could instantly differentiate seizure activity from non-seizure states with just our ears," Chafe said. "It was like turning a radio dial from a static-filled station to a clear one."
If they could achieve the same result with real-time brain activity data, they might be able to build a tool that lets caregivers of people with epilepsy listen to a patient's brain waves and hear whether an undetected seizure is occurring.
Parvizi and Chafe dubbed the device a “brain stethoscope.”
The sound of a seizure
The EEGs Parvizi conducts register brain activity from more than 100 electrodes placed inside the brain; Chafe selects certain electrode/neuron pairings and allows them to modulate notes sung by a female singer. As the electrode captures increased activity, it changes the pitch and inflection of the singer’s voice.
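The article doesn't publish Chafe's actual mapping, but the idea it describes — larger electrode activity pushing a voice's pitch higher — can be sketched in a few lines. Everything below is illustrative: the function names, the 24-semitone range, and the synthetic "EEG" trace are assumptions, not details from the project.

```python
def eeg_to_pitches(samples, base_midi=60, semitone_span=24):
    """Map EEG amplitudes to MIDI note numbers: larger excursions
    from the trace's minimum push the 'singer' higher in pitch
    (a crude stand-in for the voice modulation described above)."""
    lo, hi = min(samples), max(samples)
    span = (hi - lo) or 1.0  # avoid division by zero on a flat trace
    return [round(base_midi + semitone_span * (s - lo) / span) for s in samples]

def midi_to_hz(note):
    """Convert a MIDI note number to frequency in Hz (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

# Synthetic trace: quiet baseline, then a burst of large spikes.
trace = [0.1, 0.2, 0.1, 0.15, 2.5, 3.0, 2.8, 0.2]
pitches = eeg_to_pitches(trace)          # quiet samples near middle C, spikes two octaves up
freqs = [midi_to_hz(p) for p in pitches]  # frequencies a synthesizer could render
```

In this toy version the quiet baseline hovers near middle C and the spike burst jumps roughly two octaves, which is why seizure activity would be audible "with just our ears," as Chafe puts it.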
Before the seizure begins – during the so-called pre-ictal stage – the peeps and pops from each “singer” almost synchronize and fall into a clear rhythm, as if they’re following a conductor, Chafe said.
In the moments leading up to the seizure event, though, each of the singers begins to improvise. The notes become progressively louder and more scattered, as the full seizure event occurs (the ictal state). The way Chafe has orchestrated his singers, one can hear the electrical storm originate on one side of the brain and eventually cross over into the other hemisphere, creating a sort of sing-off between the two sides of the brain.
After about 30 seconds of full-on chaos, the singers begin to calm, trailing off into their post-ictal rhythm. Occasionally, one or two will pipe up erratically, but on the whole, the choir sounds extremely fatigued.
It’s the perfect representation of the three phases of a seizure event, Parvizi said.
Part art exhibit, part experiment
Caring for a person with seizures can be very difficult, as not all seizure activity produces behavioral cues. It's often impossible to know whether a person with epilepsy is acting confused because they are having a seizure, or whether they are experiencing the confusion that marks the post-ictal phase.
To that end, Parvizi and Chafe hope to apply their work to develop a device that listens for the telltale brain patterns of an ongoing seizure or a post-ictal fatigued brain state.
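The article doesn't say how such a device would recognize seizure patterns. One classic, widely used EEG feature for this purpose is line length, the summed absolute sample-to-sample change in a window, which inflates during the large, fast oscillations of a seizure. The sketch below uses that feature purely as an illustration; the window size and threshold are arbitrary, not clinical values, and the real device may work entirely differently.

```python
def line_length(window):
    """Line-length feature: sum of absolute differences between
    consecutive samples. Large, fast oscillations inflate it."""
    return sum(abs(b - a) for a, b in zip(window, window[1:]))

def flag_seizure(signal, window=4, threshold=5.0):
    """Slide a window over the signal and flag each window whose
    line length exceeds the threshold (threshold is illustrative)."""
    return [line_length(signal[i:i + window]) > threshold
            for i in range(len(signal) - window + 1)]

# Synthetic signal: quiet baseline, then a burst of large oscillations.
signal = [0.0, 0.1, 0.0, 0.1, 3.0, -3.0, 3.0, -3.0, 0.1, 0.0]
flags = flag_seizure(signal)  # quiet windows pass, the burst is flagged
```

A flagged window is exactly the kind of event a brain stethoscope could render as sound, so that "someone who hasn't received training in interpreting visual EEGs" hears it immediately.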
"Someone – perhaps a mother caring for a child – who hasn’t received training in interpreting visual EEGs can hear the seizure rhythms and easily appreciate that there is a pathological brain phenomenon taking place," Parvizi said.
The device can also offer biofeedback to non-epileptic patients who want to hear the music their own brain waves create.
The effort to build this device is funded by Stanford’s Bio-X Interdisciplinary Initiatives Program (Bio-X IIP), which provides money for interdisciplinary projects that have potential to improve human health in innovative ways. Bio-X seed grants have funded 141 research collaborations connecting hundreds of faculty since 2000. The proof-of-concept projects have produced hundreds of publications, dozens of patents, and more than a tenfold return on research funds to Stanford.
From a clinical perspective, the work is still very experimental.
"We’ve really just stuck our finger in there," Chafe said. "We know that the music is fascinating and that we can hear important dynamics, but there are still wonderful revelations to be made."
Next year, Chafe and Parvizi plan to unveil a version of the system at Stanford’s Cantor Arts Center. Visitors will don a headset that will transmit an EEG of their brain activity to their handheld device, which will convert it into music in real time.
"This is what I like about Stanford," Parvizi said. "It nurtures collaboration between fields that are seemingly light-years apart – we’re neurology and music professors! – and our work together will hopefully make a positive impact on the world we live in."