Neuroscience

Articles and news from the latest research reports.

Posts tagged music

35 notes

Neuroscientists Launch 5 Year Study of Music Education and Child Brain Development

Researchers at USC Brain and Creativity Institute will explore the effects of intense music training on cognitive development in LA Phil’s YOLA at HOLA program.
The Los Angeles Philharmonic Association, the USC Brain and Creativity Institute and Heart of Los Angeles (HOLA) are delighted to announce a longitudinal research collaboration to investigate the emotional, social and cognitive effects of musical training on childhood brain development.
The five-year research project, Effects of Early Childhood Musical Training on Brain and Cognitive Development, will offer USC researchers an important opportunity to provide new insights and add rigorous data to an emerging discussion about the role of early music engagement in learning and brain function.
Through a collaboration with the Youth Orchestra Los Angeles at Heart of Los Angeles (YOLA at HOLA) program, a partnership between the LA Phil and HOLA which provides free instruments and musical training to children from the Rampart District of Los Angeles, researchers with the USC Brain and Creativity Institute — led by acclaimed neuroscientists Hanna Damasio and Antonio Damasio — will track how children respond to music from the very onset of their exposure to systematic, high-intensity music education.

Filed under brain brain development children music neuroscience psychology research science

97 notes


Struggling to Reconcile Conflicting Beliefs? Listen to Some Mozart
Countless claims have been made regarding the music of Mozart. Studies have suggested it can relieve depression, decrease pain, and even spark an increase in certain types of intelligence. One recent paper found it even increased heart transplant survival in mice.
Two researchers have identified another benefit. They provide preliminary evidence that listening to Mozart can help us cope with cognitive dissonance—that intense feeling of discomfort that arises when we realize two of our core beliefs are at odds.
The ability to recognize and accept the unpleasant reality that our convictions sometimes conflict is a key sign of emotional maturity. Without it, our instinct is to devalue, or refuse to believe, the information that makes us uncomfortable.
One example: If climate change requires collective action, and your instinct is to prize individual liberty, you can quell any cognitive dissonance by simply refusing to believe global warming is real.

Read more

Filed under Mozart brain cognitive dissonance music neuroscience psychology science

1,060 notes

15 Studied Effects of Classical Music on Your Brain
Classical music, whether you love it or hate it, has been a powerful cultural force for centuries. While it no longer dominates the music scene, the argument for continued appreciation of the genre goes far beyond pure aural aesthetics. Classical music has been lauded for its ability to do everything from improve intelligence to reduce stress, and despite some exaggeration of its benefits, science shows us that it actually does have a marked effect on the brain in a number of positive ways.
With September being Classical Music Month, there’s no better time to learn a bit more about some of the many ways classical music affects the brain. Over the past few decades, there have been numerous studies on the brain’s reaction to classical music, and we’ve shared the most relevant, interesting, and surprising here, some of which may motivate you to become a classical aficionado yourself.

Filed under brain music classical music neuroscience psychology science

219 notes

Concordia student collaborates with Australian neuroscientist to create music based on raw emotions
What does anger sound like? What music does sorrow imply? Human emotion is being given a new soundtrack thanks to an exciting new collaboration between art and neuroscience.
Concordia University researcher Erin Gee is taking feelings to a new level by tapping directly into the human brain, delivering music powered purely by the human body and its emotions. Using data collected from physiological displays of emotion, Gee is creating a software and hardware system that incorporates a set of experimental musical instruments that will perform a symphony of sentiments.
This research could have significant therapeutic benefits for those who have difficulty expressing emotion. Individuals with autism disorders, for example, often struggle to understand the emotions of others. Gee’s robotic technology could be used to teach them how to identify feelings by externalising and exaggerating them into such forms as music.

Filed under brain emotion music technology neuroscience psychology robotics science

57 notes

The BCMI-MIdAS (Brain-Computer Music Interface for Monitoring and Inducing Affective States) project
The central purpose of the project is to develop technology for building innovative intelligent systems that can monitor our affective state, and induce specific affective states through music, automatically and adaptively. This is a highly interdisciplinary project, which will address several technical challenges at the interface between science, technology and performing arts/music (incorporating computer-generated music and machine learning).
Research questions
  • How can music change affective states and what are the specific musical traits (i.e., the parameters of a piece of music) that elicit such states?
  • How can we control such traits in a piece of music in order to induce specific affective states in a participant?
  • How can we effectively detect information about affective states induced by music in the EEG signal, going beyond EEG asymmetry and characterising information contained in synchronisation patterns?
  • How can we use the EEG to monitor the affective state induced by music on-line (i.e., in “real-time”)?
  • How can we produce a generative music system capable of generating music embodying musical traits aimed at inducing specific affective states, observable in the EEG of the participant?
  • How can we build an intelligent adaptive system for monitoring and inducing affective states through music on-line?
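
The loop these questions describe (estimate an affective state from the EEG, then steer the music generator toward a target state) can be sketched roughly as follows. This is a hypothetical illustration: the valence index is the classic frontal alpha asymmetry measure the project says it wants to go beyond, and the mapping from valence to tempo and mode is invented, not taken from BCMI-MIdAS.

```python
import numpy as np

def frontal_alpha_asymmetry(left_alpha_power, right_alpha_power):
    """A coarse, widely used EEG valence index: log alpha power at a
    right frontal electrode (e.g. F4) minus that at a left one (F3)."""
    return np.log(right_alpha_power) - np.log(left_alpha_power)

def music_parameters(valence):
    """Map an estimated valence score onto two simple musical traits.
    The mapping is illustrative, not the project's actual design."""
    tempo_bpm = 90 + 40 * np.tanh(valence)       # higher valence -> faster
    mode = "major" if valence >= 0 else "minor"  # higher valence -> major
    return {"tempo_bpm": round(float(tempo_bpm)), "mode": mode}

# One pass of the monitor-and-induce loop, with made-up alpha powers
valence = frontal_alpha_asymmetry(left_alpha_power=4.0, right_alpha_power=6.5)
print(music_parameters(valence))
```

In a real closed-loop system this would run continuously: the generator renders music with the current parameters, the listener's EEG is re-analysed, and the parameters are updated on-line.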

(Source: cmr.soc.plymouth.ac.uk)

Filed under BCMI EEG brain brain activity mood music technology neuroscience science

111 notes

Theory: Music underlies language acquisition

Contrary to the prevailing theories that music and language are cognitively separate or that music is a byproduct of language, theorists at Rice University’s Shepherd School of Music and the University of Maryland, College Park (UMCP) advocate that music underlies the ability to acquire language.

“Spoken language is a special type of music,” said Anthony Brandt, co-author of a theory paper published online this month in the journal Frontiers in Psychology. “Language is typically viewed as fundamental to human intelligence, and music is often treated as being dependent on or derived from language. But from a developmental perspective, we argue that music comes first and language arises from music.”

Brandt, associate professor of composition and theory at the Shepherd School, co-authored the paper with Shepherd School graduate student Molly Gebrian and L. Robert Slevc, UMCP assistant professor of psychology and director of the Language and Music Cognition Lab.

“Infants listen first to sounds of language and only later to its meaning,” Brandt said. He noted that newborns’ extensive abilities in different aspects of speech perception depend on the discrimination of the sounds of language – “the most musical aspects of speech.”

The paper cites various studies that show what the newborn brain is capable of, such as the ability to distinguish the phonemes, or basic distinctive units of speech sound, and such attributes as pitch, rhythm and timbre.

The authors define music as “creative play with sound.” They said the term “music” implies an attention to the acoustic features of sound irrespective of any referential function. As adults, people focus primarily on the meaning of speech. But babies begin by hearing language as “an intentional and often repetitive vocal performance,” Brandt said. “They listen to it not only for its emotional content but also for its rhythmic and phonemic patterns and consistencies. The meaning of words comes later.”

Brandt and his co-authors challenge the prevailing view that music cognition matures more slowly than language cognition and is more difficult. “We show that music and language develop along similar timelines,” he said.

Infants initially don’t distinguish well between their native language and all the languages of the world, Brandt said. Throughout the first year of life, they gradually home in on their native language. Similarly, infants initially don’t distinguish well between their native musical traditions and those of other cultures; they start to home in on their own musical culture at the same time that they home in on their native language, he said.

The paper explores many connections between listening to speech and music. For example, recognizing the sound of different consonants requires rapid processing in the temporal lobe of the brain. Similarly, recognizing the timbre of different instruments requires temporal processing at the same speed — a feature of musical hearing that has often been overlooked, Brandt said.

“You can’t distinguish between a piano and a trumpet if you can’t process what you’re hearing at the same speed that you listen for the difference between ‘ba’ and ‘da,’” he said. “In this and many other ways, listening to music and speech overlap.” The authors argue that from a musical perspective, speech is a concert of phonemes and syllables.

“While music and language may be cognitively and neurally distinct in adults, we suggest that language is simply a subset of music from a child’s view,” Brandt said. “We conclude that music merits a central place in our understanding of human development.”

Brandt said more research on this topic might lead to a better understanding of why music therapy is helpful for people with reading and speech disorders. People with dyslexia often have problems with the performance of musical rhythm. “A lot of people with language deficits also have musical deficits,” Brandt said.

More research could also shed light on rehabilitation for people who have suffered a stroke. “Music helps them reacquire language, because that may be how they acquired language in the first place,” Brandt said.

(Source: news.rice.edu)

Filed under brain music language acquisition language neuroscience psychology science

158 notes

Turn your dreams into music
Computer scientists in Finland have developed a method that automatically composes music out of sleep measurements. The composition service works live on the Web at sleepmusicalization.net.
Developed under Hannu Toivonen, Professor of Computer Science at the University of Helsinki, Finland, the software automatically composes synthetic music using data related to a person’s own sleep as input. The composition program is the work of Aurora Tulilaulu, a student of Professor Toivonen.
“The software composes a unique piece based on the stages of sleep, movement, heart rate and breathing. It compresses a night’s sleep into a couple of minutes,” she explains.
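
A minimal sketch of that idea: compress a night of 30-second sleep epochs into a short sequence of note events, one per epoch. The stage-to-pitch table, the heart-rate-to-loudness rule and all names here are invented for illustration; the actual mappings used by the software are not described in the article.

```python
# MIDI note numbers per sleep stage: deeper sleep -> lower pitch (invented mapping)
STAGE_PITCH = {"wake": 72, "rem": 67, "light": 64, "deep": 60}

def sonify(epochs, seconds_per_epoch=0.25):
    """epochs: list of (stage, heart_rate_bpm) pairs, one per 30-second
    window of the night. Returns (midi_pitch, duration_s, velocity) note
    events, so a whole night collapses to len(epochs) * 0.25 seconds."""
    notes = []
    for stage, heart_rate in epochs:
        pitch = STAGE_PITCH[stage]
        velocity = max(1, min(127, int(heart_rate)))  # faster pulse -> louder
        notes.append((pitch, seconds_per_epoch, velocity))
    return notes

night = [("wake", 70), ("light", 62), ("deep", 55), ("rem", 68)]
print(sonify(night))
```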

Filed under brain sleep music neuroscience psychology sleep musicalization science

44 notes

Tuning a piano also tunes the brain, say researchers who have seen structural changes within the brains of professional piano tuners.
Researchers at University College London and Newcastle University found listening to two notes played simultaneously makes the brain adapt. Brain scans revealed highly specific changes in the hippocampus, which governs memory and navigation. These correlated with the number of years tuners had been doing this job.
The Wellcome Trust researchers used magnetic resonance imaging to compare the brains of 19 professional piano tuners - who play two notes simultaneously to make them pitch-perfect - and 19 other people. What they saw was highly specific changes in both the grey matter - the nerve cells where information processing takes place - and the white matter - the nerve connections - within the brains of the piano tuners.
Investigator Sundeep Teki said: “We already know that musical training can correlate with structural changes, but our group of professionals offered a rare opportunity to examine the ability of the brain to adapt over time to a very specialised form of listening.”
Other researchers have noted similar hippocampal changes in taxi drivers as they build up the detailed information needed to find their way around London’s labyrinth of streets. Prof Tim Griffiths, who led the latest study, published in the Journal of Neuroscience, said: “There has been little work on the role of the hippocampus in auditory analysis. Our study is consistent with a form of navigation in pitch space as opposed to the more accepted role in spatial navigation.”

Filed under brain hippocampus music neuroscience psychology science auditory cortex
