Neuroscience

Articles and news from the latest research reports.

Posts tagged neural activity

Yes, You Can? A Speaker’s Potency to Act upon His Words Orchestrates Early Neural Responses to Message-Level Meaning 
Evidence is accruing that, in comprehending language, the human brain rapidly integrates a wealth of information sources, including the reader or hearer's knowledge about the world and even his/her current mood. However, little is known to date about how language processing in the brain is affected by the hearer's knowledge about the speaker. Here, we investigated the impact of social attributions to the speaker by measuring event-related brain potentials while participants watched videos of three speakers uttering true or false statements pertaining to politics or general knowledge: a top political decision maker (the German Federal Minister of Finance at the time of the experiment), a well-known media personality, and an unidentifiable control speaker. False versus true statements engendered an N400–late positivity response, with the N400 (150–450 ms) constituting the earliest observable response to message-level meaning. Crucially, however, the N400 was modulated by the combination of speaker and message: for false versus true political statements, an N400 effect was observable only for the politician, not for either of the other two speakers; for false versus true general knowledge statements, an N400 was engendered by all three speakers. We interpret this result as demonstrating that the neurophysiological response to message-level meaning is immediately influenced by the social status of the speaker and by whether he/she has the power to bring about the state of affairs described.

Filed under neural activity ERPs N400 effect language language comprehension psychology neuroscience science

Ultrasensitive Calcium Sensors Shine New Light on Neuron Activity
A new protein engineered by scientists at the Janelia Farm Research Campus fluoresces brightly each time it senses calcium, giving the scientists a way to visualize neuronal activity. The new protein is the most sensitive calcium sensor ever developed and the first to allow the detection of every neural impulse.
Every time you say a word, take a step, or read a sentence, a collection of neurons sends a speedy relay of messages throughout your brain to process the information. Now, researchers have a new way of watching those messages in action, by watching each cell in the chain light up when it fires.
When a neuron receives a signal from one of its neighbors, the impulse sets off a sudden series of electrochemical events geared toward passing the message along. Among the first events: calcium ions rush into the neuron when a set of channels opens. Scientists at the Howard Hughes Medical Institute’s Janelia Farm Research Campus have engineered a new protein that brightly fluoresces each time it senses these calcium waves, giving the scientists a way to visualize the activity of every neuron throughout the brain. The new protein is the most sensitive calcium sensor ever developed and the first to allow the detection of every neural impulse, rather than just a portion. The results are reported in the July 18, 2013 issue of the journal Nature.
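In practice, activity in fluorescence recordings like these is usually read out as the relative change in brightness over a resting baseline, commonly written ΔF/F₀. The sketch below shows that standard computation on a made-up trace; the numbers and the simple mean-based baseline are illustrative only, not the paper's analysis pipeline (real pipelines typically use more robust baselines, such as a rolling percentile).

```python
def delta_f_over_f(trace, baseline_frames=10):
    """Relative fluorescence change dF/F0 for one cell's fluorescence trace.

    F0 is estimated as the mean of the first `baseline_frames` samples,
    a deliberately simple stand-in for a robust baseline estimate.
    """
    f0 = sum(trace[:baseline_frames]) / baseline_frames
    return [(f - f0) / f0 for f in trace]

# A resting baseline of 100 a.u. followed by a decaying calcium transient:
trace = [100.0] * 10 + [150.0, 130.0, 110.0, 100.0]
print(delta_f_over_f(trace))  # 0.0 during baseline, then 0.5, 0.3, 0.1, 0.0
```

A more sensitive indicator such as the one described here raises the transient's peak above the recording noise, which is what makes single-impulse detection possible.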
“You can think of the brain as an orchestra with each different neuron type playing a different part,” says Janelia lab head Karel Svoboda, a neurobiologist and member of the team that developed the new sensor. “Previous methods only let us hear a tiny fraction of the melodies. Now we can hear more of the symphony at once. Improving the molecule and imaging methods in the future could allow us to hear the entire symphony.”
Detecting which neurons in the brain are firing, and when, is a key step in learning which areas of the brain are linked to particular activities or disorders, how memories are formed, and how behaviors are learned, as well as in answering basic questions about how the brain organizes its neurons and stores information in that organization.
Two decades ago, scientists who wanted to use calcium to pinpoint neural activity relied on synthetic calcium-indicator dyes, first developed by HHMI Investigator Roger Tsien. The dyes lit up when neurons fired, but were difficult to inject and highly toxic—an animal’s brain could only be imaged once using the dyes.
In 1997, researchers led by Tsien developed the first genetically encoded calcium indicator (GECI). GECIs were made by combining a gene for a calcium sensor with the gene for a fluorescent protein in a way that made the calcium sensor fluoresce when it bound calcium. The GECI genes could be integrated into the genomes of model organisms like mice or flies so that no dye injection was necessary. The animals’ own brain cells would produce the proteins throughout their lives, and brain activity could be studied again and again in any one animal, allowing long-term studies of processes like learning and development. But GECIs weren’t as accurate as the cumbersome dyes had been, and improving them was a slow process.
“New versions were developed in a very piecemeal way,” says Svoboda, explaining that after chemists developed the sensors, it might be years before biologists had an opportunity to test them in the brains of living animals. “It was a very slow process of getting feedback.”
Svoboda, along with Janelia lab heads Loren Looger, Vivek Jayaraman and Rex Kerr, formed the Genetically Encoded Neural Indicator and Effector (GENIE) project at Janelia to speed up the innovation. The GENIE project, led by Douglas Kim, an HHMI program scientist, is one of several collaborative team projects ongoing at Janelia. The project developed a higher-throughput and more accurate way of testing new variants of the best-working GECI, called GCaMP. Steps included simple tests that could easily be performed on many proteins at once, like measuring how much fluorescence the protein gave off when exposed to calcium in a cuvette, as well as early tests of function in different types of neurons and final experiments in genetically engineered mice, flies, and zebrafish.
“When people developed previous GECIs, they would test somewhere between ten and twenty variants very carefully. We were able to screen a thousand in a highly quantitative neuronal assay,” Looger says. “And when you can look at that many constructs, you’re going to make better and more interesting observations on what makes the ideal sensor.”
The team made successive rounds of tweaks to the structure of GCaMP so that it accurately sensed calcium, shone brightly in response, and worked in model organisms. They then settled on a version of the sensor that outperformed previous GECIs in every respect. The new sensor, dubbed GCaMP6, produced signals seven times stronger than past versions. Surprisingly, its sensitivity even surpassed that of the synthetic dyes.
“People had assumed that the synthetic dyes were letting us see every event in neurons,” says Looger. “But we’ve now shown that not only are these dyes hard to load and quite toxic, but they weren’t even recording every event.”
GCaMP6 will be a boon to researchers at Janelia, and around the world, who want to get a full picture of the activity of every neuron in the brain. Meanwhile, the team plans to continue improving it, developing entirely new versions for specific uses. For example, they hope to make a GECI that gives off red fluorescence rather than green, because red is easier to see in deeper tissues.
“One of the stated goals of Janelia Farm is to develop an atlas of every neuron in the Drosophila brain,” says Looger. “The most practical way I can think of to assign functions to such an atlas is with calcium sensors. With this new sensor, I think people will feel much more comfortable that they’re really getting all the information they can.”

Filed under calcium calcium ions brain mapping neurotransmission neural activity neurons neuroscience science

Researchers Identify Emotions Based on Brain Activity
For the first time, scientists at Carnegie Mellon University have identified which emotion a person is experiencing based on brain activity.
The study, published in the June 19 issue of PLOS ONE, combines functional magnetic resonance imaging (fMRI) and machine learning to measure brain signals and accurately read emotions in individuals. Led by researchers in CMU’s Dietrich College of Humanities and Social Sciences, the findings illustrate how the brain categorizes feelings, giving researchers the first reliable process for analyzing emotions. Until now, research on emotions had long been stymied by the lack of reliable methods to evaluate them, mostly because people are often reluctant to honestly report their feelings. Further complicating matters is that many emotional responses may not be consciously experienced.
Identifying emotions based on neural activity builds on previous discoveries by CMU’s Marcel Just and Tom M. Mitchell, which used similar techniques to create a computational model that identifies individuals’ thoughts of concrete objects, often dubbed “mind reading.”
“This research introduces a new method with potential to identify emotions without relying on people’s ability to self-report,” said Karim Kassam, assistant professor of social and decision sciences and lead author of the study. “It could be used to assess an individual’s emotional response to almost any kind of stimulus, for example, a flag, a brand name or a political candidate.”
One challenge for the research team was to find a way to repeatedly and reliably evoke different emotional states from the participants. Traditional approaches, such as showing subjects emotion-inducing film clips, would likely have been unsuccessful because the impact of film clips diminishes with repeated display. The researchers solved the problem by recruiting actors from CMU’s School of Drama.
“Our big breakthrough was my colleague Karim Kassam’s idea of testing actors, who are experienced at cycling through emotional states. We were fortunate, in that respect, that CMU has a superb drama school,” said George Loewenstein, the Herbert A. Simon University Professor of Economics and Psychology.
For the study, 10 actors were scanned at CMU’s Scientific Imaging & Brain Research Center while viewing the words of nine emotions: anger, disgust, envy, fear, happiness, lust, pride, sadness and shame. While inside the fMRI scanner, the actors were instructed to enter each of these emotional states multiple times, in random order.
Another challenge was to ensure that the technique was measuring emotions per se, and not the act of trying to induce an emotion in oneself. To meet this challenge, a second phase of the study presented participants with neutral and disgusting photos that they had not seen before. The computer model, constructed using statistical analysis of the fMRI activation patterns gathered for 18 emotional words, had learned the emotion patterns from self-induced emotions. It was nonetheless able to correctly identify the emotional content of the photos being viewed using the brain activity of the viewers.
To identify emotions within the brain, the researchers first used the participants’ neural activation patterns in early scans to identify the emotions experienced by the same participants in later scans. The computer model achieved a rank accuracy of 0.84. Rank accuracy refers to the percentile rank of the correct emotion in an ordered list of the computer model guesses; random guessing would result in a rank accuracy of 0.50.
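Rank accuracy, as defined above, can be computed directly from a model's ordered list of guesses. The sketch below is a minimal illustration of that metric; the particular guess ordering is invented for the example and is not from the study.

```python
def rank_accuracy(ranked_guesses, correct):
    """Percentile rank of the correct label in a model's ordered guess list.

    With n candidate labels, a top-ranked correct answer scores 1.0,
    a bottom-ranked one scores 0.0, and random ordering averages 0.50.
    """
    n = len(ranked_guesses)
    position = ranked_guesses.index(correct)  # 0 = the model's best guess
    return (n - 1 - position) / (n - 1)

emotions = ["anger", "disgust", "envy", "fear", "happiness",
            "lust", "pride", "sadness", "shame"]

# Hypothetical ordering in which the model ranks "disgust" second of nine:
guesses = ["fear", "disgust", "anger", "sadness", "envy",
           "shame", "lust", "pride", "happiness"]
print(rank_accuracy(guesses, "disgust"))  # 0.875
```

Averaging this score over many trials gives the 0.84 and 0.91 figures the article reports.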
Next, the team used the machine learning analysis of the self-induced emotions to guess which emotion the subjects were experiencing when they were exposed to the disgusting photographs. The computer model achieved a rank accuracy of 0.91. With nine emotions to choose from, the model listed disgust as the most likely emotion 60 percent of the time and as one of its top two guesses 80 percent of the time.
Finally, they applied machine learning analysis of neural activation patterns from all but one of the participants to predict the emotions experienced by the hold-out participant. This answers an important question: If we took a new individual, put them in the scanner and exposed them to an emotional stimulus, how accurately could we identify their emotional reaction? Here, the model achieved a rank accuracy of 0.71, once again well above the chance guessing level of 0.50.
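The hold-out scheme described above is standard leave-one-subject-out cross-validation. The loop below is a schematic sketch of that procedure; the `train` and `evaluate` callables in the demo are trivial stand-ins, not the study's actual classifier or fMRI features.

```python
def leave_one_subject_out(data_by_subject, train, evaluate):
    """Score each subject with a model fit on all the other subjects."""
    scores = {}
    for held_out in data_by_subject:
        # Fit on everyone except the held-out subject...
        training = {s: d for s, d in data_by_subject.items() if s != held_out}
        model = train(training)
        # ...then test only on the subject the model has never seen.
        scores[held_out] = evaluate(model, data_by_subject[held_out])
    return scores

# Toy demo with placeholder data and functions:
data = {"subj1": 1, "subj2": 2, "subj3": 3}
scores = leave_one_subject_out(
    data,
    train=lambda training: sum(training.values()),  # "model" = sum of others
    evaluate=lambda model, held: model + held,      # trivial score
)
print(scores)  # {'subj1': 6, 'subj2': 6, 'subj3': 6}
```

Because the held-out subject contributes nothing to training, the resulting 0.71 rank accuracy estimates how well the model would generalize to a brand-new individual.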
“Despite manifest differences between people’s psychology, different people tend to neurally encode emotions in remarkably similar ways,” noted Amanda Markey, a graduate student in the Department of Social and Decision Sciences.
A surprising finding from the research was that almost equivalent accuracy levels could be achieved even when the computer model made use of activation patterns in only one of a number of different subsections of the human brain.
“This suggests that emotion signatures aren’t limited to specific brain regions, such as the amygdala, but produce characteristic patterns throughout a number of brain regions,” said Vladimir Cherkassky, senior research programmer in the Psychology Department.
The research team also found that while on average the model ranked the correct emotion highest among its guesses, it was best at identifying happiness and least accurate in identifying envy. It rarely confused positive and negative emotions, suggesting that these have distinct neural signatures. And, it was least likely to misidentify lust as any other emotion, suggesting that lust produces a pattern of neural activity that is distinct from all other emotional experiences.
Just, the D.O. Hebb University Professor of Psychology, director of the university’s Center for Cognitive Brain Imaging and a leading neuroscientist, explained, “We found that three main organizing factors underpinned the emotion neural signatures, namely the positive or negative valence of the emotion, its intensity — mild or strong, and its sociality — involvement or non-involvement of another person. This is how emotions are organized in the brain.”
In the future, the researchers plan to apply this new identification method to a number of challenging problems in emotion research, including identifying emotions that individuals are actively attempting to suppress and multiple emotions experienced simultaneously, such as the combination of joy and envy one might experience upon hearing about a friend’s good fortune.

Filed under brain activity emotions machine learning fMRI neural activity neuroscience psychology science

Why Music Makes Our Brain Sing
MUSIC is not tangible. You can’t eat it, drink it or mate with it. It doesn’t protect against the rain, wind or cold. It doesn’t vanquish predators or mend broken bones. And yet humans have always prized music — or well beyond prized, loved it.
In the modern age we spend great sums of money to attend concerts, download music files, play instruments and listen to our favorite artists whether we’re in a subway or salon. But even in Paleolithic times, people invested significant time and effort to create music, as the discovery of flutes carved from animal bones would suggest.
So why does this thingless “thing” — at its core, a mere sequence of sounds — hold such potentially enormous intrinsic value?
The quick and easy explanation is that music brings a unique pleasure to humans. Of course, that still leaves the question of why. But for that, neuroscience is starting to provide some answers.
More than a decade ago, our research team used brain imaging to show that music that people described as highly emotional engaged the reward system deep in their brains — activating subcortical nuclei known to be important in reward, motivation and emotion. Subsequently we found that listening to what might be called “peak emotional moments” in music — that moment when you feel a “chill” of pleasure to a musical passage — causes the release of the neurotransmitter dopamine, an essential signaling molecule in the brain.
When pleasurable music is heard, dopamine is released in the striatum — an ancient part of the brain found in other vertebrates as well — which is known to respond to naturally rewarding stimuli like food and sex and which is artificially targeted by drugs like cocaine and amphetamine.
But what may be most interesting here is when this neurotransmitter is released: not only when the music rises to a peak emotional moment, but also several seconds before, during what we might call the anticipation phase.
The idea that reward is partly related to anticipation (or the prediction of a desired outcome) has a long history in neuroscience. Making good predictions about the outcome of one’s actions would seem to be essential in the context of survival, after all. And dopamine neurons, both in humans and other animals, play a role in recording which of our predictions turn out to be correct.
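In computational neuroscience, this anticipation-based view of reward is commonly formalized as a reward-prediction error, as in temporal-difference (TD) learning. The sketch below is a standard textbook formulation offered as illustration, not a claim about the authors' own experiments; all numeric values are made up.

```python
def td_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    """One TD(0) update: nudge a predicted value toward reward + discounted
    next-state value. The error term `delta` is positive when the outcome is
    better than expected -- a common computational analogue of phasic
    dopamine signaling."""
    delta = reward + gamma * next_value - value  # reward-prediction error
    return value + alpha * delta, delta

# A cue currently valued at 0 is followed by an unexpected reward of 1:
v, delta = td_update(value=0.0, reward=1.0, next_value=0.0)
print(delta)  # 1.0: the reward is fully unexpected

# Once the cue perfectly predicts the reward, the error at reward time vanishes:
v2, delta2 = td_update(value=1.0, reward=1.0, next_value=0.0)
print(delta2)  # 0.0
```

On this account, the dopamine release observed during the anticipation phase tracks the predicted value building before the peak, rather than the peak itself.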
To dig deeper into how music engages the brain’s reward system, we designed a study to mimic online music purchasing. Our goal was to determine what goes on in the brain when someone hears a new piece of music and decides he likes it enough to buy it.
We used music-recommendation programs to customize the selections to our listeners’ preferences, which turned out to be indie and electronic music, matching Montreal’s hip music scene. And we found that neural activity within the striatum — the reward-related structure — was directly proportional to the amount of money people were willing to spend.
But more interesting still was the cross talk between this structure and the auditory cortex, which also increased for songs that were ultimately purchased compared with those that were not.
Why the auditory cortex? Some 50 years ago, Wilder Penfield, the famed neurosurgeon and the founder of the Montreal Neurological Institute, reported that when neurosurgical patients received electrical stimulation to the auditory cortex while they were awake, they would sometimes report hearing music. Dr. Penfield’s observations, along with those of many others, suggest that musical information is likely to be represented in these brain regions.
The auditory cortex is also active when we imagine a tune: think of the first four notes of Beethoven’s Fifth Symphony — your cortex is abuzz! This ability allows us not only to experience music even when it’s physically absent, but also to invent new compositions and to reimagine how a piece might sound with a different tempo or instrumentation.
We also know that these areas of the brain encode the abstract relationships between sounds — for instance, the particular sound pattern that makes a major chord major, regardless of the key or instrument. Other studies show distinctive neural responses from similar regions when there is an unexpected break in a repetitive pattern of sounds, or in a chord progression. This is akin to what happens if you hear someone play a wrong note — easily noticeable even in an unfamiliar piece of music.
These cortical circuits allow us to make predictions about coming events on the basis of past events. They are thought to accumulate musical information over our lifetime, creating templates of the statistical regularities that are present in the music of our culture and enabling us to understand the music we hear in relation to our stored mental representations of the music we’ve heard.
So each act of listening to music may be thought of as both recapitulating the past and predicting the future. When we listen to music, these brain networks actively create expectations based on our stored knowledge.
Composers and performers intuitively understand this: they manipulate these prediction mechanisms to give us what we want — or to surprise us, perhaps even with something better.
In the cross talk between our cortical systems, which analyze patterns and yield expectations, and our ancient reward and motivational systems, may lie the answer to the question: does a particular piece of music move us?
When that answer is yes, there is little — in those moments of listening, at least — that we value more.

Why Music Makes Our Brain Sing

MUSIC is not tangible. You can’t eat it, drink it or mate with it. It doesn’t protect against the rain, wind or cold. It doesn’t vanquish predators or mend broken bones. And yet humans have always prized music — or well beyond prized, loved it.

In the modern age we spend great sums of money to attend concerts, download music files, play instruments and listen to our favorite artists whether we’re in a subway or salon. But even in Paleolithic times, people invested significant time and effort to create music, as the discovery of flutes carved from animal bones would suggest.

So why does this thingless “thing” — at its core, a mere sequence of sounds — hold such potentially enormous intrinsic value?

The quick and easy explanation is that music brings a unique pleasure to humans. Of course, that still leaves the question of why. But for that, neuroscience is starting to provide some answers.

More than a decade ago, our research team used brain imaging to show that music that people described as highly emotional engaged the reward system deep in their brains — activating subcortical nuclei known to be important in reward, motivation and emotion. Subsequently we found that listening to what might be called “peak emotional moments” in music — that moment when you feel a “chill” of pleasure to a musical passage — causes the release of the neurotransmitter dopamine, an essential signaling molecule in the brain.

When pleasurable music is heard, dopamine is released in the striatum — an ancient part of the brain found in other vertebrates as well — which is known to respond to naturally rewarding stimuli like food and sex and which is artificially targeted by drugs like cocaine and amphetamine.

But what may be most interesting here is when this neurotransmitter is released: not only when the music rises to a peak emotional moment, but also several seconds before, during what we might call the anticipation phase.

The idea that reward is partly related to anticipation (or the prediction of a desired outcome) has a long history in neuroscience. Making good predictions about the outcome of one’s actions would seem to be essential in the context of survival, after all. And dopamine neurons, both in humans and other animals, play a role in recording which of our predictions turn out to be correct.
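The link between dopamine and anticipation described above is commonly formalized in neuroscience as temporal-difference (TD) learning, in which the "dopamine signal" is a reward prediction error. The sketch below is a generic illustration of that idea, not the authors' model; all states and numbers are invented for the example.

```python
# Temporal-difference (TD) learning sketch: dopamine firing is often modeled
# as the prediction error delta = reward + gamma * V(next) - V(current).
# States and values here are purely illustrative.

def td_update(V, s, s_next, reward, alpha=0.1, gamma=0.9):
    """Update the value estimate for state s and return the prediction error."""
    delta = reward + gamma * V[s_next] - V[s]  # the "dopamine-like" error signal
    V[s] = V[s] + alpha * delta
    return delta

# A cue (the "anticipation phase") reliably precedes a reward (the "peak moment").
V = {"cue": 0.0, "peak": 0.0, "end": 0.0}
for _ in range(200):
    td_update(V, "cue", "peak", reward=0.0)   # no reward yet at the cue
    td_update(V, "peak", "end", reward=1.0)   # reward arrives at the peak

# After learning, the cue itself carries value: the predictive signal has
# shifted earlier in time, to the anticipation phase.
print(round(V["cue"], 2), round(V["peak"], 2))
```

After repeated pairings, the value of the cue approaches the discounted value of the reward, which is how TD models explain dopamine release seconds before the peak moment itself.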

To dig deeper into how music engages the brain’s reward system, we designed a study to mimic online music purchasing. Our goal was to determine what goes on in the brain when someone hears a new piece of music and decides he likes it enough to buy it.

We used music-recommendation programs to customize the selections to our listeners’ preferences, which turned out to be indie and electronic music, matching Montreal’s hip music scene. And we found that neural activity within the striatum — the reward-related structure — was directly proportional to the amount of money people were willing to spend.

But more interesting still was the cross talk between this structure and the auditory cortex, which also increased for songs that were ultimately purchased compared with those that were not.

Why the auditory cortex? Some 50 years ago, Wilder Penfield, the famed neurosurgeon and the founder of the Montreal Neurological Institute, reported that when neurosurgical patients received electrical stimulation to the auditory cortex while they were awake, they would sometimes report hearing music. Dr. Penfield’s observations, along with those of many others, suggest that musical information is likely to be represented in these brain regions.

The auditory cortex is also active when we imagine a tune: think of the first four notes of Beethoven’s Fifth Symphony — your cortex is abuzz! This ability allows us not only to experience music even when it’s physically absent, but also to invent new compositions and to reimagine how a piece might sound with a different tempo or instrumentation.

We also know that these areas of the brain encode the abstract relationships between sounds — for instance, the particular sound pattern that makes a major chord major, regardless of the key or instrument. Other studies show distinctive neural responses from similar regions when there is an unexpected break in a repetitive pattern of sounds, or in a chord progression. This is akin to what happens if you hear someone play a wrong note — easily noticeable even in an unfamiliar piece of music.

These cortical circuits allow us to make predictions about coming events on the basis of past events. They are thought to accumulate musical information over our lifetime, creating templates of the statistical regularities that are present in the music of our culture and enabling us to understand the music we hear in relation to our stored mental representations of the music we’ve heard.
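One simple way to picture these "templates of statistical regularities" is a first-order model of note-to-note transition probabilities learned from previously heard melodies: familiar continuations get high probability, and a "wrong note" gets none. This toy model is illustrative only; the melodies and note names are made up.

```python
from collections import Counter, defaultdict

# Sketch of a "statistical template": a first-order model of note-to-note
# transitions, learned from previously heard melodies. Purely illustrative.

def learn_transitions(melodies):
    counts = defaultdict(Counter)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            counts[a][b] += 1
    # Normalize counts into transition probabilities.
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

def expectancy(model, a, b):
    """How strongly does the learned template predict note b after note a?"""
    return model.get(a, {}).get(b, 0.0)

# A "lifetime" of exposure to simple C-major tunes (note names as strings).
heard = [["C", "D", "E", "F", "G"],
         ["C", "E", "G", "E", "C"],
         ["G", "F", "E", "D", "C"]]
model = learn_transitions(heard)

print(expectancy(model, "C", "D"))   # a familiar continuation
print(expectancy(model, "C", "F#"))  # a "wrong note": never heard before
```

A never-heard transition scores zero expectancy, which is the toy-model analogue of the distinctive neural response to an unexpected note described above.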

So each act of listening to music may be thought of as both recapitulating the past and predicting the future. When we listen to music, these brain networks actively create expectations based on our stored knowledge.

Composers and performers intuitively understand this: they manipulate these prediction mechanisms to give us what we want — or to surprise us, perhaps even with something better.

In the cross talk between our cortical systems, which analyze patterns and yield expectations, and our ancient reward and motivational systems, may lie the answer to the question: does a particular piece of music move us?

When that answer is yes, there is little — in those moments of listening, at least — that we value more.

Filed under music dopamine emotion reward system neural activity auditory cortex psychology neuroscience science

162 notes

Brain uses internal ‘average voice’ prototype to identify who is talking
The human brain is able to identify individuals’ voices by comparing them against an internal ‘average voice’ prototype, according to neuroscientists.
A study carried out by researchers at the University of Glasgow and reported in the journal Current Biology demonstrates that voice identity is coded in the brain by reference to two internal voice prototypes – one male, one female.
Voices that have the greatest difference from the prototype are perceived as more distinctive and produce greater neural activity than voices deemed very similar.
The researchers in the Institute of Neuroscience & Psychology conducted the study by generating a voice prototype, morphing 32 same-gender voices together to produce a smooth, idealised voice with few irregularities.
They then generated different voices by altering the ‘distance-to-mean’ of the prototype voice – for example, changing the tone and pitch or morphing two or more voices together.
Using functional Magnetic Resonance Imaging (fMRI), the researchers were able to see increased neural activity the further from the prototype the voices were.
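Norm-based coding of this kind can be sketched numerically: represent each voice as a vector of acoustic features, take the mean as the "prototype", and predict neural response from each voice's distance-to-mean. The features below (pitch and two formant-like values) are invented for illustration and are not the study's actual stimulus parameters.

```python
import numpy as np

# Sketch of norm-based coding: each voice is a vector of acoustic features
# (the feature choices and values here are hypothetical).
rng = np.random.default_rng(0)
voices = rng.normal(loc=[120.0, 500.0, 1500.0],   # pitch, formant-like features
                    scale=[15.0, 40.0, 120.0],
                    size=(32, 3))                  # 32 same-gender voices

prototype = voices.mean(axis=0)                        # the "average voice"
distance = np.linalg.norm(voices - prototype, axis=1)  # distance-to-mean per voice

# The norm-based prediction: neural response scales with distance from the
# prototype, so the most distinctive voice evokes the strongest response.
predicted_response = distance / distance.max()
print(int(predicted_response.argmax()))  # index of the most distinctive voice
```

Under this scheme the brain need only store deviations from the prototype, which is exactly the economy Professor Belin describes below.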
Professor Pascal Belin said: “Like faces, voices can be used to identify a person, yet the neural basis of this ability remains poorly understood. Here we provide the first evidence of a norm-based coding mechanism the brain uses to identify a speaker.
“The research indicates this is a similar process for the identification of faces, where the brain also uses an average face to compare against other faces it encounters in order to establish identity.
“So, rather than having to remember each single voice it hears every day for a lifetime, the brain facilitates the task of identification by remembering only the differences from the prototype it stores.
“It leads to a range of interesting and important questions, such as whether the prototypes are innate, stored templates or whether they are subject to environmental and cultural influences. Could the prototype consist of an average of all voices experienced during one’s life?”
(Image: Shutterstock)

Filed under neural activity prototype voice voices brain auditory cortex fMRI neuroscience science

275 notes

Clouds in the Head: New Model of Brain’s Thought Processes
A new model of the brain’s thought processes explains the apparently chaotic activity patterns of individual neurons. They do not correspond to a simple stimulus/response linkage, but arise from the networking of different neural circuits. Scientists funded by the Swiss National Science Foundation (SNSF) propose that the field of brain research should expand its focus.

Many brain researchers cannot see the forest for the trees. When they use electrodes to record the activity patterns of individual neurons, the patterns often appear chaotic and difficult to interpret. “But when you zoom out from looking at individual cells, and observe a large number of neurons instead, their global activity is very informative,” says Mattia Rigotti, a scientist at Columbia University and New York University who is supported by the SNSF and the Janggen-Pöhn-Stiftung. Publishing in Nature together with colleagues from the United States, he has shown that these difficult-to-interpret patterns in particular are especially important for complex brain functions.
What goes on in the heads of monkeys
The researchers focussed their attention on the activity patterns of 237 neurons that had been recorded some years previously using electrodes implanted in the frontal lobes of two rhesus monkeys. At that time, the monkeys had been taught to recognise images of different objects on a screen. Around one third of the observed neurons demonstrated activity that Rigotti describes as “mixed selectivity.” A mixed selective neuron does not always respond to the same stimulus (the flowers or the sailing boat on the screen) in the same way. Rather, its response varies because it also takes account of the activity of other neurons: the cell adapts its response according to what else is going on in the monkey’s brain.
Chaotic patterns revealed in context
Just as individual computers are networked to create concentrated processing and storage capacity in cloud computing, links between neural circuits play a key role in the complex cognitive processes of the prefrontal cortex. The greater the proportion of mixed selectivity in the neurons’ activity patterns, in other words the denser the network in the brain, the better the monkeys were able to recall the images on the screen, as Rigotti’s analysis demonstrated. Given that the brain and cognitive capabilities of rhesus monkeys are similar to those of humans, mixed selective neurons should also be important in our own brains. For Rigotti, this is reason enough for brain research to no longer content itself with simple activity patterns alone, but also to consider the apparently chaotic patterns that reveal themselves only in context.
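Mixed selectivity is often illustrated with a toy contrast like the one below: a "purely selective" unit responds to a single task variable, while a "mixed selective" unit responds only to a specific combination of variables. The variables and responses here are invented for illustration, not taken from the recorded data.

```python
# Toy contrast between a "purely selective" unit (cares about one variable)
# and a "mixed selective" unit (cares about a nonlinear combination).
# Task variables: stimulus (flower=0 / boat=1) and context (0 / 1).

def pure_unit(stimulus, context):
    return float(stimulus)                  # responds to the stimulus alone

def mixed_unit(stimulus, context):
    return float(stimulus and not context)  # fires for one specific combination

conditions = [(s, c) for s in (0, 1) for c in (0, 1)]
pure = [pure_unit(s, c) for s, c in conditions]
mixed = [mixed_unit(s, c) for s, c in conditions]

# The pure unit cannot tell (boat, context 0) from (boat, context 1);
# adding the mixed unit lets a downstream readout separate every
# stimulus-context combination.
print(pure)   # [0.0, 0.0, 1.0, 1.0]
print(mixed)  # [0.0, 0.0, 1.0, 0.0]
```

The general point is that a population containing such combination-sensitive units gives downstream circuits far more separable patterns to read out, which is why denser mixing tracked better recall in the analysis above.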

Filed under neurons neural activity prefrontal cortex cognitive function primates neuroscience science

127 notes

Temporal Processing in the Olfactory System: Can We See a Smell?
Sensory processing circuits in the visual and olfactory systems receive input from complex, rapidly changing environments. Although patterns of light and plumes of odor create different distributions of activity in the retina and olfactory bulb, both structures use what appear, on the surface, to be similar temporal coding strategies to convey information to higher areas in the brain. We compare temporal coding in the early stages of the olfactory and visual systems, highlighting recent progress in understanding the role of time in olfactory coding during active sensing by behaving animals. We also examine studies that address the divergent circuit mechanisms that generate temporal codes in the two systems, and find that they provide physiological information directly related to functional questions raised by the neuroanatomical studies of Ramón y Cajal over a century ago. Considering such differences in neural activity across sensory systems suggests new approaches to understanding signal processing.

Filed under olfactory system neurons neural activity visual system retina odorants neuroscience science

74 notes

Fishing for memories

In our interaction with our environment we constantly refer to past experiences stored as memories to guide behavioral decisions. But how memories are formed, stored and then retrieved to assist decision-making remains a mystery. By observing whole-brain activity in live zebrafish, researchers from the RIKEN Brain Science Institute have visualized for the first time how information stored as long-term memory in the cerebral cortex is processed to guide behavioral choices.

The study, published today in the journal Neuron, was carried out by Dr. Tazu Aoki and Dr. Hitoshi Okamoto from the Laboratory for Developmental Gene Regulation, a pioneer in the study of how the brain controls behavior in zebrafish.

The mammalian brain is too large to observe the whole neural circuit in action. But using a technique called calcium imaging, Aoki et al. were able to visualize for the first time the activity of the whole zebrafish brain during memory retrieval.

Calcium imaging takes advantage of the fact that calcium ions enter neurons upon neural activation. By introducing a calcium sensitive fluorescent substance in the neural tissue, it becomes possible to trace the calcium influx in neurons and thus visualize neural activity.
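A standard quantity in calcium-imaging analyses is ΔF/F, the fluorescence change relative to a resting baseline, with transients above some threshold taken as neural activity. The sketch below applies that idea to a synthetic trace; the baseline, transient size, and 10% threshold are illustrative choices, not the study's parameters.

```python
import numpy as np

# Minimal sketch of dF/F: fluorescence change relative to a resting
# baseline, computed on a synthetic single-neuron trace.
rng = np.random.default_rng(1)
baseline = 100.0
trace = baseline + rng.normal(0.0, 1.0, 200)  # resting fluorescence + noise
trace[80:100] += 30.0                         # a calcium transient (neural activity)

f0 = np.median(trace)                         # robust baseline estimate
dff = (trace - f0) / f0                       # dF/F, in fractional units

event_frames = np.where(dff > 0.1)[0]         # frames >10% above baseline
print(event_frames.min(), event_frames.max())
```

Using the median as the baseline estimate keeps brief transients from inflating F0, a common practical choice when activity is sparse.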

The researchers trained transgenic zebrafish expressing a calcium sensitive protein to avoid a mild electric shock using a red LED as cue. By observing the zebrafish brain activity upon presentation of the red LED they were able to visualize the process of remembering the learned avoidance behavior.

Upon presentation of the red LED 24 hours after the training session, they observe spot-like neural activity in the dorsal part of the fish telencephalon, which corresponds to the human cortex. No activity is observed when the cue is presented only 30 minutes after training.

In another experiment, Aoki et al. show that if this region of the brain is removed, the fish are able to learn the avoidance behavior, remember it short-term, but cannot form any long-term memory of it.

“This indicates that short-term and long-term memories are formed and stored in different parts of the brain. We think that short-term memories must be transferred to the cortical region to be consolidated into long-term memories,” explains Dr. Aoki.

The team then tested whether memories for the best behavioral choices can be modified by new learning. The fish were trained to learn two opposite avoidance behaviors, each associated with a different LED color, blue or red, as a cue. They find that presentation of the different cues leads to the activation of different groups of neurons in the telencephalon, which indicates that different behavioral programs are stored and retrieved by different populations of neurons.

“Using calcium imaging on zebrafish, we were able to visualize an on-going process of memory consolidation for the first time. This approach opens new avenues for research into memory using zebrafish as model organism,” concludes Dr. Okamoto.

Filed under zebrafish brain activity neural activity memory formation LTM calcium ions neuroscience science

117 notes

Getting a grip on sleep
All mammals sleep, as do birds and some insects. However, how this basic function is regulated by the brain remains unclear. According to a new study by researchers from the RIKEN Brain Science Institute, a brain region called the lateral habenula plays a central role in the regulation of REM sleep. In an article published today in the Journal of Neuroscience, the team shows that the lateral habenula maintains and regulates REM sleep in rats through regulation of the serotonin system. This study is the first to show a role of the lateral habenula in linking serotonin metabolism and sleep.
The lateral habenula is a region of the brain known to regulate the metabolism of the neurotransmitter serotonin in the brain and to play a key role in cognitive functions.
“Serotonin plays a central role in the pathophysiology of depression, however, it is not clear how abnormalities in regulation of serotonin metabolism in the brain lead to symptoms such as insomnia in depression,” explain Dr. Hidenori Aizawa and Dr. Hitoshi Okamoto who led the study.
Since animals with increased serotonergic activity at the synapse experience less REM sleep, the researchers hypothesized that the lateral habenula, which regulates serotonergic activity in the brain, must modulate the duration of REM sleep.
They show that removing the lateral habenula in rats reduces hippocampal theta rhythm, an oscillatory activity that appears during REM sleep, and shortens the rats’ REM sleep periods. However, this inhibitory effect of the lateral habenular lesion on REM sleep disappears when the serotonergic neurons in the midbrain are lesioned.
The team recorded neural activity simultaneously in the lateral habenula and hippocampus in a sleeping rat. They find that the lateral habenular neurons, which fire persistently during non-REM sleep, begin to fire rhythmically in accordance with the theta rhythm in the hippocampus when the animal is in REM sleep.
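The theta rhythm referred to here is usually quantified by measuring the fraction of spectral power an LFP recording carries in the theta band (roughly 4–12 Hz in the rat). The sketch below does this with a plain FFT on a synthetic signal; the sampling rate, 7 Hz oscillation, and band edges are illustrative assumptions, not the study's recording parameters.

```python
import numpy as np

# Sketch: quantifying hippocampal theta in a local field potential (LFP)
# as the fraction of spectral power in the theta band (~4-12 Hz in rat).
fs = 250.0                                    # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)                  # 10 s of synthetic "recording"
rng = np.random.default_rng(2)
lfp = np.sin(2 * np.pi * 7.0 * t) + 0.5 * rng.normal(size=t.size)  # theta + noise

spectrum = np.abs(np.fft.rfft(lfp)) ** 2      # power spectrum
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

theta = (freqs >= 4) & (freqs <= 12)
theta_ratio = spectrum[theta].sum() / spectrum.sum()  # fraction of power in theta
print(round(theta_ratio, 2))
```

A lesion-induced drop in theta, as reported above, would show up as a drop in this ratio during REM epochs.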
“Our results indicate that the lateral habenula is essential for maintaining theta rhythms in the hippocampus, which characterize REM sleep in the rat, and that this is done via serotonergic modulation,” concludes Dr Aizawa.
“This study reveals a novel role of the lateral habenula, linking serotonin and REM sleep, which suggests that a hyperactive habenula in patients with depression may cause altered REM sleep,” add the authors.

Filed under serotonin sleep lateral habenula neural activity hippocampus neuroscience science

86 notes

Research determines how the brain computes tool use

With a goal of helping patients with spinal cord injuries, Jason Gallivan and a team of researchers at Queen’s University’s Department of Psychology and Centre for Neuroscience Studies are probing deep into the human brain to learn how it manages basic daily tasks.

The team’s most recent research, in collaboration with a group at Western University, investigated how the human brain supports tool use. The researchers were especially interested in determining the extent to which brain regions involved in planning actions with the hand alone would also be involved in planning actions with a tool. They found that although some brain regions were involved in planning actions with either the hand or tool alone, the vast majority were involved in planning both hand- and tool-related movements. In a subset of these latter brain areas the researchers further determined that the tool was in fact being represented as an extension of the hand.

“Tool use represents a defining characteristic of high-level cognition and behaviour across the animal kingdom but studying how the brain – and the human brain in particular – supports tool use remains a significant challenge for neuroscientists” says Dr. Gallivan. “This work is a considerable step forward in our understanding of how tool-related actions are planned in humans.”

Over the course of one year, human participants had their brain activity scanned using functional magnetic resonance imaging (fMRI) as they reached towards and grasped objects using either their hand or a set of plastic tongs. The tongs had been designed so they opened whenever participants closed their grip, requiring the participants to perform a different set of movements to use the tongs as opposed to when using their hand alone.

The team found that, mere seconds before the action began, the neural activity in some brain regions was predictive of the type of action about to be performed upon the object, regardless of whether the hand or tool was to be used (and despite the different movements required). By contrast, the predictive neural activity in other brain regions represented hand and tool actions separately: some regions coded only actions with the hand, whereas others coded only actions with the tool.
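Decoding an upcoming action from pre-movement activity patterns is typically done by training a pattern classifier on labeled trials. The sketch below uses a minimal nearest-centroid rule on synthetic "voxel" patterns; the class names, voxel count, and noise level are invented, and the study's actual multivariate analysis pipeline differs.

```python
import numpy as np

# Minimal decoding sketch: classify the planned action from a pre-movement
# activity pattern using a nearest-centroid rule. All data are synthetic.
rng = np.random.default_rng(3)
n_voxels = 50
grasp_mean = rng.normal(size=n_voxels)        # hypothetical class pattern
reach_mean = rng.normal(size=n_voxels)        # hypothetical class pattern

def simulate_trials(mean, n=20, noise=0.5):
    """Generate n noisy trials around a class-mean activity pattern."""
    return mean + noise * rng.normal(size=(n, n_voxels))

train = {"grasp": simulate_trials(grasp_mean),
         "reach": simulate_trials(reach_mean)}
centroids = {label: x.mean(axis=0) for label, x in train.items()}

def decode(pattern):
    """Predict the planned action as the nearest class centroid."""
    return min(centroids, key=lambda label: np.linalg.norm(pattern - centroids[label]))

test_trials = simulate_trials(grasp_mean, n=10)     # held-out "grasp" trials
accuracy = np.mean([decode(trial) == "grasp" for trial in test_trials])
print(accuracy)
```

Above-chance accuracy on held-out trials is the evidence that the pre-movement pattern carries information about the intended action, which is the logic behind the prosthetic-control application Dr. Gallivan describes below.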

“Being able to decode desired tool use behaviours from brain signals takes us one step closer to using those signals to control those same types of actions with prosthetic limbs,” says Dr. Gallivan. “This work uncovers the brain organization underlying the planning of movements with the hand and hand-operated tools and this knowledge could help people suffering from spinal cord injuries.”

The research was recently published in eLife.

(Source: queensu.ca)

Filed under tool use spinal cord injuries brain activity neural activity fMRI neuroscience science
