Neuroscience

Articles and news from the latest research reports.

Posts tagged EEG


The effects of working memory training on functional brain network efficiency
The human brain is a highly interconnected network. Recent studies have shown that the functional and anatomical features of this network are organized in an efficient small-world manner that confers high efficiency of information processing at relatively low connection cost. However, it has been unclear how the architecture of functional brain networks is related to performance in working memory (WM) tasks and if these networks can be modified by WM training. Therefore, we conducted a double-blind training study enrolling 66 young adults. Half of the subjects practiced three WM tasks and were compared to an active control group practicing three tasks with low WM demand. High-density resting-state electroencephalography (EEG) was recorded before and after training to analyze graph-theoretical functional network characteristics at an intracortical level. WM performance was uniquely correlated with power in the theta frequency, and theta power was increased by WM training. Moreover, the better a person’s WM performance, the more their network exhibited small-world topology. WM training shifted network characteristics in the direction of high performers, showing increased small-worldness within a distributed fronto-parietal network. Taken together, this is the first longitudinal study that provides evidence for the plasticity of the functional brain network underlying WM.
Full Article
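Small-worldness in studies like this one is usually quantified by comparing a network’s clustering and characteristic path length against random graphs of the same size. A minimal sketch of that index with networkx, run on a synthetic graph (the study’s actual EEG source-space pipeline is not reproduced here; the graph and parameters are purely illustrative):

```python
import networkx as nx

def small_world_sigma(G, n_rand=5, seed=0):
    """sigma = (C / C_rand) / (L / L_rand); values well above 1 indicate
    high clustering at near-random path length (small-world topology)."""
    C = nx.average_clustering(G)
    L = nx.average_shortest_path_length(G)
    n, m = G.number_of_nodes(), G.number_of_edges()
    C_rand = L_rand = 0.0
    for i in range(n_rand):
        R = nx.gnm_random_graph(n, m, seed=seed + i)
        if not nx.is_connected(R):            # measure the giant component
            R = R.subgraph(max(nx.connected_components(R), key=len))
        C_rand += nx.average_clustering(R)
        L_rand += nx.average_shortest_path_length(R)
    C_rand /= n_rand
    L_rand /= n_rand
    return (C / C_rand) / (L / L_rand)

# A ring lattice with a little random rewiring is the textbook small-world net:
G = nx.connected_watts_strogatz_graph(60, 6, 0.1, seed=1)
sigma = small_world_sigma(G)
```

In a real EEG analysis the graph would come from thresholded connectivity between cortical sources, but the index itself is computed the same way.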


Filed under working memory functional connectivity EEG performance neuroscience science


EEG study: Brain infers structure, rules of tasks
A new study documents the brain activity underlying our strong tendency to infer a structure of context and rules when learning new tasks (even when a structure isn’t valid). The findings, which revealed individual differences, show how we try to apply task knowledge to similar situations and could inform future research on learning disabilities.
In life, many tasks have a context that dictates the right actions, so when people learn to do something new, they’ll often infer cues of context and rules. In a new study, Brown University brain scientists took advantage of that tendency to track the emergence of such rule structures in the frontal cortex — even when such structure was not necessary or even helpful to learn — and to predict from EEG readings how people would apply them to learn new tasks speedily.
Context and rule structures are everywhere. They allow an iPhone user who switches to an Android phone, for example, to reason that dimming the screen would involve finding a “settings” icon that will probably lead to a slider control for “brightness.” But when the context changes, inflexible generalization can lead a person temporarily astray — like a small-town tourist who greets strangers on the streets of New York City. In some developmental learning disabilities, the whole process of inferring abstract structures may be impaired.
“The world tends to be organized, and so we probably develop prior [notions] over time that there is going to be a structure,” said Anne Collins, a postdoctoral scholar in the Department of Cognitive, Linguistic, and Psychological Sciences at Brown and lead author of the study published March 25 in the Journal of Neuroscience. “When the world is organized, you just reduce the size of what you have to learn about by being able to generalize across situations in which the same things usually happen together. It is efficient to generalize if there is structure, and there usually is structure.”
Read more


Filed under brain activity frontal cortex EEG learning psychology neuroscience science


How the brain recognizes familiar music

Research from McGill University reveals that the brain’s motor network helps people remember and recognize music that they have performed in the past better than music they have only heard. A recent study by Prof. Caroline Palmer of the Department of Psychology sheds new light on how humans perceive and produce sounds, and may pave the way for investigations into whether motor learning could improve or protect memory or cognitive impairment in aging populations. The research is published in the journal Cerebral Cortex.

“The memory benefit that comes from performing a melody rather than just listening to it, or saying a word out loud rather than just hearing or reading it, is known as the ‘production effect’ on memory,” says Prof. Palmer, a Canada Research Chair in Cognitive Neuroscience of Performance. “Scientists have debated whether the production effect is due to motor memories, such as knowing the feel of a particular sequence of finger movements on piano keys, or simply due to strengthened auditory memories, such as knowing how the melody tones should sound. Our paper provides new evidence that motor memories play a role in improving listeners’ recognition of tones they have previously performed.”


For the study, researchers recruited twenty skilled pianists from Lyon, France. The group was asked to learn simple melodies by either hearing them several times or performing them several times on a piano. Pianists then heard all of the melodies they had learned, some of which contained wrong notes, while their brain electric signals were measured using electroencephalography (EEG). 

“We found that pianists were better at recognizing pitch changes in melodies they had performed earlier,” said the study’s first author, Brian Mathias, a McGill PhD student who conducted the work at the Lyon Neuroscience Research Centre in France with additional collaborators Drs. Barbara Tillmann and Fabien Perrin.

The EEG measurements revealed larger changes in brain waves, and increased motor activity, for previously performed melodies than for heard melodies about 200 milliseconds after the wrong notes. This indicates that the brain quickly compares incoming auditory information with motor information stored in memory, allowing us to recognize whether a sound is familiar.
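The contrast described here is the standard event-related-potential (ERP) approach: cut the EEG into windows around each wrong note, average them so that random noise cancels, and compare conditions. A toy sketch with synthetic data (sampling rate, amplitudes, and timings are illustrative, not the study’s):

```python
import numpy as np

fs = 250                                   # sampling rate in Hz (illustrative)
rng = np.random.default_rng(0)

def epoch(signal, onsets, pre=0.1, post=0.5):
    """Cut a fixed window around each event onset (onsets in samples)."""
    a, b = int(pre * fs), int(post * fs)
    return np.stack([signal[o - a : o + b] for o in onsets])

# Synthetic channel: noise plus a deflection ~200 ms after each 'wrong note'
n = fs * 60
sig = rng.normal(0.0, 1.0, n)
onsets = np.arange(fs, n - fs, 2 * fs)
for o in onsets:
    sig[o + int(0.2 * fs) : o + int(0.3 * fs)] += 3.0

erp = epoch(sig, onsets).mean(axis=0)      # averaging cancels the noise
times = np.arange(erp.size) / fs - 0.1
peak_time = times[np.argmax(erp)]          # lands near the 200 ms mark
```

The study’s comparison would then be between the ERPs for performed versus merely heard melodies, rather than a single synthetic trace.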

“This paper helps us understand ‘experiential learning’, or ‘learning by doing’, and offers pedagogical and clinical implications,” said Mathias, “The role of the motor system in recognizing music, and perhaps also speech, could inform education theory by providing strategies for memory enhancement for students and teachers.”

(Source: mcgill.ca)

Filed under music memory motor learning EEG brainwaves learning neuroscience science


Study looks at better prediction for epileptic seizures through adaptive learning approach
A UT Arlington assistant engineering professor has developed a computational model that can more accurately predict when an epileptic seizure will occur next based on the patient’s personalized medical information.
The research, conducted by Shouyi Wang, an assistant professor in the Department of Industrial and Manufacturing Systems Engineering, was published as “Online Seizure Prediction Using an Adaptive Learning Approach” in IEEE Transactions on Knowledge and Data Engineering.
Wang’s model analyzes electroencephalography, or EEG, readings from an individual, to predict future seizures. Early warnings could lead a patient to use medicine to combat an oncoming seizure, he said.
“The challenge with seizure prediction has been that every epileptic is different. Some patients suffer several seizures a day. Others will go several years without experiencing a seizure,” Wang said. “But if we use the EEG readings to build a personalized data profile, we’re better able to understand what’s happening to that person.”
Epilepsy is one of the most common neurological disorders, characterized by recurrent seizures. Epilepsy and seizures affect nearly 3 million Americans at an estimated annual cost of $17.6 billion in direct and indirect costs, according to the national Epilepsy Foundation. About 10 percent of the American population will experience a seizure in their lifetime, the agency says.
Wang teamed with Wanpracha Art Chaovalitwongse of the University of Washington and Stephen Wong of the Rutgers Robert Wood Johnson Medical School for the research.
Wang said early indications are that the new computational model could provide 70 percent accuracy or better and give a prediction horizon of about 30 minutes before the actual seizure would occur.
The current model collects data through a cap embedded with EEG wires. Wang’s team is working to develop a less obtrusive EEG cap that will record and transmit readings to a box for easy data download or transmission.
Victoria Chen, professor and chairwoman of the Industrial and Manufacturing Systems Engineering Department, said Wang’s work in the area of bioinformatics offers hope for the many people who suffer from epilepsy.
“This computational model might be used to predict other life-threatening episodes of diseases,” Chen said.
Wang said his model builds upon an adaptive learning framework and achieves increasingly accurate predictions for each individual patient as more personalized medical data are collected.
“As a society, we’ve gotten really good at looking at the big picture,” Wang said. “We can tell you the likelihood of suffering a heart attack if you’re over a certain age, of a certain weight and if you smoke. But we have only started to personalize that data for individuals who are all different.”


Filed under epileptic seizure adaptive learning epilepsy EEG medicine technology neuroscience science


Stanford scientists build a ‘brain stethoscope’ to turn seizures into music
When Chris Chafe and Josef Parvizi began transforming recordings of brain activity into music, they did so with artistic aspirations. The professors soon realized, though, that the work could lead to a powerful biofeedback tool for identifying brain patterns associated with seizures. 
Josef Parvizi was enjoying a performance by the Kronos Quartet when the idea struck. The musical troupe was midway through a piece in which the melodies were based on radio signals from outer space, and Parvizi, a neurologist at Stanford Medical Center, began wondering what the brain’s electrical activity might sound like set to music.
He didn’t have to look far for help. Chris Chafe, a professor of music research at Stanford, is one of the world’s foremost experts in “musification,” the process of converting natural signals into music. One of his previous works involved measuring the changing carbon dioxide levels near ripening tomatoes and converting those changing levels into electronic performances.
Parvizi, an associate professor, specializes in treating patients suffering from intractable seizures. To locate the source of a seizure, he places electrodes in patients’ brains to create electroencephalogram (EEG) recordings of both normal brain activity and a seizure state.
He shared a consenting patient’s EEG data with Chafe, who began setting the electrical spikes of the rapidly firing neurons to music. Chafe used a tone close to a human’s voice, in hopes of giving the listener an empathetic and intuitive understanding of the neural activity.
Upon a first listen, the duo realized they had done more than create an interesting piece of music. [Listen to the audio here]
"My initial interest was an artistic one at heart, but, surprisingly, we could instantly differentiate seizure activity from non-seizure states with just our ears," Chafe said. "It was like turning a radio dial from a static-filled station to a clear one."
If they could achieve the same result with real-time brain activity data, they might be able to develop a tool to allow caregivers for people with epilepsy to quickly listen to the patient’s brain waves to hear whether an undetected seizure might be occurring.
Parvizi and Chafe dubbed the device a “brain stethoscope.”
The sound of a seizure
The EEGs Parvizi conducts register brain activity from more than 100 electrodes placed inside the brain; Chafe selects certain electrode/neuron pairings and allows them to modulate notes sung by a female singer. As the electrode captures increased activity, it changes the pitch and inflection of the singer’s voice.
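Chafe’s actual mapping isn’t published here, but the basic idea, letting a channel’s amplitude modulate pitch, can be sketched in a few lines (the pitch range and normalization below are illustrative choices, not his):

```python
import numpy as np

def amplitude_to_pitch(signal, f_lo=220.0, f_hi=880.0):
    """Map a channel's amplitude envelope onto a two-octave pitch range."""
    env = np.abs(signal)
    env = (env - env.min()) / (np.ptp(env) + 1e-12)   # normalize to [0, 1]
    return f_lo * (f_hi / f_lo) ** env                # log-spaced pitches

# Toy trace: a quiet baseline followed by a burst of high-amplitude activity
t = np.linspace(0.0, 2.0, 2000)
sig = np.sin(2 * np.pi * 10 * t) * np.where(t < 1.0, 0.2, 1.0)
pitches = amplitude_to_pitch(sig)          # louder activity -> higher pitch
```

Running one such mapping per electrode, each voiced by its own “singer,” is what produces the choir-like texture the article describes.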
Before the seizure begins – during the so-called pre-ictal stage – the peeps and pops from each “singer” almost synchronize and fall into a clear rhythm, as if they’re following a conductor, Chafe said.
In the moments leading up to the seizure event, though, each of the singers begins to improvise. The notes become progressively louder and more scattered, as the full seizure event occurs (the ictal state). The way Chafe has orchestrated his singers, one can hear the electrical storm originate on one side of the brain and eventually cross over into the other hemisphere, creating a sort of sing-off between the two sides of the brain.
After about 30 seconds of full-on chaos, the singers begin to calm, trailing off into their post-ictal rhythm. Occasionally, one or two will pipe up erratically, but on the whole, the choir sounds extremely fatigued.
It’s the perfect representation of the three phases of a seizure event, Parvizi said.
Part art exhibit, part experiment
Caring for a person with seizures can be very difficult, as not all seizure activity manifests itself with behavioral cues. It’s often impossible to know whether a person with epilepsy is acting confused because they are having a seizure, or if they are experiencing the type of confusion that is a marker of the post-ictal seizure phase.
To that end, Parvizi and Chafe hope to apply their work to develop a device that listens for the telltale brain patterns of an ongoing seizure or a post-ictal fatigued brain state.
"Someone – perhaps a mother caring for a child – who hasn’t received training in interpreting visual EEGs can hear the seizure rhythms and easily appreciate that there is a pathological brain phenomenon taking place," Parvizi said.
The device can also offer biofeedback to non-epileptic patients who want to hear the music their own brain waves create.
The effort to build this device is funded by Stanford’s Bio-X Interdisciplinary Initiatives Program (Bio-X IIP), which provides money for interdisciplinary projects that have potential to improve human health in innovative ways. Bio-X seed grants have funded 141 research collaborations connecting hundreds of faculty since 2000. The proof-of-concept projects have produced hundreds of publications, dozens of patents, and more than a tenfold return on research funds to Stanford.
From a clinical perspective, the work is still very experimental.
"We’ve really just stuck our finger in there," Chafe said. "We know that the music is fascinating and that we can hear important dynamics, but there are still wonderful revelations to be made."
Next year, Chafe and Parvizi plan to unveil a version of the system at Stanford’s Cantor Arts Center. Visitors will don a headset that will transmit an EEG of their brain activity to their handheld device, which will convert it into music in real time.
"This is what I like about Stanford," Parvizi said. "It nurtures collaboration between fields that are seemingly light-years apart – we’re neurology and music professors! – and our work together will hopefully make a positive impact on the world we live in."


Filed under brainwaves EEG neural activity seizures music brain stethoscope biofeedback neuroscience science


Coma: researchers observe never-before-detected brain activity
Researchers from the University of Montreal and their colleagues have found brain activity beyond a flat line EEG, which they have called Nu-complexes (from the Greek letter ν, nu). According to existing scientific data, researchers and doctors had established that beyond the so-called “flat line” (flat electroencephalogram or EEG), there is nothing at all, no brain activity, no possibility of life. This major discovery suggests that there is a whole new frontier in animal and human brain functioning.
The researchers observed a human patient in an extreme deep hypoxic coma under powerful anti-epileptic medication that he had been required to take due to his health issues. “Dr. Bogdan Florea from Romania contacted our research team because he had observed unexplainable phenomena on the EEG of a coma patient. We realized that there was cerebral activity, unknown until now, in the patient’s brain,” says Dr. Florin Amzica, director of the study and professor at the University of Montreal’s School of Dentistry.
Dr. Amzica’s team then decided to recreate the patient’s state in cats, the standard animal model for neurological studies. Using the anesthetic isoflurane, they placed the cats in an extremely deep—but completely reversible—coma. The cats passed the flat (isoelectric) EEG line, which is associated with silence in the cortex (the governing part of the brain). The team observed cerebral activity in 100% of the cats in deep coma, in the form of oscillations generated in the hippocampus, the part of the brain responsible for memory and learning processes. These oscillations, unknown until now, were transmitted to the master part of the brain, the cortex. The researchers concluded that the observed EEG waves, or Nu-complexes, were the same as those observed in the human patient.
Dr. Amzica stresses the importance of understanding the implications of these findings. “Those who have decided to or have to ‘unplug’ a near-brain-dead relative needn’t worry or doubt their doctor. The current criteria for diagnosing brain death are extremely stringent. Our finding may perhaps in the long term lead to a redefinition of the criteria, but we are far from that. Moreover, this is not the most important or useful aspect of our study,” Dr. Amzica said.
From Nu-complexes to therapeutic comas
The most useful aspect of this finding is the therapeutic potential, the neuroprotection, of the extreme deep coma. After a major injury, some patients are in such serious condition that doctors deliberately place them in an artificial coma to protect their body and brain so they can recover. But Dr. Amzica believes that the extreme deep coma experimented on the cats may be more protective.
“Indeed, an organ or muscle that remains inactive for a long time eventually atrophies. It is plausible that the same applies to a brain kept for an extended period in a state corresponding to a flat EEG,” says Professor Amzica. “An inactive brain coming out of a prolonged coma may be in worse shape than a brain that has had minimal activity. Research on the effects of extreme deep coma during which the hippocampus is active, through Nu-complexes, is absolutely vital for the benefit of patients.”
“Another implication of this finding is that we now have evidence that the brain is able to survive an extremely deep coma if the integrity of the nervous structures is preserved,” said lead author of the study, Daniel Kroeger. “We also found that the hippocampus can send ‘orders’ to the brain’s commander in chief, the cortex. Finally, the possibility of studying the learning and memory processes of the hippocampus during a state of coma will help further understanding of them. In short, all sorts of avenues for basic research are now open to us.”

Coma: researchers observe never-before- detected brain activity

Researchers from the University of Montreal and their colleagues have found brain activity beyond a flat line EEG, which they have called Nu-complexes (from the Greek letter n). According to existing scientific data, researchers and doctors had established that beyond the so-called “flat line” (flat electroencephalogram or EEG), there is nothing at all, no brain activity, no possibility of life. This major discovery suggests that there is a whole new frontier in animal and human brain functioning.

The researchers observed a human patient in an extreme deep hypoxic coma under powerful anti-epileptic medication that he had been required to take due to his health issues. “Dr. Bogdan Florea from Romania contacted our research team because he had observed unexplainable phenomena on the EEG of a coma patient. We realized that there was cerebral activity, unknown until now, in the patient’s brain,” says Dr. Florin Amzica, director of the study and professor at the University of Montreal’s School of Dentistry.

Dr. Amzica’s team then decided to recreate the patient’s state in cats, the standard animal model for neurological studies. Using the anesthetic isoflurane, they placed the cats in an extremely deep—but completely reversible—coma. The cats passed the flat (isoelectric) EEG line, which is associated with silence in the cortex (the governing part of the brain). The team observed cerebral activity in 100% of the cats in deep coma, in the form of oscillations generated in the hippocampus, the part of the brain responsible for memory and learning processes. These oscillations, unknown until now, were transmitted to the master part of the brain, the cortex. The researchers concluded that the observed EEG waves, or Nu-complexes, were the same as those observed in the human patient.
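As a toy illustration of the distinction the team relied on (purely a sketch with an arbitrary amplitude threshold and synthetic data, not a clinical criterion or the study's pipeline), an EEG segment can be flagged as near-isoelectric or active by its peak-to-peak amplitude:

```python
import numpy as np

def classify_segment(eeg, flat_uv=5.0):
    """Label a 1-D EEG trace (in microvolts) 'isoelectric' when its
    peak-to-peak amplitude stays under an (arbitrary) flatness threshold,
    otherwise 'active'."""
    return "isoelectric" if eeg.max() - eeg.min() < flat_uv else "active"

fs = 250                                        # Hz, assumed sampling rate
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
flat = 0.3 * rng.standard_normal(t.size)        # near-silent cortical trace
burst = flat + 20 * np.sin(2 * np.pi * 6 * t)   # slow oscillatory complex

print(classify_segment(flat))    # isoelectric
print(classify_segment(burst))   # active
```

In the study, the interesting finding was activity of the second kind appearing in animals whose cortical EEG had already passed the first.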

Dr. Amzica stresses the importance of understanding the implications of these findings. “Those who have decided to or have to ‘unplug’ a near-brain-dead relative needn’t worry or doubt their doctor. The current criteria for diagnosing brain death are extremely stringent. Our finding may perhaps in the long term lead to a redefinition of the criteria, but we are far from that. Moreover, this is not the most important or useful aspect of our study,” Dr. Amzica said.

From Nu-complexes to therapeutic comas

The most useful aspect of this finding is the therapeutic potential, the neuroprotection, of the extreme deep coma. After a major injury, some patients are in such serious condition that doctors deliberately place them in an artificial coma to protect their body and brain so they can recover. But Dr. Amzica believes that the extreme deep coma experimented on the cats may be more protective.

“Indeed, an organ or muscle that remains inactive for a long time eventually atrophies. It is plausible that the same applies to a brain kept for an extended period in a state corresponding to a flat EEG,” says Professor Amzica. “An inactive brain coming out of a prolonged coma may be in worse shape than a brain that has had minimal activity. Research on the effects of extreme deep coma during which the hippocampus is active, through Nu-complexes, is absolutely vital for the benefit of patients.”

“Another implication of this finding is that we now have evidence that the brain is able to survive an extremely deep coma if the integrity of the nervous structures is preserved,” said lead author of the study, Daniel Kroeger. “We also found that the hippocampus can send ‘orders’ to the brain’s commander in chief, the cortex. Finally, the possibility of studying the learning and memory processes of the hippocampus during a state of coma will help further understanding of them. In short, all sorts of avenues for basic research are now open to us.”

Filed under brain activity nu-complexes memory hippocampus EEG coma neuroscience science

1,898 notes

Researcher controls colleague’s motions in 1st human brain-to-brain interface
University of Washington researchers have performed what they believe is the first noninvasive human-to-human brain interface, with one researcher able to send a brain signal via the Internet to control the hand motions of a fellow researcher.
Using electrical brain recordings and a form of magnetic stimulation, Rajesh Rao sent a brain signal to Andrea Stocco on the other side of the UW campus, causing Stocco’s finger to move on a keyboard.
While researchers at Duke University have demonstrated brain-to-brain communication between two rats, and Harvard researchers have demonstrated it between a human and a rat, Rao and Stocco believe this is the first demonstration of human-to-human brain interfacing.
“The Internet was a way to connect computers, and now it can be a way to connect brains,” Stocco said. “We want to take the knowledge of a brain and transmit it directly from brain to brain.”
The researchers captured the full demonstration on video recorded in both labs.
Rao, a UW professor of computer science and engineering, has been working on brain-computer interfacing in his lab for more than 10 years and just published a textbook on the subject. In 2011, spurred by the rapid advances in technology, he believed he could demonstrate the concept of human brain-to-brain interfacing. So he partnered with Stocco, a UW research assistant professor in psychology at the UW’s Institute for Learning & Brain Sciences.
On Aug. 12, Rao sat in his lab wearing a cap with electrodes hooked up to an electroencephalography machine, which reads electrical activity in the brain. Stocco was in his lab across campus wearing a purple swim cap marked with the stimulation site for the transcranial magnetic stimulation coil that was placed directly over his left motor cortex, which controls hand movement.
The team had a Skype connection set up so the two labs could coordinate, though neither Rao nor Stocco could see the Skype screens.
Rao looked at a computer screen and played a simple video game with his mind. When he was supposed to fire a cannon at a target, he imagined moving his right hand (being careful not to actually move his hand), causing a cursor to hit the “fire” button. Almost instantaneously, Stocco, who wore noise-canceling earbuds and wasn’t looking at a computer screen, involuntarily moved his right index finger to push the space bar on the keyboard in front of him, as if firing the cannon. Stocco compared the feeling of his hand moving involuntarily to that of a nervous tic.
“It was both exciting and eerie to watch an imagined action from my brain get translated into actual action by another brain,” Rao said. “This was basically a one-way flow of information from my brain to his. The next step is having a more equitable two-way conversation directly between the two brains.”
The technologies used by the researchers for recording and stimulating the brain are both well-known. Electroencephalography, or EEG, is routinely used by clinicians and researchers to record brain activity noninvasively from the scalp. Transcranial magnetic stimulation is a noninvasive way of delivering stimulation to the brain to elicit a response. Its effect depends on where the coil is placed; in this case, it was placed directly over the brain region that controls a person’s right hand. By activating these neurons, the stimulation convinced the brain that it needed to move the right hand.
Computer science and engineering undergraduates Matthew Bryan, Bryan Djunaedi, Joseph Wu and Alex Dadgar, along with bioengineering graduate student Dev Sarma, wrote the computer code for the project, translating Rao’s brain signals into a command for Stocco’s brain.
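The project's code itself is not shown in the article, but the general recipe — detect a motor-imagery signature in the EEG, then issue a command — can be sketched. A classic signature is suppression of the mu rhythm (8–12 Hz) over motor cortex during imagined movement; the drop ratio below is an assumed, illustrative threshold, not the UW team's actual classifier:

```python
import numpy as np

def mu_band_power(eeg, fs, band=(8.0, 12.0)):
    """Power in the mu band (8-12 Hz) of a single-channel EEG window,
    computed from the FFT magnitude spectrum."""
    spec = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(eeg.size, 1 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spec[mask].sum()

def detect_imagined_movement(eeg, fs, rest_power, drop_ratio=0.5):
    """Motor imagery suppresses the mu rhythm over motor cortex, so a
    window whose mu power falls below half of resting power is treated
    as 'imagined movement' (the ratio is an assumed threshold)."""
    return mu_band_power(eeg, fs) < drop_ratio * rest_power

fs = 256
t = np.arange(0, 1, 1 / fs)
rest = np.sin(2 * np.pi * 10 * t)            # strong 10 Hz mu rhythm at rest
imagery = 0.2 * np.sin(2 * np.pi * 10 * t)   # suppressed rhythm during imagery

baseline = mu_band_power(rest, fs)
print(detect_imagined_movement(rest, fs, baseline))     # False
print(detect_imagined_movement(imagery, fs, baseline))  # True
```

In the demonstration, a positive detection on Rao's side would trigger the network message that drove the TMS pulse on Stocco's side.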
“Brain-computer interface is something people have been talking about for a long, long time,” said Chantel Prat, assistant professor in psychology at the UW’s Institute for Learning & Brain Sciences, and Stocco’s wife and research partner who helped conduct the experiment. “We plugged a brain into the most complex computer anyone has ever studied, and that is another brain.”
At first blush, this breakthrough brings to mind all kinds of science fiction scenarios. Stocco jokingly referred to it as a “Vulcan mind meld.” But Rao cautioned this technology only reads certain kinds of simple brain signals, not a person’s thoughts. And it doesn’t give anyone the ability to control another person’s actions against their will.
Both researchers were in the lab wearing highly specialized equipment and under ideal conditions. They also had to obtain and follow a stringent set of international human-subject testing rules to conduct the demonstration.
“I think some people will be unnerved by this because they will overestimate the technology,” Prat said. “There’s no possible way the technology that we have could be used on a person unknowingly or without their willing participation.”
Stocco said years from now the technology could be used, for example, by someone on the ground to help a flight attendant or passenger land an airplane if the pilot becomes incapacitated. Or a person with disabilities could communicate his or her wish, say, for food or water. The brain signals from one person to another would work even if they didn’t speak the same language.
Rao and Stocco next plan to conduct an experiment that would transmit more complex information from one brain to the other. If that works, they then will conduct the experiment on a larger pool of subjects.

Filed under brain-to-brain interface transcranial magnetic stimulation EEG neuroscience science

130 notes

Two left feet? Study looks to demystify why we lose our balance
It’s always in front of a million people and feels like eternity. You’re strolling along when suddenly you’ve stumbled—the brain realizes you’re falling, but your muscles aren’t doing anything to stop it.
For a young person, a fall is usually just embarrassing. However, for the elderly, falling can be life threatening. Among the elderly who break a hip, 80 percent die within a year.
University of Michigan researchers believe that the critical window of time between when the brain senses a fall and the muscles respond may help explain why so many older people suffer these serious falls. A better understanding of what happens in the brain and muscles during this lag could go a long way toward prevention.
To that end, researchers at the U-M School of Kinesiology developed a novel way of looking at the electrical response in the brain before and during a fall by using an electroencephalogram.
Findings showed that many areas of the brain sense and respond to a fall, but that happens well before the muscles react. Lead researcher Daniel Ferris likened the study method to recording an orchestra with many microphones and then teasing out the sounds of specific instruments. In the study, researchers measured electrical activity in different regions of the brain.
"We’re using an EEG in a way others don’t, to look at what’s going on inside the brain," said Ferris, a professor in kinesiology. "We were able to determine what parts of the brain first identify when you are losing your balance during walking."
During the study, healthy young subjects with electrodes attached to their scalps walked on a balance beam mounted to a treadmill. When participants lost their balance and went off the beam, they simply continued walking on the moving treadmill, thus avoiding injury.
Ferris and colleagues then used a method called independent components analysis to separate and visualize the electrical activity in different parts of the brain. They found that people sense the start of a fall much better with both feet on the ground. Two grounded feet make it easier to determine where the ground is relative to the body, but people aren’t as sure of their stability on one foot.
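Ferris's orchestra analogy maps directly onto ICA's unmixing step: many electrodes each record a different blend of underlying sources, and the algorithm recovers the statistically independent components. A minimal sketch using scikit-learn's FastICA on simulated sources (not the study's data):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two simulated "instruments": a 6 Hz theta rhythm and a slow square wave.
fs = 128
t = np.arange(0, 4, 1 / fs)
sources = np.c_[np.sin(2 * np.pi * 6 * t),
                np.sign(np.sin(2 * np.pi * 1 * t))]

# Each scalp electrode hears a different mixture of the sources.
mixing = np.array([[1.0, 0.5],
                   [0.4, 1.2]])
electrodes = sources @ mixing.T

# FastICA unmixes the recordings into statistically independent
# components, recovered up to sign and scale.
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(electrodes)
print(recovered.shape)  # (512, 2)
```

With real EEG, the same unmixing separates activity attributable to different cortical regions (and artifacts such as eye blinks) from the electrode mixtures.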
The researchers were surprised that so many different parts of the brain activate during a fall, and they didn’t expect the brain to recognize a loss of balance as early as it does.
Future studies comparing the elderly with younger subjects could determine if the elderly sense falls too late, in which case, pharmaceuticals might help them regain their balance. If it’s a simple motor problem such as muscles not responding properly, strengthening exercises could help.
Other experiments under the same grant in the Ferris lab look to separate sensory and motor contributions to brain activity during walking.
The study, “Loss of balance during balance beam walking elicits a broadly distributed theta band electrocortical response,” was published in advance online in the Journal of Neurophysiology.

Filed under brain activity EEG loss of balance sensorimotor cortex neuroscience science

162 notes

Our brains can (unconsciously) save us from temptation
Inhibitory self-control – not picking up a cigarette, not having a second drink, not spending when we should be saving – can operate without our awareness or intention.
That was the finding by scientists at the University of Pennsylvania’s Annenberg School for Communication and the University of Illinois at Urbana-Champaign. They demonstrated through neuroscience research that inaction-related words in our environment can unconsciously influence our self-control. Although we may mindlessly eat cookies at a party, stopping ourselves from over-indulging may seem impossible without a deliberate, conscious effort. However, it turns out that overhearing someone – even in a completely unrelated conversation – say something as simple as “calm down” might trigger us to stop our cookie eating frenzy without realizing it.
The findings were reported in the journal Cognition by Justin Hepler, M.A., University of Illinois; and Dolores Albarracín, Ph.D., the Martin Fishbein Chair of Communication and a Professor of Psychology at Penn.
Volunteers completed a study where they were given instructions to press a computer key when they saw the letter “X” on the computer screen, or not press a key when they saw the letter “Y.” Their actions were affected by subliminal messages flashing rapidly on the screen. Action messages (“run,” “go,” “move,” “hit,” and “start”) alternated with inaction messages (“still,” “sit,” “rest,” “calm,” and “stop”) and nonsense words (“rnu,” or “tsi”). The participants were equipped with electroencephalogram recording equipment to measure brain activity.
The unique aspect of this test is that the action or inaction messages had nothing to do with the actions or inactions volunteers were doing, yet Hepler and Albarracín found that the action/inaction words had a definite effect on the volunteers’ brain activity. Unconscious exposure to inaction messages increased the activity of the brain’s self-control processes, whereas unconscious exposure to action messages decreased this same activity.
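The EEG measure behind such claims is typically an event-related potential: many epochs time-locked to the no-go cue are averaged so trial-to-trial noise cancels and the inhibition-related response emerges. A synthetic sketch (the 250 Hz rate, component shape, and trial counts are assumptions for illustration, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 250                                      # Hz, assumed sampling rate
t = np.arange(-0.2, 0.8, 1 / fs)              # seconds around the no-go cue
# A ~300 ms positive component buried in heavy single-trial noise:
component = 5 * np.exp(-((t - 0.3) ** 2) / 0.005)
epochs = component + rng.normal(scale=4.0, size=(60, t.size))

erp = epochs.mean(axis=0)                     # average across 60 trials
peak_time = t[np.argmax(erp)]
print(f"ERP peak near {peak_time:.2f} s after the cue")
```

Comparing the amplitude of such averaged responses across prime conditions is how "increased activity of the brain's self-control processes" is quantified.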
“Many important behaviors such as weight loss, giving up smoking, and saving money involve a lot of self-control,” the researchers noted. “While many psychological theories state that actions can be initiated automatically with little or no conscious effort, these same theories view inhibition as an effortful, consciously controlled process. Although reaching for that cookie doesn’t require much thought, putting it back on the plate seems to require a deliberate, conscious intervention. Our research challenges the long-held assumption that inhibition processes require conscious control to operate.”
The full article, “Complete unconscious control: Using (in)action primes to demonstrate completely unconscious activation of inhibitory control mechanisms,” will be available in the September issue of the journal.
(Image: Getty Images)

Filed under brain activity self-control EEG inhibition neuroscience science

75 notes

A new tool for brain research

Physicists and neuroscientists from The University of Nottingham and University of Birmingham have unlocked one of the mysteries of the human brain, thanks to new research using functional Magnetic Resonance Imaging (fMRI) and electroencephalography (EEG).

The work will enable neuroscientists to map a kind of brain function that up to now could not be studied, allowing a more accurate exploration of how both healthy and diseased brains work.

Functional MRI is commonly used to study how the brain works, by providing spatial maps of where in the brain external stimuli, such as pictures and sounds, are processed. The fMRI scan does this by detecting indirect changes in the brain’s blood flow in response to changes in electrical signalling during the stimulus.

Combining techniques

A signal change that happens after the stimulus has stopped is also observed with the fMRI scan. This is called the post-stimulus signal and up until now it has not been used to study how the brain works because its origin was uncertain.

In novel experiments, the research team has now combined fMRI techniques with EEG, which measures electrical activity in the brain, to show that the post-stimulus signal, too, reflects changes in brain signalling.

Eighteen healthy volunteers were monitored using EEG to measure the electrical activity generated by their brains’ neurons (the signalling cells) while fMRI measurements were recorded simultaneously. A stimulus of electrical pulses was used to activate the part of the brain that controls movement in the right thumb.

The scientists then compared the EEG and fMRI signals and found that they both vary in the same way after the stimulus stops. This provides compelling evidence that the post-stimulus fMRI signal is a measure of neuronal activity rather than just changes in the brain’s blood flow. Curiously, the team also found the post-stimulus fMRI signal was not consistent, even though the stimulus input to the brain was the same each time. This natural variability in the brain response was also reflected in the EEG activity, and the researchers suggest that this signal might help the brain make the transition from processing stimuli back to its internal thoughts in different ways.
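The core of such a comparison is asking whether the two measures co-vary trial by trial, for instance via a Pearson correlation of per-trial response amplitudes. A synthetic sketch (all numbers invented; when both modalities track the same latent neuronal fluctuation, their amplitudes correlate):

```python
import numpy as np

rng = np.random.default_rng(0)
neural = rng.normal(size=40)                    # latent per-trial neuronal activity
eeg_amp = neural + 0.3 * rng.normal(size=40)    # EEG estimate + measurement noise
fmri_amp = neural + 0.3 * rng.normal(size=40)   # fMRI estimate + measurement noise

r = np.corrcoef(eeg_amp, fmri_amp)[0, 1]
print(f"trial-wise EEG/fMRI correlation: r = {r:.2f}")
```

A high correlation of this kind is what licenses the inference that the post-stimulus fMRI signal reflects neuronal activity rather than a purely vascular effect.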

New window

Dr Karen Mullinger from The University of Nottingham’s Sir Peter Mansfield Magnetic Resonance Centre said: “This work opens a new window of time in the fMRI signal in which we can look at what the brain is doing. It may also open up new research avenues in exploring the function of the healthy brain and the study of neurological diseases.”

Dr Stephen Mayhew from Birmingham University Imaging Centre said: “We do not know what the exact role of the post-stimulus activity is or why this response is not always consistent when the stimulus input to the brain is the same. We have already secured funding through the Birmingham-Nottingham Strategic Collaboration Fund to continue this research into further understanding of human brain function using combinations of neuroimaging methods.”

Director of the Sir Peter Mansfield Magnetic Resonance Centre, Professor Peter Morris, said: “Functional magnetic resonance imaging is the main tool available to cognitive neuroscientists for the investigation of human brain function. The demonstration in this paper, that the secondary fMRI response (the post-stimulus undershoot) is not simply a passive blood flow response, but is directly related to synchronous neural activity, as measured with EEG, heralds an exciting new chapter in our understanding of the workings of the human mind.”

The work has been funded by the Medical Research Council (MRC), Engineering and Physical Science Research Council (EPSRC), The University of Nottingham Anne McLaren Fellowships and University of Birmingham Fellowship and is published in the Proceedings of the National Academy of Sciences (PNAS).

(Source: nottingham.ac.uk)

Filed under neuroimaging fMRI EEG brain function brain activity neurological diseases neuroscience science
