Posts tagged brainwaves

Playing computer games makes brains feel and think alike
Scientists have discovered that playing computer games can bring players’ emotional responses and brain activity into unison.
By measuring the activity of facial muscles and imaging the brain while gaming, the group found that people go through similar emotions and display matching brainwaves. The study, by researchers at the Helsinki Institute for Information Technology HIIT, is published in PLOS ONE.
– It’s well known that people who communicate face-to-face will start to imitate each other. People adopt each other’s poses and gestures, much like infectious yawning. What is less known is that the very physiology of interacting people shows a type of mimicry – which we call synchrony or linkage, explains Michiel Sovijärvi-Spapé.
In the study, test participants played a computer game called Hedgewars, in which they manage a team of animated hedgehogs and take turns shooting the opposing team with ballistic artillery. The goal is to destroy the opposing team’s hedgehogs. The research team varied the competitiveness of the gaming situation: players sometimes teamed up against the computer and were sometimes pitted directly against each other.
The players were measured for facial muscle reactions with facial electromyography, or fEMG, and their brainwaves were measured with electroencephalography, EEG.
– Replicating previous studies, we found linkage in the fEMG: the two players displayed similar emotional expressions at similar times. We further observed linkage in the brainwaves with EEG, says Sovijärvi-Spapé.
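The “linkage” the researchers describe can be illustrated with a toy computation: a windowed correlation between two players’ physiological signals. This is only a sketch of the general idea on synthetic data, not the statistical pipeline the HIIT group actually used.

```python
import numpy as np

def linkage(sig_a, sig_b, fs, win_s=2.0):
    """Windowed Pearson correlation between two players' signals
    (e.g. fEMG or band-filtered EEG) as a simple 'linkage' index."""
    win = int(win_s * fs)
    scores = []
    for start in range(0, min(len(sig_a), len(sig_b)) - win + 1, win):
        a = sig_a[start:start + win]
        b = sig_b[start:start + win]
        scores.append(np.corrcoef(a, b)[0, 1])
    return np.array(scores)

# Two noisy signals sharing a common slow component: high linkage expected.
fs = 256
t = np.arange(0, 10, 1 / fs)
common = np.sin(2 * np.pi * 0.5 * t)
rng = np.random.default_rng(0)
a = common + 0.3 * rng.standard_normal(t.size)
b = common + 0.3 * rng.standard_normal(t.size)
print(linkage(a, b, fs).mean())  # close to 1 for strongly linked signals
```

Two unrelated noise traces would instead hover near zero, which is what makes the windowed correlation a usable, if crude, synchrony measure.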
Strikingly, the more competitive the gaming became, the more synchronized the players’ emotional responses were. The participants also reported their emotions themselves, and negative emotions in particular were associated with the linkage effect.
– Although it may seem counterintuitive, the effect grows as a game becomes more competitive. The more competitive it gets, the more the players’ positive emotions begin to reflect each other, all while their experiences of negative emotions increase.
The results open promising avenues for further study.
– Feeling others’ emotions could be particularly beneficial in competitive settings: the linkage may enable one to better anticipate the actions of opponents.
Another interpretation suggested by the group is that the physiological linkage of emotion may compensate for a social bond that is strained by competing in a gaming setting.
– Since our participants were all friends before the game, we can speculate that the linkage is most prominent when a friendship is ‘threatened’ while competing against each other, ponders Sovijärvi-Spapé.
Rats! Humans and rodents face their errors
What happens when the brain recognizes an error? A new study shows that the brains of humans and rats adapt to errors in a similar way, using low-frequency brainwaves in the medial frontal cortex to synchronize neurons in the motor cortex. The finding could be important for studies of disorders of “adaptive control,” such as obsessive-compulsive disorder, ADHD, and Parkinson’s.
People and rats may think alike when they’ve made a mistake and are trying to adjust their thinking.
That’s the conclusion of a study published online Oct. 20 in Nature Neuroscience that tracked specific similarities in how human and rodent subjects adapted to errors as they performed a simple time estimation task. When members of either species made a mistake in the trials, electrode recordings showed that they employed low-frequency brainwaves in the medial frontal cortex (MFC) of the brain to synchronize neurons in their motor cortex. That action correlated with subsequent performance improvements on the task.
“These findings suggest that neuronal activity in the MFC encodes information that is involved in monitoring performance and could influence the control of response adjustments by the motor cortex,” wrote the authors, who performed the research at Brown University and Yale University.
The importance of the findings extends beyond a basic understanding of cognition, because they suggest that rat models could be a useful analog for humans in studies of how such “adaptive control” neural mechanics are compromised in psychiatric diseases.
“With this rat model of adaptive control, we are now able to examine whether novel drugs or other treatment procedures boost the integrity of this system,” said James Cavanagh, co-lead author of the paper who was at Brown when the research was done and has since become assistant professor of psychology at the University of New Mexico. “This may have clear translational potential for treating psychiatric diseases such as obsessive compulsive disorder, depression, attention deficit hyperactivity disorder, Parkinson’s disease and schizophrenia.”
To conduct the study, the researchers measured external brainwaves of human and rodent subjects after both erroneous and accurate performance on the time estimation task. They also measured the activity of individual neurons in the MFC and motor cortex of the rats in both post-error and post-correct circumstances.
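The kind of contrast described above, low-frequency activity after error trials versus after correct trials, can be sketched as a simple band-power comparison. The 4–8 Hz band and all of the numbers below are illustrative assumptions on synthetic data, not the study’s actual analysis.

```python
import numpy as np

def band_power(trials, fs, lo=4.0, hi=8.0):
    """Mean spectral power in a low-frequency band (here ~4-8 Hz)
    across trials; `trials` is an (n_trials, n_samples) array."""
    freqs = np.fft.rfftfreq(trials.shape[1], d=1 / fs)
    spectra = np.abs(np.fft.rfft(trials, axis=1)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return spectra[:, mask].mean()

# Synthetic example: post-error trials carry a stronger 6 Hz component.
fs, n = 250, 500
t = np.arange(n) / fs
rng = np.random.default_rng(1)
post_error = 2.0 * np.sin(2 * np.pi * 6 * t) + rng.standard_normal((20, n))
post_correct = 0.5 * np.sin(2 * np.pi * 6 * t) + rng.standard_normal((20, n))
print(band_power(post_error, fs) > band_power(post_correct, fs))  # True
```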
The scientists also gave the rats a drug that blocked activity in the MFC. Compared with rats that didn’t receive the drug, these animals showed no low-frequency waves in the motor cortex, their motor cortex neurons did not fire coherently, and they did not alter their subsequent behavior on the task.
Although the researchers were able to study the cognitive mechanisms in the rats in more detail than in humans, the direct parallels they saw in the neural mechanics of adaptive control were significant.
“Low-frequency oscillations facilitate synchronization among brain networks for representing and exerting adaptive control, including top-down regulation of behavior in the mammalian brain,” they wrote.
Stanford scientists build a ‘brain stethoscope’ to turn seizures into music
When Chris Chafe and Josef Parvizi began transforming recordings of brain activity into music, they did so with artistic aspirations. The professors soon realized, though, that the work could lead to a powerful biofeedback tool for identifying brain patterns associated with seizures.
Josef Parvizi was enjoying a performance by the Kronos Quartet when the idea struck. The musical troupe was midway through a piece in which the melodies were based on radio signals from outer space, and Parvizi, a neurologist at Stanford Medical Center, began wondering what the brain’s electrical activity might sound like set to music.
He didn’t have to look far for help. Chris Chafe, a professor of music research at Stanford, is one of the world’s foremost experts in “musification,” the process of converting natural signals into music. One of his previous works involved measuring the changing carbon dioxide levels near ripening tomatoes and converting those changing levels into electronic performances.
Parvizi, an associate professor, specializes in treating patients suffering from intractable seizures. To locate the source of a seizure, he places electrodes in patients’ brains to create electroencephalogram (EEG) recordings of both normal brain activity and a seizure state.
He shared a consenting patient’s EEG data with Chafe, who began setting the electrical spikes of the rapidly firing neurons to music. Chafe used a tone close to a human’s voice, in hopes of giving the listener an empathetic and intuitive understanding of the neural activity.
Upon a first listen, the duo realized they had done more than create an interesting piece of music. [Listen to the audio here]
"My initial interest was an artistic one at heart, but, surprisingly, we could instantly differentiate seizure activity from non-seizure states with just our ears," Chafe said. "It was like turning a radio dial from a static-filled station to a clear one."
If they could achieve the same result with real-time brain activity data, they might be able to develop a tool to allow caregivers for people with epilepsy to quickly listen to the patient’s brain waves to hear whether an undetected seizure might be occurring.
Parvizi and Chafe dubbed the device a “brain stethoscope.”
The sound of a seizure
The EEGs Parvizi conducts register brain activity from more than 100 electrodes placed inside the brain; Chafe selects certain electrode/neuron pairings and allows them to modulate notes sung by a female singer. As the electrode captures increased activity, it changes the pitch and inflection of the singer’s voice.
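The mapping described here, with higher electrode activity raising the pitch of a sung note, can be sketched as a simple amplitude-to-frequency rule. This is a hypothetical stand-in for Chafe’s musification, not his actual synthesis code; the base pitch and two-octave range are assumptions.

```python
import numpy as np

def eeg_to_pitch(amplitudes, base_hz=220.0, semitone_span=24):
    """Map normalized EEG activity (0-1) to pitches: stronger activity
    raises the sung note, a toy version of the musification idea."""
    amplitudes = np.clip(amplitudes, 0.0, 1.0)
    semitones = amplitudes * semitone_span  # up to two octaves of range
    return base_hz * 2 ** (semitones / 12.0)

activity = np.array([0.0, 0.5, 1.0])
print(eeg_to_pitch(activity))  # 220 Hz rising to 440 Hz and 880 Hz
```

Quiet pre-ictal activity would stay near the base pitch, while ictal bursts would push the “singer” up the scale, which is roughly the contrast the duo could hear by ear.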
Before the seizure begins – during the so-called pre-ictal stage – the peeps and pops from each “singer” almost synchronize and fall into a clear rhythm, as if they’re following a conductor, Chafe said.
In the moments leading up to the seizure event, though, each of the singers begins to improvise. The notes become progressively louder and more scattered, as the full seizure event occurs (the ictal state). The way Chafe has orchestrated his singers, one can hear the electrical storm originate on one side of the brain and eventually cross over into the other hemisphere, creating a sort of sing-off between the two sides of the brain.
After about 30 seconds of full-on chaos, the singers begin to calm, trailing off into their post-ictal rhythm. Occasionally, one or two will pipe up erratically, but on the whole, the choir sounds extremely fatigued.
It’s the perfect representation of the three phases of a seizure event, Parvizi said.
Part art exhibit, part experiment
Caring for a person with seizures can be very difficult, as not all seizure activity manifests itself with behavioral cues. It’s often impossible to know whether a person with epilepsy is acting confused because they are having a seizure, or if they are experiencing the type of confusion that is a marker of the post-ictal seizure phase.
To that end, Parvizi and Chafe hope to apply their work to develop a device that listens for the telltale brain patterns of an ongoing seizure or a post-ictal fatigued brain state.
"Someone – perhaps a mother caring for a child – who hasn’t received training in interpreting visual EEGs can hear the seizure rhythms and easily appreciate that there is a pathological brain phenomenon taking place," Parvizi said.
The device can also offer biofeedback to non-epileptic patients who want to hear the music their own brain waves create.
The effort to build this device is funded by Stanford’s Bio-X Interdisciplinary Initiatives Program (Bio-X IIP), which provides money for interdisciplinary projects that have potential to improve human health in innovative ways. Bio-X seed grants have funded 141 research collaborations connecting hundreds of faculty since 2000. The proof-of-concept projects have produced hundreds of publications, dozens of patents, and more than a tenfold return on research funds to Stanford.
From a clinical perspective, the work is still very experimental.
"We’ve really just stuck our finger in there," Chafe said. "We know that the music is fascinating and that we can hear important dynamics, but there are still wonderful revelations to be made."
Next year, Chafe and Parvizi plan to unveil a version of the system at Stanford’s Cantor Arts Center. Visitors will don a headset that will transmit an EEG of their brain activity to their handheld device, which will convert it into music in real time.
"This is what I like about Stanford," Parvizi said. "It nurtures collaboration between fields that are seemingly light-years apart – we’re neurology and music professors! – and our work together will hopefully make a positive impact on the world we live in."
Toward an early diagnostic tool for Alzheimer’s disease
Despite all the research done on Alzheimer’s, there is still no early diagnostic tool for the disease. By looking at the brain wave components of individuals with the disease, Professor Tiago H. Falk of INRS’s Centre Énergie Matériaux Télécommunications has identified a promising avenue of research that may not only help diagnose the disease, but also assess its severity. This non-invasive, objective method is the subject of an article in the journal PLOS ONE.
Patients with Alzheimer’s disease currently undergo neuropsychological testing to detect signs of the disease. The test results are difficult to interpret and are insufficient for making a definitive diagnosis. But as scientists have already discovered, activity in certain areas of the cerebral cortex is affected even in the early stages of the disease. Professor Falk, who specialises in biological signal acquisition, examined this phenomenon and compared the electroencephalograms (EEGs) of healthy individuals (n = 27), individuals with mild Alzheimer’s (n = 27), and individuals with moderate Alzheimer’s (n = 22). He found statistically significant differences across the three groups.
In collaboration with neurologists and Francisco J. Fraga, an INRS visiting professor specializing in biological signals, Professor Falk used an algorithm that dissects brain waves of varying frequencies. “What makes this algorithm innovative is that it characterizes the changes in temporal dynamics of the patients’ brain waves,” explains Professor Falk. “The findings show that healthy individuals have different patterns than those with mild Alzheimer’s disease. We also found a difference between patients with mild levels of the disease and those with moderate Alzheimer’s.”
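Falk’s algorithm is not described in detail here, but one common way to characterize the temporal dynamics of a brain-wave band is to filter the EEG to that band, extract its amplitude envelope, and summarize how the envelope fluctuates over time. The sketch below (assuming NumPy and SciPy are available) illustrates that general idea on a synthetic signal; it is not the published method.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_dynamics(eeg, fs, lo, hi):
    """Characterize the temporal dynamics of one EEG band: filter to the
    band, take the amplitude envelope, and summarize its fluctuation."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    band = filtfilt(b, a, eeg)
    envelope = np.abs(hilbert(band))
    # Coefficient of variation of the envelope: higher = less stable rhythm.
    return envelope.std() / envelope.mean()

# Synthetic alpha-band rhythm whose amplitude waxes and wanes slowly.
fs = 128
t = np.arange(0, 8, 1 / fs)
rng = np.random.default_rng(2)
eeg = np.sin(2 * np.pi * 10 * t) * (1 + 0.5 * np.sin(2 * np.pi * 0.3 * t))
eeg += 0.1 * rng.standard_normal(t.size)
print(band_dynamics(eeg, fs, 8, 12))
```

A diagnostic pipeline would compute such features per band and per electrode, then look for group differences, which is the shape of the comparison the article reports.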
To validate the model in order to eventually develop an early diagnostic tool for Alzheimer’s disease, Professor Falk’s team is sharing their algorithm on the NeuroAccelerator.org online data analysis portal. It is the first open source algorithm posted on the portal and may be used by researchers around the world to produce additional research findings.
Alzheimer’s disease accounts for 60% to 80% of all dementia cases in North America, and its prevalence is skyrocketing. This step toward an early diagnostic tool that is non-invasive, objective, and relatively inexpensive is therefore welcome news for the research community.
Why Some Remember Dreams, Others Don’t
People who tend to remember their dreams also respond more strongly than others to hearing their name when they’re awake, new research suggests.
Everyone dreams during sleep, but not everyone recalls the mental escapade the next day, and scientists aren’t sure why some people remember more than others.
To find out, researchers used electroencephalography to record the electrical activity in the brains of 36 people while the participants listened to background tunes, and occasionally heard their own first name. The brain measurements were taken during wakefulness and sleep. Half of the participants were called high recallers, because they reported remembering their dreams almost every day, whereas the other half, low recallers, said they only remembered their dreams once or twice a month.
When asleep, both groups showed similar changes in brain activity in response to hearing their names, which were played quietly enough not to wake them.
However, when awake, high recallers showed a more sustained decrease in a brain wave called the alpha wave when they heard their names, compared with the low recallers.
"It was quite surprising to see a difference between the groups during wakefulness," said study researcher Perrine Ruby, neuroscientist at Lyon Neuroscience Research Center in France.
The difference could reflect variations in the brains of high and low recallers that could have a role in how they dream, too, Ruby said.
Who remembers their dreams
A well-established theory suggests that a decrease in the alpha wave is a sign that brain regions are being inhibited from responding to outside stimuli. Studies show that when people hear a sudden sound or open their eyes, and more brain regions become active, the alpha wave is reduced.
In the study, as predicted, both groups showed a decrease in the alpha wave when they heard their names while awake. But high recallers showed a more prolonged decrease, which may be a sign their brains became more widely activated when they heard their names.
In other words, high recallers may engage more brain regions when processing sounds while awake, compared with low recallers, the researchers said.
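The alpha-wave measure the study relies on can be approximated as a before/after contrast of 8–12 Hz power around a stimulus. The sketch below uses synthetic signals and a plain FFT; it shows the idea of alpha desynchronization, not the study’s actual analysis.

```python
import numpy as np

def alpha_decrease(pre, post, fs, lo=8.0, hi=12.0):
    """Relative change in alpha-band (8-12 Hz) power after a stimulus;
    negative values mean the alpha wave decreased (desynchronization)."""
    def power(x):
        freqs = np.fft.rfftfreq(x.size, d=1 / fs)
        spec = np.abs(np.fft.rfft(x)) ** 2
        return spec[(freqs >= lo) & (freqs <= hi)].mean()
    return (power(post) - power(pre)) / power(pre)

# Synthetic trial: a strong 10 Hz rhythm that weakens after the name is heard.
fs = 200
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(3)
pre = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)
post = 0.4 * np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)
print(alpha_decrease(pre, post, fs) < 0)  # True: alpha power dropped
```

In the study’s terms, a high recaller would show this negative change persisting longer after the name than a low recaller.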
While people are asleep, the alpha wave behaves in the opposite way: it increases when a sudden sound is heard. Scientists aren’t certain why this happens, but one idea is that it protects the brain from being interrupted by sounds during sleep, Ruby said.
Indeed, the study participants showed an increase in the alpha wave in response to sounds during sleep, and there was no difference between the groups.
One possible explanation for the lack of difference, the researchers said, is that high recallers may have had a larger increase in alpha waves, but one so large that they woke up.
Time spent awake, during the night
The researchers saw that high recallers awoke more frequently during the night. They were awake, on average, for 30 minutes during the night, whereas low recallers were awake for 14 minutes. However, Ruby said “both figures are in the normal range, it’s not that there’s something wrong with either group.”
Altogether, the results suggest the brain of high recallers may be more reactive to stimuli such as sounds, which could make them wake up more easily. It is more likely a person would remember their dreams if they are awakened immediately after one, Ruby said.
However, waking up at night can account for only a part of the differences people show in remembering dreams. “There’s still much more to understand,” she said.
The study is published online (Aug. 13) in the journal Frontiers in Psychology.
Neuroscientists often use electroencephalography (EEG) as an inexpensive way to record electrical signals in the brain. Though it would be useful to run these recordings for long periods of time, that usually isn’t practical: EEG recording traditionally involves attaching many electrodes and cables to a patient’s scalp.
Now engineers at Imperial College London have developed an EEG device that can be worn inside the ear, like a hearing aid. They say the device will allow scientists to record EEGs for several days at a time, which would let doctors monitor patients who have regularly recurring problems such as seizures or microsleep.

“The ideal is to have a very stable recording system, and recordings which are repeatable,” explains co-creator Danilo Mandic. “It’s not interfering with your normal life, because there are acoustic vents so people can hear. After a while, they forget they’re having an EEG.”
By nestling the EEG inside the ear, the engineers avoid a lot of signal noise usually introduced by body movement. They can also ensure that the electrodes are always placed in exactly the same spot, which, they say, will make repeated readings more reliable.
Since the device attaches to just one area, it can record only from the temporal region. This limits its potential applications to events that involve local activity. Tzyy-Ping Jung, co-director of the University of California, San Diego’s Center for Advanced Neurological Engineering, says that this does not mean the device will not be valuable.
“Different modalities will have different applications. I would not rule out the usefulness of any modalities,” says Jung. “I think it’s a very good idea with very promising results.”
(Source: technologyreview.com)
Neural Simulations Hint at the Origin of Brain Waves
At EPFL’s Blue Brain facilities, computer models of individual neurons are being assembled into neural circuits that produce electrical signals akin to brain waves. The results, published in the journal Neuron, are helping solve the mystery of how and why these signals arise in the brain.
For almost a century, scientists have been studying brain waves to learn about mental health and the way we think. Yet the way billions of interconnected neurons work together to produce brain waves remains unknown. Now, scientists from EPFL’s Blue Brain Project in Switzerland, at the core of the European Human Brain Project, and the Allen Institute for Brain Science in the United States, show in the July 24th edition of the journal Neuron how a complex computer model is providing a new tool to solve the mystery.
The brain is composed of many different types of neurons, each of which carries electrical signals. Electrodes placed on the head or directly in brain tissue allow scientists to monitor the cumulative effect of this electrical activity as electroencephalography (EEG) signals. But what is it about the structure and function of each and every neuron, and the way they network together, that gives rise to the electrical signals measured in a mammalian brain?
Modeling Brain Circuitry
The Blue Brain Project is working to model a complete human brain. For the moment, Blue Brain scientists study rodent brain tissue and characterize different types of neurons in excruciating detail, recording their electrical properties, shapes, sizes, and how they connect.
To answer the question of brain-wave origin, researchers at EPFL’s Blue Brain Project and the Allen Institute joined forces with the help of the Blue Brain modeling facilities. Their work is based on a computer model of a neural circuit the likes of which have never been seen before, encompassing an unprecedented amount of detail and simulating 12,000 neurons.
“It is the first time that a model of this complexity has been used to study the underlying properties of brain waves,” says EPFL scientist Sean Hill.
In observing their model, the researchers noticed that the electrical activity swirling through the entire system was reminiscent of brain waves measured in rodents. Because the computer model uses an overwhelming amount of physical, chemical and biological data, the supercomputer simulation allows scientists to analyze brain waves at a level of detail simply unattainable with traditional monitoring of live brain tissue.
“We need a computer model because it is impossible to relate the electrical activity of potentially billions of individual neurons and the resulting brain waves at the same time,” says Hill. “Through this view, we’re able to provide an interpretation, at the single-neuron level, of brain waves that are measured when tissue is actually probed in the lab.”
Finding brain wave analogs
Neurons are somewhat like tiny batteries that need to be charged in order to fire off an electrical impulse known as a “spike”. It is through these “spikes” that neurons communicate with each other to produce thought and perception. To “recharge” a neuron, charged particles called ions must travel through minuscule ionic channels. These channels are like gates that regulate electrical current. Ultimately, the accumulation of multiple electrical signals throughout the entire circuit of neurons produces brain waves.
The challenge for scientists in this study was to incorporate into the simulation the thousands of parameters, per neuron, that describe these electrical properties. Once they did that, they saw that the overall electrical activity in their model of 12,000 neurons was akin to observations of brain activity in rodents, hinting at the origin of brain waves.
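At a vastly smaller scale than the model’s 12,000 biophysically detailed neurons, the core idea, that summed single-neuron activity produces an aggregate wave, can be illustrated with a toy leaky integrate-and-fire network. Everything below (parameters, coupling rule, the summed-potential “signal”) is a deliberately crude stand-in for the real biophysics.

```python
import numpy as np

def simulate_lfp(n_neurons=100, steps=2000, dt=1.0):
    """Toy leaky integrate-and-fire network: each neuron charges toward a
    threshold and spikes; the summed membrane activity is a crude stand-in
    for the aggregate signal an EEG electrode would pick up."""
    rng = np.random.default_rng(4)
    v = rng.uniform(0.0, 1.0, n_neurons)   # membrane potentials
    lfp = np.empty(steps)
    for t in range(steps):
        drive = 0.06 + 0.02 * rng.standard_normal(n_neurons)
        v = v + dt * (-0.05 * v + drive)   # leak plus noisy input current
        spikes = v >= 1.0
        v[spikes] = 0.0                    # reset after a spike
        v = v + 0.002 * spikes.sum()       # coupling: spikes excite everyone
        lfp[t] = v.sum()                   # aggregate 'brain wave' signal
    return lfp

lfp = simulate_lfp()
print(lfp.shape)
```

The Blue Brain simulation replaces each of these one-line neurons with thousands of parameters per cell, which is exactly why a supercomputer is needed.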
“Our model is still incomplete, but the electrical signals produced by the computer simulation and what was actually measured in the rat brain have some striking similarities,” says Allen Institute scientist Costas Anastassiou.
Hill adds, “For the first time, we show that the complex behavior of ion channels on the branches of the neurons contributes to the shape of brain waves.”
There is still much work to be done to arrive at a complete simulation. While the model’s electrical signals are analogous to in vivo measurements, the researchers caution that there are still many open questions, as well as room to improve the model. For instance, the simulation is modeled on neurons that control the hind limb, while the in vivo data represent brain waves from neurons that serve a similar function but control the whiskers instead.
“Even so, the computer model we used allowed us to characterize, and more importantly quantify, key features of how neurons produce these signals,” says Anastassiou.
The scientists are currently studying similar brain wave phenomena in larger and more realistic neural circuits.
This computer model is drawing cellular biophysics and cognitive neuroscience closer together, in order to achieve the same goal: understanding the brain. But the two disciplines share neither the methods nor the scientific language. By simulating electrical brain activity and relating the behavior of single neurons to brain waves, the researchers aim to bridge this gap, opening the way to better tools for diagnosing mental disorders, and on a deeper level, offering a better understanding of ourselves.
New tasks become as simple as waving a hand with brain-computer interfaces
Small electrodes placed on or inside the brain allow patients to interact with computers or control robotic limbs simply by thinking about how to execute those actions. This technology could improve communication and daily life for a person who is paralyzed or has lost the ability to speak from a stroke or neurodegenerative disease.
Now, University of Washington researchers have demonstrated that when humans use this technology – called a brain-computer interface – the brain behaves much like it does when completing simple motor skills such as kicking a ball, typing or waving a hand. Learning to control a robotic arm or a prosthetic limb could become second nature for people who are paralyzed.
“What we’re seeing is that practice makes perfect with these tasks,” said Rajesh Rao, a UW professor of computer science and engineering and a senior researcher involved in the study. “There’s a lot of engagement of the brain’s cognitive resources at the very beginning, but as you get better at the task, those resources aren’t needed anymore and the brain is freed up.”
Rao and UW collaborators Jeffrey Ojemann, a professor of neurological surgery, and Jeremiah Wander, a doctoral student in bioengineering, published their results online June 10 in the Proceedings of the National Academy of Sciences.
In this study, seven people with severe epilepsy were hospitalized for a monitoring procedure that tries to identify where in the brain seizures originate. Physicians cut through the scalp, drilled into the skull and placed a thin sheet of electrodes directly on top of the brain. While the physicians watched for seizure signals, the researchers conducted their study in parallel.
The patients were asked to move a mouse cursor on a computer screen by using only their thoughts to control the cursor’s movement. Electrodes on their brains picked up the signals directing the cursor to move, sending them to an amplifier and then a laptop to be analyzed. Within 40 milliseconds, the computer calculated the intentions transmitted through the signal and updated the movement of the cursor on the screen.
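The closed loop described above, in which brain signals are read, decoded and turned into cursor movement every 40 milliseconds, can be sketched in a few lines. The decoder below is a purely illustrative stand-in (a toy linear read-out over simulated samples), not the study's actual algorithm, and all names and parameters are assumptions:

```python
import random
import time

UPDATE_INTERVAL = 0.040  # the study reports a roughly 40 ms update cycle

def read_electrode_signals():
    """Stand-in for one window of amplified electrode samples (64 channels)."""
    return [random.gauss(0.0, 1.0) for _ in range(64)]

def decode_intended_velocity(samples):
    """Hypothetical decoder mapping the signal window to a cursor velocity.
    Real systems fit this mapping per patient; this is a toy linear read-out."""
    mean_amplitude = sum(samples) / len(samples)
    return mean_amplitude * 5.0, 0.0  # (dx, dy) in pixels per update

cursor_x, cursor_y = 0.0, 0.0
for _ in range(5):  # a few iterations of the control loop
    window = read_electrode_signals()
    dx, dy = decode_intended_velocity(window)
    cursor_x += dx
    cursor_y += dy
    time.sleep(UPDATE_INTERVAL)  # pace the loop at the reported update rate
print("final cursor position:", (cursor_x, cursor_y))
```

The point of the sketch is the structure, not the decoder: a fixed-rate loop of read, decode, update is what lets the user experience the cursor as continuously responsive.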
Researchers found that when patients started the task, brain activity was concentrated in the prefrontal cortex, an area associated with learning a new skill. But often after as little as 10 minutes, frontal brain activity lessened, and the brain signals shifted to patterns resembling those seen during more automatic actions.
“Now we have a brain marker that shows a patient has actually learned a task,” Ojemann said. “Once the signal has turned off, you can assume the person has learned it.”
While researchers have demonstrated success in using brain-computer interfaces in monkeys and humans, this is the first study that clearly maps the neurological signals throughout the brain. The researchers were surprised at how many parts of the brain were involved.
“We now have a larger-scale view of what’s happening in the brain of a subject as he or she is learning a task,” Rao said. “The surprising result is that even though only a very localized population of cells is used in the brain-computer interface, the brain recruits many other areas that aren’t directly involved to get the job done.”
Several types of brain-computer interfaces are being developed and tested. The least invasive is a device placed on a person’s head that can detect weak electrical signatures of brain activity. Basic commercial gaming products are on the market, but this technology isn’t very reliable yet because signals from eye blinking and other muscle movements interfere too much.
A more invasive alternative is to surgically implant electrodes in the brain tissue itself to record the activity of individual neurons. Researchers at Brown University and the University of Pittsburgh have demonstrated this in humans: patients unable to move their arms or legs have learned to control robotic arms using signals recorded directly from their brains.
The UW team tested electrodes on the surface of the brain, underneath the skull. This allows researchers to record brain signals at higher frequencies and with less interference than measurements from the scalp. A future wireless device could be built to remain inside a person’s head for a longer time to be able to control computer cursors or robotic limbs at home.
“This is one push as to how we can improve the devices and make them more useful to people,” Wander said. “If we have an understanding of how someone learns to use these devices, we can build them to respond accordingly.”
The research team, along with the National Science Foundation’s Engineering Research Center for Sensorimotor Neural Engineering headquartered at the UW, will continue developing these technologies.
Helicopter takes to the skies with the power of thought
A remote-controlled helicopter has been flown through a series of hoops around a college gymnasium in Minnesota.
It sounds like an everyday student project; however, there is one catch: the helicopter was controlled using just the power of thought.
The experiments were performed by researchers hoping to develop robots that can help restore autonomy to people who are paralysed or who suffer from neurodegenerative disorders.
Their study has been published today, 4 June 2013, in IOP Publishing’s Journal of Neural Engineering and is accompanied by a video of the helicopter control in action.
Five subjects (three female, two male) took part in the study, and each was able to control the four-blade helicopter, known as a quadcopter, quickly and accurately for a sustained period.
Lead author of the study Professor Bin He, from the University of Minnesota College of Science and Engineering, said: “Our study shows that for the first time, humans are able to control the flight of flying robots using just their thoughts, sensed from noninvasive brain waves.”
The noninvasive technique used was electroencephalography (EEG), which recorded the electrical activity of the subjects’ brain through a cap fitted with 64 electrodes.
Facing away from the quadcopter, the subjects were asked to imagine using their right hand, left hand, and both hands together; these thoughts instructed the quadcopter to turn right, turn left and lift, respectively, while resting let it descend. The quadcopter was driven at a pre-set forward velocity and steered through the sky by the subjects’ thoughts.
The subjects were positioned in front of a screen which relayed images of the quadcopter’s flight through an on-board camera, allowing them to see which direction it was travelling in. Brain signals were recorded by the cap and sent to the quadcopter over WiFi.
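The control scheme amounts to a small lookup from decoded motor-imagery classes to flight commands, layered on top of the constant forward velocity. The class labels, command names and numbers below are illustrative assumptions, not taken from the study's actual control software:

```python
# Hypothetical mapping from decoded motor-imagery classes to quadcopter
# velocity commands; all names and values here are illustrative.
FORWARD_SPEED = 0.5  # m/s, stand-in for the pre-set forward velocity

COMMANDS = {
    "imagine_left_hand":  {"yaw": -1.0, "climb": 0.0},   # turn left
    "imagine_right_hand": {"yaw": +1.0, "climb": 0.0},   # turn right
    "imagine_both_hands": {"yaw": 0.0,  "climb": +1.0},  # lift
    "rest":               {"yaw": 0.0,  "climb": -1.0},  # descend
}

def command_for(decoded_class):
    """Translate one decoded imagery class into a full velocity command."""
    cmd = dict(COMMANDS[decoded_class])
    cmd["forward"] = FORWARD_SPEED  # forward motion is always on
    return cmd

print(command_for("imagine_both_hands"))
```

Keeping forward velocity fixed means the decoder only has to distinguish a handful of imagery classes, which is what makes sustained, accurate flight feasible with noninvasive EEG.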
“In previous work we showed that humans could control a virtual helicopter using just their thoughts. I initially intended to use a small helicopter for this real-life study; however, the quadcopter is more stable, smooth and has fewer safety concerns,” continued Professor He.
After several different training sessions, the subjects were required to fly the quadcopter through two foam rings suspended from the gymnasium ceiling and were scored on three aspects: the number of times they sent the quadcopter through the rings; the number of times the quadcopter collided with the rings; and the number of times they went outside the experiment boundary.
A number of statistical tests were used to calculate how each subject performed.
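The three scores described above are simple event counts over a flight. Assuming each flight is logged as a sequence of events (the event names here are illustrative, not the study's actual logging format), the tally might look like:

```python
# Sketch of the three scoring metrics: ring passes, ring collisions,
# and boundary exits, counted from a hypothetical per-flight event log.
def score_flight(events):
    return {
        "ring_passes": events.count("through_ring"),
        "ring_collisions": events.count("hit_ring"),
        "boundary_exits": events.count("out_of_bounds"),
    }

log = ["through_ring", "hit_ring", "through_ring", "out_of_bounds"]
print(score_flight(log))  # {'ring_passes': 2, 'ring_collisions': 1, 'boundary_exits': 1}
```

Per-flight tallies like these are what the statistical tests would then compare across subjects and against the keyboard control condition.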
A group of subjects also directed the quadcopter with a keyboard in a control experiment, allowing for a comparison between a standardised method and brain control.
This process is one example of a brain–computer interface, in which a direct pathway between the brain and an external device is created to assist, augment or repair human cognitive or sensory-motor functions. Researchers are currently exploring ways to restore hearing, sight and movement using this approach.
“Our next goal is to control robotic arms using noninvasive brain wave signals, with the eventual goal of developing brain–computer interfaces that aid patients with disabilities or neurodegenerative disorders,” continued Professor He.

Painting through the power of thought enabled by scientists
To the viewer it is an accomplished semi-abstract image of flowers and clouds, but this painting was in fact produced by a paralysed woman solely through the power of thought.
Heide Pfützner, a former teacher from Leipzig, Germany, was diagnosed with Amyotrophic Lateral Sclerosis, also known as Motor Neurone Disease, yet she has produced a series of paintings with the aid of a new brain-controlled computer.
She has been trained to master a device that uses brain waves to control a palette of colours, shapes and brushes and so produce digital artworks.
Building on decades of knowledge about the meaning of the tiny electrical impulses created by the brain during thought, scientists have been able to create a computer programme which translates thoughts into electronic images.
As well as helping patients, like Mrs Pfützner, who have progressive brain diseases, the device can serve those who are “locked in” to a physically unresponsive state and therefore unable to communicate with the rest of the world.
The system works by detecting changes in the pattern of the user’s brain waves to allow them to select options in software and to move a cursor around a screen in front of them.
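Selection by brain-wave pattern change can be sketched as follows: each on-screen option is presented repeatedly, the response to each presentation is recorded, and the option whose presentations evoked the strongest average response is chosen. This is a minimal illustrative sketch; the option names and numbers are invented, and the real system's signal processing is far more involved:

```python
# Minimal sketch of option selection from evoked responses, assuming each
# option is flashed several times and one response value is recorded per flash.
def average(responses):
    return sum(responses) / len(responses)

def select_option(responses_by_option):
    """Pick the option whose flashes evoked the largest average response,
    on the premise that attending to an option changes the brain-wave pattern."""
    return max(responses_by_option, key=lambda opt: average(responses_by_option[opt]))

flashes = {
    "brush": [1.1, 0.9, 1.0],
    "red":   [3.2, 2.8, 3.0],  # attended option: larger evoked response
    "undo":  [0.8, 1.2, 1.0],
}
print(select_option(flashes))  # red
```

Averaging over repeated flashes is what makes the tiny attention-related signal stand out from the noise of ongoing brain activity.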