Neuroscience

Articles and news from the latest research reports.

Posts tagged brain activity

Monkeys Use Minds to Move Two Virtual Arms

In a study led by Duke researchers, monkeys have learned to control the movement of both arms on an avatar using just their brain activity.

The findings, published Nov. 6, 2013, in the journal Science Translational Medicine, advance efforts to develop bilateral movement in brain-controlled prosthetic devices for severely paralyzed patients.

To enable the monkeys to control two virtual arms, researchers recorded nearly 500 neurons from multiple areas in both cerebral hemispheres of the animals’ brains, the largest number of neurons recorded and reported to date.

Millions of people worldwide suffer from sensory and motor deficits caused by spinal cord injuries. Researchers are working to develop tools to help restore their mobility and sense of touch by connecting their brains with assistive devices. The brain-machine interface approach, pioneered at the Duke University Center for Neuroengineering in the early 2000s, holds promise for reaching this goal. However, until now brain-machine interfaces could only control a single prosthetic limb.

“Bimanual movements in our daily activities — from typing on a keyboard to opening a can — are critically important,” said senior author Miguel Nicolelis, M.D., Ph.D., professor of neurobiology at Duke University School of Medicine. “Future brain-machine interfaces aimed at restoring mobility in humans will have to incorporate multiple limbs to greatly benefit severely paralyzed patients.”

Nicolelis and his colleagues studied large-scale cortical recordings to see if they could provide sufficient signals to brain-machine interfaces to accurately control bimanual movements.

The monkeys were trained in a virtual environment within which they viewed realistic avatar arms on a screen and were encouraged to place their virtual hands on specific targets in a bimanual motor task. The monkeys first learned to control the avatar arms using a pair of joysticks, and then learned to move both avatar arms using just their brain activity, without moving their own arms.

As the animals’ performance in controlling both virtual arms improved over time, the researchers observed widespread plasticity in cortical areas of their brains. These results suggest that the monkeys’ brains may incorporate the avatar arms into their internal image of their bodies, a finding recently reported by the same researchers in the journal Proceedings of the National Academy of Sciences.

The researchers also found that cortical regions showed specific patterns of neuronal electrical activity during bimanual movements that differed from the neuronal activity produced for moving each arm separately.

The study suggests that very large neuronal ensembles — not single neurons — define the underlying physiological unit of normal motor functions. Small neuronal samples of the cortex may be insufficient to control complex motor behaviors using a brain-machine interface.

“When we looked at the properties of individual neurons, or of whole populations of cortical cells, we noticed that simply summing up the neuronal activity correlated to movements of the right and left arms did not allow us to predict what the same individual neurons or neuronal populations would do when both arms were engaged together in a bimanual task,” Nicolelis said. “This finding points to an emergent brain property — a non-linear summation — for when both hands are engaged at once.”
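A toy numerical example makes the point concrete (hypothetical firing-rate numbers, not data from the study): when a neuron’s bimanual response includes an interaction term, summing its unimanual responses mispredicts the bimanual rate.

```python
# Toy firing-rate model for a single neuron (all numbers hypothetical).
# Unimanual responses are linear in each arm's velocity, but a bimanual
# interaction term makes the joint response non-additive.
BASELINE = 5.0  # spontaneous rate, Hz

def firing_rate(left_v, right_v):
    return BASELINE + 2.0 * left_v + 3.0 * right_v - 1.5 * left_v * right_v

left_only = firing_rate(1.0, 0.0)                      # 7.0 Hz
right_only = firing_rate(0.0, 1.0)                     # 8.0 Hz
bimanual = firing_rate(1.0, 1.0)                       # 8.5 Hz

# Naive linear prediction: sum the unimanual responses above baseline.
linear_prediction = left_only + right_only - BASELINE  # 10.0 Hz
print(bimanual, linear_prediction)                     # → 8.5 10.0
```

The interaction term here plays the role of the “non-linear summation” Nicolelis describes: a decoder fit only on unimanual trials would systematically mispredict activity when both arms move together.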

Nicolelis is incorporating the study’s findings into the Walk Again Project, an international collaboration working to build a brain-controlled neuroprosthetic device. The Walk Again Project plans to demonstrate its first brain-controlled exoskeleton, which is currently being developed, during the opening ceremony of the 2014 FIFA World Cup.

Filed under brain activity prosthetics bimanual movements neurons plasticity neuroscience science

Personal reflection triggers increased brain activity during depressive episodes

Research by the University of Liverpool has found that people experiencing depressive episodes display increased brain activity when they think about themselves.

Using functional magnetic resonance imaging (fMRI) brain imaging technologies, scientists found that people experiencing a depressive episode process information about themselves in the brain differently to people who are not depressed.

British Queen

Researchers scanned the brains of people in major depressive episodes, and of people who were not, while they chose positive, negative and neutral adjectives to describe either themselves or the British Queen – a figure significantly removed from their daily lives but one with whom all participants were familiar.

Professor Peter Kinderman, Head of the University’s Institute of Psychology, Health and Society, said: “We found that participants who were experiencing depressed mood chose significantly fewer positive words and more negative and neutral words to describe themselves, in comparison to participants who were not depressed.

“That’s not too surprising, but the brain scans also revealed significantly greater blood oxygen levels in the medial superior frontal cortex – the area associated with processing self-related information – when the depressed participants were making judgments about themselves.

“This research leads the way for further studies into the psychological and neural processes that accompany depressed mood. Understanding more about how people evaluate themselves when they are depressed, and how neural processes are involved, could lead to improved understanding and care.”

Dr May Sarsam, from the Mersey Care NHS Trust, said: “This study explored the difference between medical and psychological theories of depression. It showed that brain activity only differed when depressed people thought about themselves, not when they thought about the Queen or when they made other types of judgements, which fits very well with current psychological theory.

Equally important

“Thought and neurochemistry should be considered as equally important in our understanding of mental health difficulties such as depression.”

Depression is associated with extensive negative feelings and thoughts. Nearly one-fifth of adults experience anxiety or depression, with the conditions affecting a higher proportion of women than men.

The research, in collaboration with the Mersey Care NHS Trust and the Universities of Manchester, Edinburgh and Lancaster, is published in PLOS ONE.

Filed under anxiety depression neuroimaging brain activity frontal cortex psychology neuroscience science

A new way to monitor induced comas

After suffering a traumatic brain injury, patients are often placed in a coma to give the brain time to heal and allow dangerous swelling to dissipate. These comas, which are induced with anesthesia drugs, can last for days. During that time, nurses must closely monitor patients to make sure their brains are at the right level of sedation — a process that MIT’s Emery Brown describes as “totally inefficient.”

“Someone has to be constantly coming back and checking on the patient, so that you can hold the brain in a fixed state. Why not build a controller to do that?” says Brown, the Edward Hood Taplin Professor of Medical Engineering in MIT’s Institute for Medical Engineering and Science, who is also an anesthesiologist at Massachusetts General Hospital (MGH) and a professor of health sciences and technology at MIT.

Brown and colleagues at MGH have now developed a computerized system that can track patients’ brain activity and automatically adjust drug dosages to maintain the correct state. They have tested the system — which could also help patients who suffer from severe epileptic seizures — in rats and are now planning to begin human trials.

Maryam Shanechi, a former MIT grad student who is now an assistant professor at Cornell University, is the lead author of the paper describing the computerized system in the Oct. 31 online edition of the journal PLoS Computational Biology.

Tracking the brain

Brown and his colleagues have previously analyzed the electrical waves produced by the brain in different states of activity. Each state — awake, asleep, sedated, anesthetized and so on — has a distinctive electroencephalogram (EEG) pattern.

When patients are in a medically induced coma, the brain is quiet for up to several seconds at a time, punctuated by short bursts of activity. This pattern, known as burst suppression, allows the brain to conserve vital energy during times of trauma.

As a patient enters an induced coma, the doctor or nurse controlling the infusion of anesthesia drugs tries to aim for a particular number of “bursts per screen” as the EEG pattern streams across the monitor. This pattern has to be maintained for hours or days at a time.

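Counting “bursts per screen” is straightforward to automate. A minimal sketch (illustrative threshold and gap parameters, not the clinical criteria) counts supra-threshold runs separated by sufficiently long quiet gaps:

```python
def count_bursts(samples, threshold, min_gap):
    """Count EEG bursts as runs of samples whose absolute amplitude
    exceeds `threshold`, where a new burst starts only after at least
    `min_gap` consecutive quiet samples. Illustrative method and
    parameter names only, not a clinical algorithm."""
    bursts = 0
    quiet = min_gap  # start "quiet" so the first burst is counted
    for x in samples:
        if abs(x) > threshold:
            if quiet >= min_gap:
                bursts += 1
            quiet = 0
        else:
            quiet += 1
    return bursts

# Two bursts separated by a long suppression period:
trace = [0.0] * 50 + [1.2, -1.1, 0.9] + [0.0] * 200 + [1.5, 1.3] + [0.0] * 50
print(count_bursts(trace, threshold=0.5, min_gap=100))  # → 2
```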
“If ever there were a time to try to build an autopilot, this is the perfect time,” says Brown, who is a professor in MIT’s Department of Brain and Cognitive Sciences. “Imagine that you’re going to fly for two days and I’m going to give you a very specific course to maintain over long periods of time, but I still want you to keep your hand on the stick to fly the plane. It just wouldn’t make sense.”

To achieve automated control, Brown and colleagues built a brain-machine interface — a direct communication pathway between the brain and an external device that typically assists human cognitive, sensory or motor functions. In this case, the device — an EEG system, a drug-infusion pump, a computer and a control algorithm — uses the anesthesia drug propofol to maintain the brain at a target level of burst suppression.

The system is a feedback loop that adjusts the drug dosage in real time based on EEG burst-suppression patterns. The control algorithm interprets the rat’s EEG, calculates how much drug is in the brain, and adjusts the amount of propofol infused into the animal second-by-second.

The controller can increase the depth of a coma almost instantaneously, which would be impossible for a human to do accurately by hand. The system could also be programmed to bring a patient out of an induced coma periodically so doctors could perform neurological tests, Brown says.

This type of system could take much of the guesswork out of patient care, says Sydney Cash, an associate professor of neurology at Harvard Medical School.

“Much of what we do in medicine is making educated guesses as to what’s best for the patient at any given time,” says Cash, who was not part of the research team. “This approach introduces a methodology where doctors and nurses don’t need to guess, but can rely on a computer to figure out — in much more detail and in a time-efficient fashion — how much drug to give.”

Monitoring anesthesia

Brown believes that this approach could easily be extended to control other brain states, including general anesthesia, because each level of brain activity has its own distinctive EEG signature.

“If you can quantitatively analyze each state’s signature in real time and you have some notion of how the drug moves through the brain to generate those states, then you can build a controller,” he says.

There are currently no devices approved by the U.S. Food and Drug Administration (FDA) to control general anesthesia or induced coma. However, the FDA has recently approved a device that controls sedation without using EEG readings.

The MIT and MGH researchers are now preparing applications to the FDA to test the controller in humans.

Filed under brain injury coma brain activity brain-machine interface anesthesia neuroscience science

Brain Connectivity Can Predict Epilepsy Surgery Outcomes

A discovery from Case Western Reserve and Cleveland Clinic researchers could give epilepsy patients invaluable advance guidance about their chances of improving their symptoms through surgery.

Assistant Professor of Neurosciences Roberto Fernández Galán, PhD, and his collaborators have identified a new, far more accurate way to determine precisely which portions of the brain suffer from the disease. This approach can give patients and physicians better information about whether temporal lobe surgery will provide the results they seek.

“Our analysis of neuronal activity in the temporal lobe allows us to determine whether it is diseased, and therefore, whether removing it with surgery will be beneficial for the patient,” said Galán, the paper’s senior author. “In terms of accuracy and efficiency, our analysis method is a significant improvement relative to current approaches.”

The findings appear in research published October 30 in the open access journal PLOS ONE.

About one-third of patients with temporal lobe epilepsy do not respond to medical treatment and opt for lobectomies to alleviate their symptoms. Yet the surgery’s success rate is only 60 to 70 percent because of the difficulty of identifying the diseased brain tissue prior to the procedure.

Galán and investigators from Cleveland Clinic determined that using intracranial electroencephalography (iEEG) to measure patients’ functional neural connectivity – that is, the communication from one brain region to another – identified the epileptic lobe with 87 percent accuracy. An iEEG records electrical activity with electrodes implanted in the brain. Key indicators of a diseased lobe are weak and similar connections.
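A common generic way to estimate functional connectivity from multichannel recordings is the matrix of pairwise correlations between channels; “weak and similar connections” then show up as a low mean and low spread of those correlations. The sketch below illustrates that idea with simulated data and is not the study’s actual connectivity measure:

```python
import numpy as np

def connectivity_summary(recording):
    """recording: channels x samples array of traces.
    Returns (mean, std) of the absolute off-diagonal pairwise
    correlations. Generic illustration of functional connectivity,
    not the measure used in the study."""
    corr = np.corrcoef(recording)
    off_diag = np.abs(corr[~np.eye(corr.shape[0], dtype=bool)])
    return off_diag.mean(), off_diag.std()

rng = np.random.default_rng(42)
# "Weak, similar" coupling: channels are nearly independent noise.
weak = rng.standard_normal((8, 1000))
# Stronger, varied coupling: channels share a common drive with
# different mixing weights.
shared = rng.standard_normal((1, 1000))
mixing = rng.uniform(0.2, 1.5, size=(8, 1))
strong = mixing * shared + 0.5 * rng.standard_normal((8, 1000))

print(connectivity_summary(weak))    # low mean coupling
print(connectivity_summary(strong))  # higher mean coupling
```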

In the retrospective study, Galán and Arun Antony, MD, formerly a senior clinical fellow in the Epilepsy Center at Cleveland Clinic and now an assistant professor of neurology at the University of Pittsburgh, examined data from 23 patients with temporal lobe epilepsy who had all or part of their temporal lobes removed after iEEG evaluations performed at Cleveland Clinic. The researchers examined the results of patients’ preoperative iEEG to determine the degree of functional connectivity that was associated with successful surgical outcomes.

“The concept of functional connectivity has been extensively studied by basic science researchers, but has not yet found its way into the realm of clinical epilepsy treatment,” said Antony, the paper’s first author. “Our discovery is another step towards the use of measures of functional connectivity in making clinical decisions in the treatment of epilepsy.”

As a standard preoperative test for lobectomy surgery, physicians analyze iEEG traces looking for simultaneous discharges of neurons that appear as spikes in the recordings, which indicate epileptic activity. This PLOS ONE discovery evaluates the data differently by examining normal brain activity in the absence of spikes and inferring connectivity.

(Source: newswise.com)

Filed under epilepsy brain activity lobectomy intracranial electroencephalography neuroscience science

Baby brains are tuned to the specific actions of others

Imitation may be the sincerest form of flattery for adults, but for babies it’s their foremost tool for learning. As renowned people-watchers, babies often observe others demonstrate how to do things and then copy those body movements. It’s how little ones know, usually without explicit instructions, to hold a toy phone to the ear or guide a spoon to the mouth.

Now researchers from the University of Washington and Temple University have found the first evidence revealing a key aspect of the brain processing that occurs in babies to allow this learning by observation.

The findings, published online Oct. 30 by PLOS ONE, are the first to show that babies’ brains display specific activation patterns when an adult performs a task with different parts of her body. When 14-month-old babies simply watched an adult use her hand to touch a toy, the hand area of the baby’s brain lit up. When another group of infants watched an adult touch the toy using only her foot, the foot area of the baby’s brain showed more activity.

"Babies are exquisitely careful people-watchers, and they’re primed to learn from others," said Andrew Meltzoff, co-author and co-director of the UW Institute for Learning & Brain Sciences. "And now we see that when babies watch someone else, it activates their own brains. This study is a first step in understanding the neuroscience of how babies learn through imitation."

The study took advantage of how the brain is organized. The sensory and motor area of the cortex, the outer portion of the brain known for its creased appearance, is arranged by body part with each area of the body represented in identifiable neural real estate. Prick your finger, stick out your tongue, or kick a ball and distinct areas of the brain light up according to a somatotopic map.

Other studies show that adults show this somatotopic brain activation while watching someone else use different body parts, suggesting that adults understand the actions of others in relation to their own bodies. The researchers wondered whether the same would be true in babies.

The 70 infants in the study wore electroencephalogram, or EEG, caps with embedded sensors that detected brain activity in the regions of the cortex that respond to movement or touch of the feet and hands. Sitting on a parent’s lap, each baby watched as an experimenter touched a toy placed on a low table between the baby and the experimenter.

The toy had a clear plastic dome and was mounted on a sturdy base. When the experimenter pressed the dome with her hand or foot, music played and confetti in the dome spun. The experimenter repeated the action – taking breaks after every four presses – until the baby lost interest.

"Our findings show that when babies see others produce actions with a particular body part, their brains are activated in a corresponding way," said Joni Saby, lead author and a psychology graduate student at Temple University in Philadelphia. "This mapping may facilitate imitation and could play a role in the baby’s ability to then produce the same actions themselves."

One of the basics for babies to learn is how to copy what they see adults do. In other words, they must first know that it is indeed their hand and not their foot, mouth or other body part that is needed.

The new study shows that babies’ brains are organized in a somatotopic way that helps crack the interpersonal code. The connection between doing and seeing actions maps hand to hand, foot to foot, all before they can name those body parts through language.

"The reason this is exciting is that it gives insight into a crucial aspect of imitation," said co-author Peter Marshall, an associate psychology professor at Temple University. "To imitate the action of another person, babies first need to register what body part the other person used. Our findings suggest that babies do this in a particular way by mapping the actions of the other person onto their own body."
Meltzoff added, “The neural system of babies directly connects them to other people, which jump-starts imitation and social-emotional connectedness and bonding. Babies look at you and see themselves.”

Filed under motor cortex learning brain mapping brain activity infants psychology neuroscience science

Study with totally blind people shows how light helps activate the brain

Light enhances brain activity during a cognitive task even in some people who are totally blind, according to a study conducted by researchers at the University of Montreal and Boston’s Brigham and Women’s Hospital. The findings contribute to scientists’ understanding of everyone’s brains, as they also revealed how quickly light impacts cognition.

“We were stunned to discover that the brain still responds significantly to light in these three rare patients, despite their having absolutely no conscious vision at all,” said senior co-author Steven Lockley. “Light doesn’t just allow us to see; it tells the brain whether it’s night or day, which in turn ensures that our physiology, metabolism and behavior are synchronized with environmental time.”

“For diurnal species like ours, light stimulates day-like brain activity, improving alertness and mood, and enhancing performance on many cognitive tasks,” explained senior co-author Julie Carrier.

The results indicate that these patients’ brains can still “see”, or detect, light via a novel photoreceptor in the ganglion cell layer of the retina, different from the rods and cones we use to see.

Scientists believe, however, that these specialized photoreceptors in the retina also contribute to visual function in the brain even when the cells responsible for normal image formation have lost their ability to receive or process light. A previous study in a single blind patient suggested that this was possible, but the research team wanted to confirm the result in different patients. To test this hypothesis, the three participants were asked to say whether a blue light was on or off, even though they could not see the light. “We found that the participants did indeed have a non-conscious awareness of the light – they were able to determine correctly whether the light was on at a rate greater than chance, without being able to see it,” explained first author Gilles Vandewalle.
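“Greater than chance” in a forced-choice task like this is typically established with a one-sided binomial test: how likely is a score at least that high if every answer were a pure guess? A minimal sketch of that calculation (the trial counts below are invented for illustration; the paper reports its own numbers):

```python
from math import comb

def binomial_p_above_chance(correct, trials, p_chance=0.5):
    """One-sided exact binomial test: the probability of scoring at least
    `correct` out of `trials` if every response were a 50/50 guess."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(correct, trials + 1))

# Hypothetical illustration: 21 correct on/off judgments out of 30 trials.
p = binomial_p_above_chance(21, 30)
print(f"p = {p:.4f}")  # a small p suggests above-chance light detection
```

A small p-value here would justify the claim that detection exceeded chance, even with no conscious visual experience.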

The next steps involved looking closely at what happened to brain activation when light was flashed at their eyes at the same time as their attentiveness to a sound was monitored. “The objective of this second test was to determine whether the light affected the brain patterns associated with attentiveness – and it did,” said first author Olivier Collignon.

Finally, the participants underwent a functional MRI brain scan as they performed a simple sound-matching task while lights were flashed in their eyes. “The fMRI further showed that during an auditory working memory task, less than a minute of blue light activated brain regions important to perform the task. These regions are involved in alertness and cognition regulation, as well as being key areas of the default mode network,” Vandewalle explained. Researchers believe that the default network is linked to keeping a minimal amount of resources available for monitoring the environment when we are not actively doing something. “If our understanding of the default network is correct, our results raise the intriguing possibility that light is key to maintaining sustained attention,” agreed Lockley and Carrier. “This theory may explain why the brain’s performance is improved when light is present during tasks.”

(Source: nouvelles.umontreal.ca)

Filed under brain activity blindness photoreceptors neuroimaging neuroscience science

291 notes

Method of recording brain activity could lead to mind-reading devices

A brain region activated when people are asked to perform mathematical calculations in an experimental setting is similarly activated when they use numbers — or even imprecise quantitative terms, such as “more than”— in everyday conversation, according to a study by Stanford University School of Medicine scientists.

Using a novel method, the researchers collected the first solid evidence that the pattern of brain activity seen in someone performing a mathematical exercise under experimentally controlled conditions is very similar to that observed when the person engages in quantitative thought in the course of daily life.

“We’re now able to eavesdrop on the brain in real life,” said Josef Parvizi, MD, PhD, associate professor of neurology and neurological sciences and director of Stanford’s Human Intracranial Cognitive Electrophysiology Program. Parvizi is the senior author of the study, published Oct. 15 in Nature Communications. The study’s lead authors are postdoctoral scholar Mohammad Dastjerdi, MD, PhD, and graduate student Muge Ozker.

The finding could lead to “mind-reading” applications that, for example, would allow a patient who is rendered mute by a stroke to communicate via passive thinking. Conceivably, it could also lead to more dystopian outcomes: chip implants that spy on or even control people’s thoughts.

“This is exciting, and a little scary,” said Henry Greely, JD, the Deane F. and Kate Edelman Johnson Professor of Law and steering committee chair of the Stanford Center for Biomedical Ethics, who played no role in the study but is familiar with its contents and described himself as “very impressed” by the findings. “It demonstrates, first, that we can see when someone’s dealing with numbers and, second, that we may conceivably someday be able to manipulate the brain to affect how someone deals with numbers.”

The researchers monitored electrical activity in a region of the brain called the intraparietal sulcus, known to be important in attention and eye and hand motion. Previous studies have hinted that some nerve-cell clusters in this area are also involved in numerosity, the mathematical equivalent of literacy.

However, the techniques that previous studies have used, such as functional magnetic resonance imaging, are limited in their ability to study brain activity in real-life settings and to pinpoint the precise timing of nerve cells’ firing patterns. These studies have focused on testing just one specific function in one specific brain region, and have tried to eliminate or otherwise account for every possible confounding factor. In addition, the experimental subjects must lie more or less motionless inside a dark, tubular chamber whose quiet is punctuated by loud, mechanical banging noises while images flash on a computer screen.

“This is not real life,” said Parvizi. “You’re not in your room, having a cup of tea and experiencing life’s events spontaneously.” A profoundly important question, he said, is: “How does a population of nerve cells that has been shown experimentally to be important in a particular function work in real life?”

His team’s method, called intracranial recording, provided exquisite anatomical and temporal precision and allowed the scientists to monitor brain activity when people were immersed in real-life situations. Parvizi and his associates tapped into the brains of three volunteers who were being evaluated for possible surgical treatment of their recurring, drug-resistant epileptic seizures.

The procedure involves temporarily removing a portion of a patient’s skull and positioning packets of electrodes against the exposed brain surface. For up to a week, patients remain hooked up to the monitoring apparatus while the electrodes pick up electrical activity within the brain. This monitoring continues uninterrupted for patients’ entire hospital stay, capturing their inevitable repeated seizures and enabling neurologists to determine the exact spot in each patient’s brain where the seizures are originating.

During this whole time, patients remain tethered to the monitoring apparatus and mostly confined to their beds. But otherwise, except for the typical intrusions of a hospital setting, they are comfortable, free of pain and free to eat, drink, think, talk to friends and family in person or on the phone, or watch videos.

The electrodes implanted in patients’ heads are like wiretaps, each eavesdropping on a population of several hundred thousand nerve cells and reporting back to a computer.

In the study, participants’ actions were also monitored by video cameras throughout their stay. This allowed the researchers later to correlate patients’ voluntary activities in a real-life setting with nerve-cell behavior in the monitored brain region.

As part of the study, volunteers answered true/false questions that popped up on a laptop screen, one after another. Some questions required calculation — for instance, is it true or false that 2+4=5? — while others demanded what scientists call episodic memory — true or false: I had coffee at breakfast this morning. In other instances, patients were simply asked to stare at the crosshairs at the center of an otherwise blank screen to capture the brain’s so-called “resting state.”

Consistent with other studies, Parvizi’s team found that electrical activity in a particular group of nerve cells in the intraparietal sulcus spiked when, and only when, volunteers were performing calculations.

Afterward, Parvizi and his colleagues analyzed each volunteer’s daily electrode record, identified many spikes in intraparietal-sulcus activity that occurred outside experimental settings, and turned to the recorded video footage to see exactly what the volunteer had been doing when such spikes occurred.

They found that when a patient mentioned a number — or even a quantitative reference, such as “some more,” “many” or “bigger than the other one” — there was a spike of electrical activity in the same nerve-cell population of the intraparietal sulcus that was activated when the patient was doing calculations under experimental conditions.

That was an unexpected finding. “We found that this region is activated not only when reading numbers or thinking about them, but also when patients were referring more obliquely to quantities,” said Parvizi.

“These nerve cells are not firing chaotically,” he said. “They’re very specialized, active only when the subject starts thinking about numbers. When the subject is reminiscing, laughing or talking, they’re not activated.” Thus, it was possible to know, simply by consulting the electronic record of participants’ brain activity, whether they were engaged in quantitative thought during nonexperimental conditions.
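The analysis described above — finding spikes in the electrode record and checking them against video-annotated moments of quantitative speech — can be sketched in a few lines. This is a hypothetical simplification, not the Stanford team’s actual pipeline; the threshold and window values are invented:

```python
def zscore(trace):
    """Standardize a 1-D activity trace to zero mean, unit variance."""
    mean = sum(trace) / len(trace)
    var = sum((x - mean) ** 2 for x in trace) / len(trace)
    return [(x - mean) / var ** 0.5 for x in trace]

def spike_times(trace, threshold=3.0):
    """Sample indices where the z-scored trace crosses the threshold upward."""
    z = zscore(trace)
    return [i for i in range(1, len(z)) if z[i] >= threshold > z[i - 1]]

def fraction_near_events(spikes, event_times, window=2):
    """Fraction of detected spikes falling within +/-window samples of a
    video-annotated event (e.g., the patient mentioning a quantity)."""
    if not spikes:
        return 0.0
    hits = sum(any(abs(s - e) <= window for e in event_times) for s in spikes)
    return hits / len(spikes)
```

If most activity spikes cluster around the annotated utterances — and few occur elsewhere — that is the kind of evidence behind the claim that the region fires selectively during quantitative thought.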

Any fears of impending mind control are, at a minimum, premature, said Greely. “Practically speaking, it’s not the simplest thing in the world to go around implanting electrodes in people’s brains. It will not be done tomorrow, or easily, or surreptitiously.”

Parvizi agreed. “We’re still in early days with this,” he said. “If this is a baseball game, we’re not even in the first inning. We just got a ticket to enter the stadium.”

(Source: med.stanford.edu)

Filed under brain activity numerical cognition mind reading intraparietal sulcus parietal cortex neuroscience science

233 notes

Brain stimulation affects compliance with social norms

Neuroeconomists at the University of Zurich have identified a specific brain region that controls compliance with social norms. They discovered that norm compliance is independent of knowledge about the norm and can be increased by means of brain stimulation.

How does the human brain control compliance with social norms? The biological mechanisms that underlie norm compliance are still poorly understood. In a new study, Christian Ruff, Giuseppe Ugazio, and Ernst Fehr from the University of Zurich show that the lateral prefrontal cortex plays a central role in norm compliance.

Prefrontal cortex controls norm behavior

In the study, 63 participants received money and were asked to decide how much of it they wanted to share with an anonymous partner. A prevalent fairness norm in Western cultures dictates that the money should be split evenly between the two players; this, however, conflicts with the participants’ self-interest in keeping as much money as possible for themselves. In another experiment, the participants faced the same decision but knew in advance that they could be punished by the partner for an unfair proposal.

By means of a technique called “transcranial direct current stimulation,” which sends weak and painless electric currents through the skull, the excitability of specific brain regions can be modulated. During this experiment, the scientists used this technique to increase or decrease neural activity at the front of the brain, in the right lateral prefrontal cortex. Christian Ruff, Professor of Neuroeconomics and Decision Neuroscience at the University of Zurich, said: “We discovered that the decision to follow the fairness norm, whether voluntarily or under threat of sanctions, can be directly influenced by neural stimulation in the prefrontal cortex.”

Brain stimulation affects normative behavior

When neural activity in this part of the brain was increased via stimulation, the participants followed the fairness norm more strongly when sanctions were threatened, but their voluntary norm compliance in the absence of possible punishments decreased. Conversely, when the scientists decreased neural activity, participants followed the fairness norm more strongly on a voluntary basis, but complied less with the norm when sanctions were threatened. Moreover, neural stimulation influenced the participants’ behavior, but it did not affect their perception of the fairness norm. It also did not alter their expectations about whether and how much they would be punished for violating the norm.

"We found that the brain mechanism responsible for compliance with social norms is separate from the processes that represent one’s knowledge and beliefs about the social norm," says Ernst Fehr, Chairman of the Department of Economics at the University of Zurich. "This could have important implications for the legal system as the ability to distinguish between right and wrong may not be sufficient for the ability to comply with social norms." Christian Ruff adds: "Our findings show that a socially and evolutionarily important aspect of human behavior depends on a specific neural mechanism that can be both up- and down-regulated with brain stimulation."

Literature:

Christian C. Ruff, Giuseppe Ugazio and Ernst Fehr. Changing Social Norm Compliance With Noninvasive Brain Stimulation. Science. October 3, 2013.

(Image: iStockphoto)

Filed under social norms prefrontal cortex brain activity human behavior brain stimulation neuroscience science

152 notes

Get the picture? New high-res images show brain activity like never before

In the middle of the human brain there is a tiny structure shaped like an elongated donut that plays a crucial role in managing how the body functions. Measuring just 10 millimeters in length and six millimeters in diameter, the hollow structure is involved in a complex array of behavioral, cognitive, and affective phenomena, such as the fight-or-flight response, pain regulation, and even sexual activity, according to Northeastern senior research scientist Ajay Satpute.

With a name longer than the structure itself, the “midbrain periaqueductal gray region,” or PAG, is extraordinarily difficult to investigate in humans because of its size and intricate structure, he said.

In research published online this week in the journal Proceedings of the National Academy of Sciences, Satpute and his colleagues at Northeastern’s Interdisciplinary Affective Science Laboratory explain how they overcame these challenges by using state-of-the-art imaging to capture this complex neural activity. The research could ultimately help scientists explore the grounds of human emotion like never before.

“The PAG’s functional properties occur at such small spatial scales that we need to capture its activity at very high resolution in order to understand it,” he explained.

Until recently, neuroimaging studies have been carried out on functional magnetic resonance imaging, or fMRI, instruments containing magnets of up to three teslas, a measure of magnetic field strength. These instruments provide critical data for understanding how the brain’s different areas respond to different stimuli, but when those areas become sufficiently small and complicated, their resolution falls short.

In the case of the tiny PAG, this problem is paramount because the PAG wraps around a hollow core, or “aqueduct,” containing cerebrospinal fluid, Satpute said. Traditional fMRI instruments cannot distinguish neural activity occurring in the PAG from that occurring in the cerebrospinal fluid. Even more difficult is identifying where within the PAG itself specific responses originate.

In collaboration with researchers at Massachusetts General Hospital in Boston, Satpute and his colleagues used a high-tech fMRI instrument that contains a seven-tesla magnet. The pull of the magnet is so strong (albeit harmless) that one can feel it when simply walking by. Coupled with painstaking manual data analyses, Satpute was able to resolve activity in sub-regions of the PAG with more precision than ever before.

With their method in hand, the research team showed 11 human research subjects images of burn victims, gory injuries, and other content related to threat, harm, and loss while keeping tabs on the PAG’s activity. The researchers also showed the subjects neutral images and then compared results between the two scenarios.

The proof-of-concept study showed emotion-related activity concentrated in particular areas of the PAG. While similar results have been demonstrated in animal models, nothing like it had previously been shown in human brains.

Using this methodology, the researchers said, they would not only gain a better understanding of the PAG but also be able to investigate a range of brain-related research questions beyond this particular structure.

Seven-tesla brain imaging provides an unprecedented view of regions like the PAG while they respond to stimuli, said Lisa Feldman Barrett, director of the Interdisciplinary Affective Science Laboratory. “Studies like this are a critical step forward in bridging human and nonhuman animal studies of emotion, because they offer a level of resolution in human brains that was previously possible only in studies of non-human animals,” she said.

Filed under brain activity brain mapping neuroimaging cerebrospinal fluid neuroscience science

195 notes

Covert operations: Your brain digitally remastered for clarity of thought

Neurofeedback can enhance the signal-to-noise ratio in thought, enabling a sharper focus on tasks—and a better understanding of brain-computer interfaces.

The sweep of a needle across the grooves of a worn vinyl record carries distinct sounds: hisses, scratches, even the echo of skips. For many years, though, those yearning to hear Frank Sinatra sing “Fly Me to the Moon” have been able to listen to his light baritone with technical clarity, courtesy of the increased signal-to-noise ratio of digital remasterings.

Now, with advances in neurofeedback techniques, the signal-to-noise ratio of the brain activity underlying our thoughts can be remastered as well, according to the recent discovery of a research team led by Stephen LaConte, an assistant professor at the Virginia Tech Carilion Research Institute.

LaConte and his colleagues specialize in real-time functional magnetic resonance imaging, a relatively new technology that can convert thought into action by transferring noninvasive measurements of human brain activity into control signals that drive physical devices and computer displays in real time. Crucially, for the ultimate goal of treating disorders of the brain, this rudimentary form of mind reading enables neurofeedback.

“Our brains control overt actions that allow us to interact directly with our environments, whether by swinging an arm or singing an aria,” LaConte said. “Covert mental activities, on the other hand—such as visual imagery, inner language, or recollections of the past—can’t be observed by others and don’t necessarily translate into action in the outside world.”

But, LaConte added, brain–computer interfaces now enable us to eavesdrop on previously undetectable mental activities.

In the recent study, the scientists used whole-brain, classifier-based real-time functional magnetic resonance imaging to understand the neural underpinnings of brain–computer interface control. The research team asked two dozen subjects to control a visual interface by silently counting numbers at fast and slow rates. For half the tasks, the subjects were told to use their thoughts to control the movement of the needle on the device they were observing; for the other tasks, they simply watched the needle.

The scientists discovered a feedback effect that LaConte said he had long suspected existed but had found elusive: the subjects who were in control of the needle achieved a better whole-brain signal-to-noise ratio than those who simply watched the needle move. “When the subjects were performing the counting task without feedback, they did a pretty good job,” LaConte said. “But when they were doing it with feedback, we saw increases in the signal-to-noise ratio of the entire brain. This improved clarity could mean that the signal was sharpening, the noise was dropping, or both. I suspect the brain was becoming less noisy, allowing the subject to concentrate on the task at hand.”

The scientists also found that the act of controlling the brain–computer interface led to increased classification accuracy, which corresponded with improvements in the whole-brain signal-to-noise ratio.
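As a rough illustration of what a “whole-brain signal-to-noise ratio” could mean in this setting — not the study’s actual analysis pipeline — one can fit the task regressor to each voxel’s time series, compare the task-locked power with the residual power, and average the ratio over voxels. A minimal sketch under those assumptions (it presumes the data contain some noise, so the residual power is nonzero):

```python
def voxel_snr(timeseries, task_regressor):
    """Ratio of task-locked signal power to residual (noise) power for one
    voxel, using a simple least-squares fit of the task regressor."""
    n = len(timeseries)
    mx = sum(task_regressor) / n
    my = sum(timeseries) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(task_regressor, timeseries))
    varx = sum((x - mx) ** 2 for x in task_regressor)
    beta = cov / varx                       # fitted task amplitude
    fitted = [beta * (x - mx) for x in task_regressor]
    resid = [(y - my) - f for y, f in zip(timeseries, fitted)]
    signal_power = sum(f * f for f in fitted)
    noise_power = sum(r * r for r in resid)
    return signal_power / noise_power

def whole_brain_snr(voxels, task_regressor):
    """Average per-voxel SNR across all voxel time series."""
    return sum(voxel_snr(v, task_regressor) for v in voxels) / len(voxels)
```

Under this reading, the feedback effect would show up as a higher `whole_brain_snr` during feedback runs than during passive-viewing runs, consistent with the signal sharpening, the noise dropping, or both.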

This enhanced signal-to-noise ratio, LaConte added, carries implications for brain rehabilitation. “When people undergoing real-time brain scans get feedback on their own brain activity patterns, they can devise ways to exert greater control of their mental processes,” LaConte said. “This, in turn, gives them the opportunity to aid in their own healing. Ultimately, we want to use this effect to find better ways to treat brain injuries and psychiatric and neurological disorders.”

“Dr. LaConte’s discovery represents a milestone in the development of noninvasive brain imaging approaches with potential for neurorehabilitation,” said Michael Friedlander, executive director of the Virginia Tech Carilion Research Institute and a neuroscientist who specializes in brain plasticity. “This research carries implications for people whose brains have been damaged, such as through traumatic injury or stroke, in ways that affect the motor system—how they walk, move an arm, or speak, for example. Dr. LaConte’s innovations with real-time functional brain imaging are helping to set the stage for the future, for capturing covert brain activity and creating better computer interfaces that can help people retrain their own brains.”

Filed under neuroimaging brain mapping brain activity brain-computer interface technology neuroscience science
