Neuroscience

Articles and news from the latest research reports.

Posts tagged brain activity

Gets stroke patients back on their feet

A robot is now being built to help stroke patients with training, motivation and walking.

In Europe, strokes are the most common cause of physical disability among the elderly. A stroke often results in paralysis of one side of the body, and many patients suffer greatly reduced physical mobility and are often unable to walk on their own. These are the hard facts the EU project CORBYS has taken seriously. Researchers in six countries are currently developing a robotic system designed to help stroke patients re-train their bodies. The concept is a system consisting of a powered orthosis that helps the patient move his or her legs, and a mobile platform that provides mobility.

The CORBYS researchers are also working on the cognitive aspects. The aim is to enable the robot to interpret data from the patient and adapt the training programme to his or her capabilities and intentions. This will bring rehabilitation robots to the next level.

Back to walking normally
It is vital to get stroke patients up on their feet as soon as possible. They must train frequently and re-learn how to walk so that they can function as well as possible on their own.
Why a robot? “Absolutely, because it is difficult to meet these requirements using today’s work-intensive manual method, in which two therapists assist the patient by lifting one leg after the other”, says ICT researcher Anders Liverud at SINTEF, one of the CORBYS project partners.

Robot-patient learning
CORBYS uses physiological data such as heart rate, temperature and muscle activity measurements to provide feedback to the therapist and help control the robot. Do the patient’s legs always go where the patient wants? Is the patient getting tired or stressed?

“The walking robot has several settings, and the therapist selects the correct mode based on how far the patient has come in his or her rehabilitation”, says Liverud. “The first step is to attach sensors to the patient’s body and let them walk on a treadmill. A therapist manually corrects the walking pattern and, with the help of the sensors, creates a model of the patient’s walking pattern”, he says.

In the next mode, the system adjusts the walking pattern to the defined model. New adjustments are made continuously to further optimise the walking pattern.

“The patient wears an EEG cap which measures brain activity”, says Liverud. “By using these signals combined with input from other physiological and system sensors, the robotic system registers whether the patient wants to stop, change speed or turn, and can adapt immediately”, he says. “The robot continues to correct any walking pattern errors. However, since it also allows the patient the freedom to decide where and how he or she walks, the patient experiences control and keeps motivation to continue with the training”, says Liverud.
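The control logic Liverud describes (read the sensors, infer the patient's intention, adapt the robot) can be sketched as a simple decision rule. This is an illustrative toy, not the actual CORBYS software; all field names and thresholds here are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    heart_rate: float       # beats per minute
    muscle_activity: float  # normalised EMG level, 0..1
    eeg_stop_score: float   # hypothetical classifier output, 0..1: "wants to stop"

def select_action(frame: SensorFrame) -> str:
    """Map one frame of physiological data to a robot action.

    Thresholds are purely illustrative; a real system would rely on
    trained classifiers and therapist-set limits, not fixed constants.
    """
    if frame.eeg_stop_score > 0.8:
        return "stop"            # EEG suggests the patient intends to stop
    if frame.heart_rate > 140 or frame.muscle_activity > 0.9:
        return "slow_down"       # signs of fatigue or stress
    return "continue"

print(select_action(SensorFrame(90, 0.4, 0.1)))   # continue
print(select_action(SensorFrame(150, 0.5, 0.1)))  # slow_down
```

A real controller would of course run this in a loop at sensor rate and blend decisions over time rather than reacting to single frames.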

Working with Europe
The European researchers have now completed specification of the system and its components, and construction of the robot is underway.
Construction involves a large team. The University of Bremen is heading the project and developing the architecture that integrates all the system modules. German wheelchair, orthosis and robotics experts are constructing the mechanical components, while two UK universities are working on the cognitive aspects. Spanish specialists are addressing brain activity measurements, and the University of Brussels is looking into robot control. SINTEF is working on the sensors and the final functional integration of the system. In a year’s time, construction will be complete and the robot will be tested on stroke patients at rehabilitation institutes in Slovenia and Germany. The CORBYS project has a total budget of EUR 8.7 million.

Filed under robots robotics stroke rehabilitation muscle activity brain activity neuroscience science

Sleep Discovery Could Lead to Therapies That Improve Memory

A team of sleep researchers led by UC Riverside psychologist Sara C. Mednick has confirmed the mechanism that enables the brain to consolidate memory and found that a commonly prescribed sleep aid enhances the process. Those discoveries could lead to new sleep therapies that will improve memory for aging adults and those with dementia, Alzheimer’s and schizophrenia.

The groundbreaking research appears in a paper, “The Critical Role of Sleep Spindles in Hippocampal-Dependent Memory: A Pharmacology Study,” published in the Journal of Neuroscience.

Earlier research found a correlation between sleep spindles — bursts of brain activity that last for a second or less during a specific stage of sleep — and consolidation of memories that depend on the hippocampus. The hippocampus, part of the cerebral cortex, is important in the consolidation of information from short-term to long-term memory, and spatial navigation. The hippocampus is one of the first regions of the brain damaged by Alzheimer’s disease.
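The spindle-detection idea underlying such studies can be sketched with standard signal processing: band-pass the EEG in the sigma band (roughly 11-16 Hz) and flag stretches where the envelope rises well above its baseline. A minimal illustration on synthetic data follows; the sampling rate, band edges and threshold are assumptions for the example, not Mednick's actual pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 200  # Hz, assumed sampling rate

def spindle_mask(eeg, fs=FS, band=(11.0, 16.0), thresh=4.0):
    """Flag samples whose sigma-band envelope exceeds thresh x the median."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    sig_band = filtfilt(b, a, eeg)          # sigma-band component of the EEG
    win = int(0.1 * fs)                     # crude envelope: RMS in a 0.1 s window
    env = np.sqrt(np.convolve(sig_band ** 2, np.ones(win) / win, mode="same"))
    return env > thresh * np.median(env)

# Synthetic trace: background noise with a 0.8 s, 13 Hz burst starting at t = 5 s
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / FS)
eeg = 0.1 * rng.standard_normal(t.size)
burst = (t > 5.0) & (t < 5.8)
eeg[burst] += np.sin(2 * np.pi * 13 * t[burst])

mask = spindle_mask(eeg)
print(mask[(t > 5.2) & (t < 5.6)].all())  # the injected burst is flagged
```

Published spindle detectors add duration criteria and per-subject calibration, but the core step is this band-pass-and-threshold operation.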

Mednick and her research team demonstrated, for the first time, the critical role that sleep spindles play in consolidating memory in the hippocampus, and they showed that pharmaceuticals could significantly improve that process, far more than sleep alone.

In addition to Mednick, the research team includes: Elizabeth A. McDevitt, UC San Diego; James K. Walsh, VA San Diego Healthcare System, La Jolla, Calif.; Erin Wamsley, St. Luke’s Hospital, St. Louis, Mo.; Martin Paulus, Stanford University; Jennifer C. Kanady, Harvard Medical School; and Sean P.A. Drummond, UC Berkeley.

“We found that a very common sleep drug can be used to increase verbal memory,” said Mednick, the lead author of the paper that outlines results of two studies conducted over five years with a $651,999 research grant from the National Institutes of Health. “This is the first study to show you can manipulate sleep to improve memory. It suggests sleep drugs could be a powerful tool to tailor sleep to particular memory disorders.”

Filed under memory alzheimer's disease brain activity memory consolidation sleep neuroscience science

The Brain Activity Map

Researchers explain the goals and structure of a new brain-mapping project

A proposed effort to map brain activity on a large scale, expected to be announced by the White House later this month, could help neuroscientists understand the origins of cognition, perception, and other phenomena. These brain activities haven’t been well understood to date, in part because they arise from the interaction of large sets of neurons whose coördinated efforts scientists cannot currently track.

“There are all kinds of remarkable tools to study the microscopic world of individual cells,” says John Donoghue, a neuroscientist at Brown and a participant in the project. “And on the macroscopic end, we have tools like MRI and EEG that tell us about the function of the brain and its structure, but at a low resolution. There is a gap in the middle. We need to record many, many neurons exactly as they operate with temporal precision and in large areas,” he says.

An article published Thursday in Science online expands the project’s already ambitious goals beyond just recording the activity of all individual neurons in a brain circuit simultaneously. Researchers should also find ways to manipulate the neurons within those circuits and understand circuit function through new methods of data analysis and modeling, the authors write.

Understanding how neurons communicate with one another across large regions of the brain will be critical to understanding how the brain works, according to participants in the project. Other efforts to map out the physical connections in the brain are already under way (see “TR10: Connectomics” and “Mapping the Brain on a Massive Scale”), but these projects look at static brains or can only get a rough view of how regions of the brain communicate. The new project will probably start applying its novel and yet unknown technologies on simpler brains, such as those of flies, and will probably take decades to achieve its goals.

Numerous leaders from the fields of neuroscience, nanotechnology, and synthetic biology are expected to collaborate on the effort. “We need something large scale to try to build tools for the future,” says Rafael Yuste, a neurobiologist at Columbia University and a member of the project. “We view ourselves as tool builders. I think we could provide to the scientific community the methods that could be used for the next stage in neuroscience.”

In addition to deepening fundamental understanding of the brain, the project may also lead to new treatments for psychiatric and neurological disorders. “If we truly understand how things like thoughts, cognition, and other features of the brain emerge, then we should have a better understanding of mood disorders, Parkinson’s, epilepsy and other conditions that are thought to arise from brain-wide circuitry problems,” says Donoghue.

Details about which technology ideas will be given the green light and how much money will support their development are expected to be revealed in the White House announcement that is still to come. The project is likely to be supported by the National Institutes of Health, the National Science Foundation, the Defense Advanced Research Projects Agency, the Office of Science and Technology Policy, and private foundations, participants say. It’s not yet clear how much money will be needed or which technologies will be given priority.

Whichever particular technologies emerge, nanotechnology is likely to be involved, in part because of the need for smaller and faster sensors to record neuronal activity across the brain. Existing sensors can record the electrical activity of neurons, but these chips can typically monitor fewer than 100 neurons at a time and can’t record activity from neighboring neurons, which would be necessary to understand how neurons interact with one another. Paul Weiss, director of the California NanoSystems Institute at the University of California, Los Angeles, a participant in the project, says that nanofabrication techniques could address this problem, with smaller chips bearing smaller electrical and even chemical probes. “We’ve had over a decade a fairly substantial investment in science and technology to develop the capability … to control how what we make interacts with the chemical, physical, and biological worlds,” he says.

Novel optical techniques could also aid the mapping project. Currently, many research groups use calcium-sensitive fluorescent dyes to study neuron firing, but Yuste wants to develop an optical technique that uses voltage-sensitive fluorescent dyes for a faster readout. “Neurons communicate using voltage,” he says. “We would like to develop voltage imaging so we will be able to measure neuronal activity directly.”

While many things about the project are uncertain, one thing is clear—there is going to be a lot of data to store, share, and analyze. “We have just begun to scratch the surface of how you deal with data in high-dimensional spaces,” says Terry Sejnowski, a computational neuroscientist at the Salk Institute. “If you are talking about one million neurons, no one can even imagine what that looks like–it is way beyond what we can perceive in three dimensions.”

The Science article also sketches out a rough time line. Within five years, it should be possible to monitor tens of thousands of neurons; in 15 years, one million neurons should be possible. A fly’s brain has about 100,000 neurons, a mouse’s about 75 million, and a human’s about 85 billion. “With one million neurons, scientists will be able to evaluate the function of the entire brain of the zebrafish or several areas from the cerebral cortex of the mouse,” the authors write.

Filed under brain brain activity Brain Activity Map brain-mapping neuroimaging technology neuroscience science

Stanford psychologists uncover brain-imaging inaccuracies

Pictures of brain regions “activating” are by now a familiar accompaniment to any neurological news story. With functional magnetic resonance imaging, or fMRI, you can see specific brain regions light up, standing out against the background like night owls’ apartment windows.

It’s easy to forget that these brain images aren’t real snapshots of brain activity. Instead, each picture is the result of many layers of analysis and interpretation, far removed from raw data.

"It’s just one representation of brain activity," said Matthew Sacchet, a PhD student in the Neurosciences Program at the Stanford School of Medicine. "As you process the data, it can change."

Sacchet works in the lab of Stanford psychology Associate Professor Brian Knutson, who studies reward processing in a small area of the brain known as the nucleus accumbens. Precisely how that structure activates is at the heart of an ongoing debate about reward circuits – a subject that holds relevance for our understanding of everything from addiction to financial risk-taking.

Unfortunately, according to a paper from Knutson and Sacchet, hundreds of research papers on this circuit may be unintentionally biased. When those labs processed their fMRI data, many used a one-size-fits-all strategy that skewed which regions of the brain appeared to be activating.

"I honestly think most people want good data," said Knutson. "I’m excited that we can make this kind of research more rigorous."

The paper appeared in the journal NeuroImage.

Too much smoothing

Functional magnetic resonance imaging measures changes in blood flow in the brain. It’s a powerful tool, but the signal fMRI actually detects – the result of the magnetic differences between oxygenated and deoxygenated blood – is noisy.

Researchers must process the data statistically in order to make it interpretable. One of the most common approaches is known as “spatial smoothing,” which involves averaging the activity of each brain region with that of its neighbors.

But fMRI has only been in use since the mid-1990s. Many of the most common analyses in use today are holdovers from older, lower-resolution types of imaging and seem to have some undesired effects on the finer-grained signals fMRI can provide.

Knutson and Sacchet found that when researchers process fMRI data with a traditional “smoothing kernel” of 8mm, they end up averaging their images over too large an area. Activity in smaller brain structures can then be overlooked, or even shifted to areas that receive more blood flow and where the blood oxygenation level-dependent signal is stronger.
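The mechanism is easy to reproduce numerically: convert the kernel's full width at half maximum to a Gaussian sigma (FWHM ≈ 2.355 sigma) and smooth a toy signal in which a weak activation sits close to a stronger one. The sketch below assumes 2 mm voxels and invented activation values; it illustrates the effect, not the authors' actual analysis:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

VOXEL_MM = 2.0  # assumed voxel size

def fwhm_to_sigma_voxels(fwhm_mm, voxel_mm=VOXEL_MM):
    # FWHM = 2 * sqrt(2 * ln 2) * sigma, roughly 2.355 * sigma
    return fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)) * voxel_mm)

def local_maxima(x):
    return [i for i in range(1, len(x) - 1) if x[i] > x[i - 1] and x[i] > x[i + 1]]

# Toy 1-D "brain": a weak activation 8 mm away from a strong one
signal = np.zeros(50)
signal[20] = 1.0   # small structure (think: anterior nucleus accumbens)
signal[24] = 3.0   # larger, better-perfused neighbour

small = gaussian_filter1d(signal, fwhm_to_sigma_voxels(4.0))
large = gaussian_filter1d(signal, fwhm_to_sigma_voxels(8.0))

print(len(local_maxima(small)))  # 2: both peaks survive the 4 mm kernel
print(len(local_maxima(large)))  # 1: the 8 mm kernel absorbs the weak peak
```

With the 8 mm kernel the weak site no longer shows a peak of its own; all the apparent activation sits at the stronger neighbour, which is exactly the displacement the paper describes.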

"It might seem strange that a systematic bias like that could bias the whole field," Knutson said. "But if half the people use 8mm and half use 4mm, you might end up with very different results, and it could add up."

Reward structure

These statistical pitfalls are particularly glaring when studying the small, structurally complex nucleus accumbens.

Findings from the Knutson Lab, which has been using the smaller, 4mm smoothing kernel for years, suggest that different parts of the nucleus accumbens have different functions. The forward portion seems to distinguish between positive or negative stimuli, reacting specifically to rewards. Meanwhile, the rear section responds more to the intensity of the motivation.

While some other labs have corroborated this finding, others only found activation in the rear half of the structure.

These contradictory findings now appear to have been skewed. Because the back of the nucleus accumbens is larger and surrounded by more blood-infused gray matter than the front, the smoothing step made it appear as if all the nucleus accumbens’ activity originated far to the rear.

A collaborator in Germany has already taken the paper’s advice, Sacchet said. “She had a colleague reanalyze her data and found the same thing we found.”

Knutson emphasized that the research paper doesn’t mean “the methods are bunk.” Simply improving the way scientists process signals can enhance their ability to locate specific brain functions.

"There may be a debate, but you can resolve that debate with data," he said.

Filed under neuroimaging brain brain activity blood flow nucleus accumbens fMRI neuroscience science

107 notes

One region, two functions: Brain cells’ multitasking may be a key to understanding overall brain function
A region of the brain known to play a key role in visual and spatial processing has a parallel function: sorting visual information into categories, according to a new study by researchers at the University of Chicago.
Primates are known to have a remarkable ability to place visual stimuli into familiar and meaningful categories, such as fruit or vegetables. They can also direct their spatial attention to different locations in a scene and make spatially-targeted movements, such as reaching.
The study, published in the March issue of Neuron, shows that these very different types of information can be simultaneously encoded within the posterior parietal cortex. The research brings scientists a step closer to understanding how the brain interprets visual stimuli and solves complex tasks.
“We found that multiple functions can be mapped onto a particular region of the brain and even onto individual brain cells in that region,” said study author David Freedman, PhD, assistant professor of neurobiology at the University of Chicago. “These functions overlap. This particular brain area, even its individual neurons, can independently encode both spatial and cognitive signals.”
Freedman studies the effects of learning on the brain and how information is stored in short-term memory, with a focus on the areas that process visual stimuli. To examine this phenomenon, he has taught monkeys to play a simple video game in which they learn to assign moving visual patterns into categories.
“The task is a bit like a baseball umpire calling balls and strikes,” he said, “since the monkeys have to sort the various motion patterns into two groups, or categories.” 
The monkeys master the task over a few weeks of training. Once they do, the researchers record electrical signals from parietal lobe neurons while the subjects perform the categorization task. By measuring electrical activity patterns of these neurons, the researchers can decode the information conveyed by the neurons’ activity.
“The activity patterns in these parietal neurons carry strong information about the category that each motion pattern gets assigned to during the task,” Freedman said.
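The kind of decoding described here can be illustrated with a toy model. The sketch below is hypothetical, not the Freedman Lab's actual method: the firing rates, trial structure and decision rule are assumptions for illustration. It classifies a simulated trial's category from the spike counts of category-selective neurons.

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_trials = 30, 200

# Each neuron has a different mean firing rate for the two motion categories.
rates = rng.uniform(5, 15, size=(2, n_neurons))   # spikes/s

def trial(category):
    return rng.poisson(rates[category])           # counts in a 1 s window

def decode(counts):
    # Assign the category whose mean rate profile the counts match best.
    return int(np.argmin([np.sum((counts - rates[c]) ** 2) for c in (0, 1)]))

correct = sum(decode(trial(c)) == c
              for c in (0, 1) for _ in range(n_trials))
print(f"category decoding accuracy: {correct / (2 * n_trials):.0%}")
```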
(Image: Thinkstock)

Filed under brain brain regions brain activity brain function multitasking parietal cortex neuroscience science

410 notes

Mental picture of others can be seen using fMRI
It is possible to tell who a person is thinking about by analyzing images of his or her brain. Our mental models of people produce unique patterns of brain activation, which can be detected using advanced imaging techniques, according to a study by Cornell University neuroscientist Nathan Spreng and his colleagues.
"When we looked at our data, we were shocked that we could successfully decode who our participants were thinking about based on their brain activity," said Spreng, assistant professor of human development in Cornell’s College of Human Ecology.
Understanding and predicting the behavior of others is a key to successfully navigating the social world, yet little is known about how the brain actually models the enduring personality traits that may drive others’ behavior, the authors say. This ability allows us to anticipate how someone will act in a situation that may not have happened before.
To learn more, the researchers asked 19 young adults to learn about the personalities of four people who differed on key personality traits. Participants were given different scenarios (e.g., sitting on a bus when an elderly person gets on and there are no seats) and asked to imagine how a specified person would respond. During the task, their brains were scanned using functional magnetic resonance imaging (fMRI), which measures brain activity by detecting changes in blood flow.
They found that different patterns of brain activity in the medial prefrontal cortex (mPFC) were associated with each of the four different personalities. In other words, which person was being imagined could be accurately identified based solely on the brain activation pattern.
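Identifying a person from an activation pattern amounts to a pattern classification problem. The sketch below is a hypothetical nearest-centroid illustration on simulated voxel patterns; the study's actual classifier and data are not described here, so all names and numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_train, n_test = 50, 20, 10

# Each imagined person gets a distinct underlying voxel pattern,
# observed through noisy trials.
prototypes = rng.normal(size=(4, n_voxels))

def trials(person, n):
    return prototypes[person] + 0.8 * rng.normal(size=(n, n_voxels))

# Average the training trials into one template per person.
centroids = np.stack([trials(p, n_train).mean(axis=0) for p in range(4)])

def decode(pattern):
    """Return the person whose training centroid is closest to the pattern."""
    return int(np.argmin(np.linalg.norm(centroids - pattern, axis=1)))

correct = sum(decode(t) == p for p in range(4) for t in trials(p, n_test))
print(f"decoding accuracy: {correct / (4 * n_test):.0%}")
```

As long as each person's pattern is more distinct than the trial-to-trial noise, held-out trials can be assigned to the right person well above the 25% chance level.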
The results suggest that the brain codes the personality traits of others in distinct brain regions and this information is integrated in the medial prefrontal cortex (mPFC) to produce an overall personality model used to plan social interactions, the authors say.
"Prior research has implicated the anterior mPFC in social cognition disorders such as autism and our results suggest people with such disorders may have an inability to build accurate personality models," said Spreng. "If further research bears this out, we may ultimately be able to identify specific brain activation biomarkers not only for diagnosing such diseases, but for monitoring the effects of interventions."

Filed under brain brain activity mental models neuroimaging medial prefrontal cortex neuroscience science

77 notes

How the brain loses and regains consciousness
Study reveals brain patterns produced by a general anesthesia drug; work could help doctors better monitor patients.
Since the mid-1800s, doctors have used drugs to induce general anesthesia in patients undergoing surgery. Despite their widespread use, little is known about how these drugs create such a profound loss of consciousness.
In a new study that tracked brain activity in human volunteers over a two-hour period as they lost and regained consciousness, researchers from MIT and Massachusetts General Hospital (MGH) have identified distinctive brain patterns associated with different stages of general anesthesia. The findings shed light on how one commonly used anesthesia drug exerts its effects, and could help doctors better monitor patients during surgery and prevent rare cases of patients waking up during operations.
Anesthesiologists now rely on a monitoring system that takes electroencephalogram (EEG) information and combines it into a single number between zero and 100. However, that index actually obscures the information that would be most useful, according to the authors of the new study, which appears in the Proceedings of the National Academy of Sciences the week of March 4.
“When anesthesiologists are taking care of someone in the operating room, they can use the information in this article to make sure that someone is unconscious, and they can have a specific idea of when the person may be regaining consciousness,” says senior author Emery Brown, an MIT professor of brain and cognitive sciences and health sciences and technology and an anesthesiologist at MGH.

Filed under anesthesia brain consciousness brain activity EEG neuroscience science

160 notes

Research advances understanding of the human brain
Advanced neuroimaging techniques are giving researchers new insight into how the human brain plans and controls limb movements. This advance could one day lead to new understanding of disease and dysfunction in the brain and has important implications for movement-impaired patient populations, like those who suffer from spinal cord injuries.
Randy Flanagan (Psychology and Centre for Neuroscience Studies), working with colleagues at Western University, used functional magnetic resonance imaging (fMRI) to uncover which regions of the human brain are used to plan hand actions with the left and right arm. This study, spearheaded by Jason Gallivan, a Banting postdoctoral fellow at Queen’s, found that by using the fMRI signals from several different brain regions, they could predict the limb to be used (left vs. right) and the hand action to be performed (grasping vs. touching an object), moments before the movement was actually executed.
“We are trying to understand how the brain plans actions,” says Dr. Gallivan. “By using highly sensitive analysis techniques that enable the detection of subtle changes in brain activity patterns, we can reveal which of a series of actions a volunteer is merely intending to do, seconds later. Mapping and characterizing these predictive signals across the human brain allows us to pinpoint the key brain structures involved in generating normal, everyday behaviours.”
In another study, Dr. Flanagan and doctoral student Jonathan Diamond examined how the brain learns object mechanical properties, knowledge that is essential for skilled manipulation. They found that, through experience, humans use mismatches between predicted and actual fingertip forces and between predicted and actual object motions to build internal representations, or models, of the mechanical properties of the objects.
“The goal of this work is to understand the representations underlying skilled manipulation,” explains Dr. Flanagan. “This is important because it will enable us to better characterize deficits in manipulation tasks that often result from stroke and neurological diseases.”
Dr. Flanagan, Dr. Gallivan, and Ingrid Johnsrude (Psychology and Centre for Neuroscience Studies) have recently been awarded a CIHR operating grant to support ongoing neuroimaging work.
Both research papers were published in the Journal of Neuroscience. Read Dr. Flanagan’s paper here and read the joint paper here.
(Image: Getty Images)

Filed under brain spinal cord injuries neuroimaging brain activity limb movements neuroscience science

123 notes

Changes in patterns of brain activity predict fear memory formation
Psychologists at the University of Amsterdam (UvA) have discovered that changes in patterns of brain activity during fearful experiences predict whether a long-term fear memory is formed. The research results have recently been published in the prestigious scientific journal ‘Nature Neuroscience’.
Researchers Renee Visser MSc, Dr Steven Scholte, Tinka Beemsterboer MSc and Prof. Merel Kindt discovered that they can predict future fear memories by looking at patterns of brain activity during fearful experiences. Up until now, there was no way of predicting fear memory. It was also, above all, unclear whether the selection of information to be stored in the long-term memory occurred at the time of fear learning or after the event.
Picture predicts pain stimulus
During magnetic resonance imaging (MRI) of the brain, participants saw neutral pictures of faces and houses, some of which were followed by a small electric shock. In this way, the participants formed fear memories: they showed fear responses when shown the pictures that had been paired with shocks. This fear response can be measured in the brain, but is also evident from increased pupil dilation when someone sees the picture. After a few weeks, the participants returned to the lab and were shown the same images, and brain activity and pupil diameter were once again measured. The extent to which the pupil dilated on seeing the images that had previously been followed by a shock was taken as an expression of the previously formed fear memory.
Pattern Analysis
To analyse the fMRI data, the researchers applied Multi-Voxel Pattern Analysis (MVPA) to the spatial patterns of brain activity. By correlating the patterns evoked by different stimulus presentations, it is possible to measure the extent to which two stimuli share a neural representation. It appears that images that have nothing in common, such as houses and faces, show increasing neural pattern similarity when they predict danger, but not when they do not, and this is accompanied by stronger fear responses. The extent to which this occurs is an indication of fear memory formation: the stronger the response during learning, the stronger the fear response will be in the long term.
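The correlation step can be sketched with simulated data. The toy example below is an assumption for illustration, not the UvA analysis: it shows that two unrelated voxel patterns become correlated once they share a common "threat" component, mirroring the rise in pattern similarity for shock-predicting images.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 100

threat = rng.normal(size=n_voxels)   # shared "danger" component

def pattern(signature, threat_weight):
    # A stimulus pattern is its own voxel signature plus, if the stimulus
    # predicts a shock, a shared threat-related component.
    return signature + threat_weight * threat

face = rng.normal(size=n_voxels)
house = rng.normal(size=n_voxels)

def similarity(a, b):
    return np.corrcoef(a, b)[0, 1]

r_neutral = similarity(pattern(face, 0.0), pattern(house, 0.0))
r_paired = similarity(pattern(face, 1.5), pattern(house, 1.5))
print(f"neutral pair: r = {r_neutral:+.2f}; shock-paired pair: r = {r_paired:+.2f}")
```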
These findings may lead to greater insights into the formation of emotional memory. As a result, it is possible to conduct experimental research into the mechanisms that strengthen, weaken or even erase fear memory in a more direct fashion, without having to wait until the fear memory is expressed.

Filed under brain activity fear memory memory formation fear memory psychology neuroscience science

152 notes

Brain-to-brain interface allows transmission of tactile and motor information between rats
Researchers have electronically linked the brains of pairs of rats for the first time, enabling them to communicate directly to solve simple behavioral puzzles. A further test of this work successfully linked the brains of two animals thousands of miles apart—one in Durham, N.C., and one in Natal, Brazil.
The results of these projects suggest the future potential for linking multiple brains to form what the research team is calling an “organic computer,” which could allow sharing of motor and sensory information among groups of animals. The study was published Feb. 28, 2013, in the journal Scientific Reports.
"Our previous studies with brain-machine interfaces had convinced us that the rat brain was much more plastic than we had previously thought," said Miguel Nicolelis, M.D., PhD, lead author of the publication and professor of neurobiology at Duke University School of Medicine. "In those experiments, the rat brain was able to adapt easily to accept input from devices outside the body and even learn how to process invisible infrared light generated by an artificial sensor. So, the question we asked was, ‘if the brain could assimilate signals from artificial sensors, could it also assimilate information input from sensors from a different body?’"
To test this hypothesis, the researchers first trained pairs of rats to solve a simple problem: to press the correct lever when an indicator light above the lever switched on, which rewarded the rats with a sip of water. They next connected the two animals’ brains via arrays of microelectrodes inserted into the area of the cortex that processes motor information.

Filed under brain activity electrical stimulation cortex behavioral decision neuroscience science
