Neuroscience

Articles and news from the latest research reports.

Posts tagged neuroimaging

287 notes

The science of magic: it’s not all hocus pocus 
Think of your favourite magic trick. Is it as grandiose as David Copperfield’s Death Saw, or is it as simple as making a coin disappear in front of your very eyes?
These two very different tricks have the same effect; they delight and astound, leaving the audience to ponder (usually unsuccessfully):

How did they do that?

But while magic has entertained us for thousands of years, it also has a long and colourful history of informing areas of scientific research, from cognitive psychology to treatment of paralysis.
How could such a seemingly innocuous form of entertainment affect such diverse areas?
Uncovering magic’s secrets
In 1893, French psychologist Alfred Binet managed to co-opt five of the country’s most prominent magicians to help him understand illusions.
His interest in the development of cinema led him to record and view their performances frame by frame.
He was able to analyse the movement of the magicians as an animated sequence with the hope of understanding how audiences could be deceived by the magic performed right in front of them.
In his 1894 article La Psychologie de la Prestidigitation, Binet concluded that magical illusions were created by so many little optical tricks that:

to perceive them could be quite as difficult as to count with the naked eye the grains of sand on the seashore.

A 2008 article by a group of research psychologists argued that it was time to acknowledge magic’s influence on the cognitive sciences, opening a new field called the “science of magic”.
In 2010, neuroscientists Stephen Macknik and Susana Martinez-Conde coined the term “neuromagic” in their book Sleights of Mind.
The pair published some of their research findings in Nature, co-authored with not one, but four of the world’s leading magicians.
Like Binet more than a century before, they saw the value of working directly with magicians.
Perceiving blindness
Magic has finally emerged from the box labelled “entertainment” and now shines a light on one of the most perplexing areas of mind studies – perception.
Perception is key in many magic techniques. Audience members will follow a magician’s hand when he or she gestures in a curved line – but not when the line is straight, to give just one example.
Scientific attempts to understand perceptual processes have largely relied on functional Magnetic Resonance Imaging (fMRI) – medical imaging techniques that identify brain activity through changes in its blood flow.
Scientists also study eye movements using head-mounted eye trackers to ascertain objects of visual focus.
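The eye-tracking side of this work is algorithmically simple enough to sketch. Below is a minimal dispersion-based fixation detector in Python (the standard I-DT approach; the threshold values and the simulated gaze track are illustrative assumptions, not parameters from any particular study):

```python
import numpy as np

def detect_fixations(gaze, max_dispersion=1.0, min_samples=5):
    """Dispersion-based (I-DT-style) fixation detection.

    gaze: (N, 2) array of x/y gaze positions in degrees of visual angle.
    A run of samples counts as a fixation while its dispersion
    (x range + y range) stays below max_dispersion.
    Returns a list of (start_index, end_index) pairs.
    """
    def dispersion(window):
        return np.ptp(window[:, 0]) + np.ptp(window[:, 1])

    fixations, start, n = [], 0, len(gaze)
    while start <= n - min_samples:
        end = start + min_samples
        if dispersion(gaze[start:end]) <= max_dispersion:
            # Grow the window while the eye stays put.
            while end < n and dispersion(gaze[start:end + 1]) <= max_dispersion:
                end += 1
            fixations.append((start, end - 1))
            start = end
        else:
            start += 1
    return fixations

# Simulated track: fixate one point, saccade away, fixate another.
rng = np.random.default_rng(0)
fix1 = rng.normal([10.0, 5.0], 0.05, size=(20, 2))
saccade = np.linspace([10.0, 5.0], [20.0, 8.0], 5)
fix2 = rng.normal([20.0, 8.0], 0.05, size=(20, 2))
gaze = np.vstack([fix1, saccade, fix2])

print(detect_fixations(gaze))  # two fixation spans, separated by the saccade
```

The detector finds where gaze *lands*; whether the viewer *attends* to what lands there is exactly the gap the article goes on to describe.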
But much of our visual perception cannot be understood as a direct fit between seeing something and that thing registering in our attention.
Looking but not seeing
Our everyday perception is littered with episodes that psychologists call “inattentional blindness” and “change blindness”.
In other words, something happens in front of us but because our attention is elsewhere, we don’t register having seen it.
When a change goes unnoticed because it happens gradually, it is referred to as change blindness – one of the best-known demonstrations is British psychologist Richard Wiseman’s colour-changing card trick.
When something fully visible goes unnoticed because our attention is engaged elsewhere, it’s called inattentional blindness.
An experiment by American psychologists Daniel Simons and Christopher Chabris – the famous “invisible gorilla” study – is by far the most famous illustration of this, and won them an Ig Nobel Prize in 2004.
But while Wiseman’s card “trick” and Simons and Chabris’ experiment aren’t technically magic tricks, magic provides an arena for observing how our visual perception is often at odds with the objects and events happening before our very eyes.
Misdirection is a staple of the magician’s repertoire, and it demonstrates the perceptual rift between looking at something and attending to it. It is this rift that fascinates neuroscientists and neuropsychologists.
Commonly thought to be about speed – isn’t the hand quicker than the eye? – misdirection is actually more about leading us to focus only on a particular area.
When a magician throws a ball into the air and it seemingly vanishes, the trick works because the audience is following the magician’s gaze – not his hand.
After genuinely throwing the ball into the air numerous times, the magician then performs the identical movement – only without the ball – and most people will still see a ball fly into the air and disappear.
The magician has misdirected your gaze into following his and deployed a combination of inattentional and change blindness.
A neurological perspective
What we also learn from this neurologically is that implied movement stimulates brain functioning in much the same way as watching an actual movement.
That your gaze can differ from your attention is something that magicians have long exploited.
So now neurologists are looking to magic to help answer questions such as:

Why don’t we always see something right in front of us?
Why do our eyes more easily follow curved rather than straight gestures across space?

Magic, which has exploited such aspects of the visual for centuries, offers us an intriguing framework for exploring perception – and the potential for understanding our perceptual system by investigating how magic exploits its blind spots and gaps is enormous.
It has become a sophisticated research method and field, helping to create more intuitive human-computer interface designs and to advance rehabilitation techniques for people physically impaired by neurological conditions such as stroke.
It is even being used to study problems in social responsiveness across the autism spectrum.
All we need to do now is convince more magicians to give up their secrets – but how easy that will be remains to be seen.


Filed under perception magic tricks neuroimaging inattentional blindness change blindness psychology neuroscience science

95 notes

Memory, the Adolescent Brain, and Lying: Understanding the Limits of Neuroscientific Evidence in the Law

Brain scans are increasingly able to reveal whether or not you believe you remember some person or event in your life. In a new study presented at a cognitive neuroscience meeting today, researchers used fMRI brain scans to detect whether a person recognized scenes from their own lives, as captured in some 45,000 images by digital cameras. The study is seeking to test the capabilities and limits of brain-based technology for detecting memories, a technique being considered for use in legal settings.

“The advancement and falling costs of fMRI, EEG, and other techniques will one day make it more practical for this type of evidence to show up in court,” says Francis Shen of the University of Minnesota Law School, who is chairing a session on neuroscience and the law at a meeting of the Cognitive Neuroscience Society (CNS) in San Francisco this week. “But technological advancement on its own doesn’t necessarily lead to use in the law.” Still, as the technology has advanced and the legal system has sought more empirical evidence, neuroscience and the law are intersecting more often than in previous decades.

In U.S. courts, neuroscientific evidence has been used largely in cases involving brain injury litigation or questions of impaired ability. In some cases outside the United States, however, courts have used brain-based evidence to check whether a person has memories of legally relevant events, such as a crime. New companies also are claiming to use brain scans to detect lies – although judges have not yet admitted this evidence in U.S. courts. These developments have rallied some in the neuroscience community to take a critical look at the promise and perils of such technology in addressing legal questions – working in partnership with legal scholars through efforts such as the MacArthur Foundation Research Network on Law and Neuroscience.

Recognizing your own memories

What inspired Anthony Wagner, a cognitive neuroscientist at Stanford University, to test fMRI uses for memory detection was a case in June 2008 in Mumbai, India, in which a judge cited EEG evidence as indicating that a murder suspect held knowledge about the crime that only the killer could possess. “It appeared that the brain data held considerable sway,” says Wagner, who points out that the methods used in that case have not been subject to extensive peer review.

Since then, Wagner and colleagues have conducted a number of experiments to test whether brain scans can discriminate between stimuli that people perceive as old or new, and – more objectively – whether or not they have previously encountered a particular person, place, or thing. To date, Wagner and colleagues have had success in the lab using fMRI-based analyses to determine whether someone recognizes a person or perceives them as unfamiliar, but not in determining whether they have in fact seen that person before.

In a new study presented today, his team sought to take the experiments out of the lab and into the real world by outfitting participants with digital cameras around their necks that automatically took photos of the participants’ everyday experiences. Over a multi-week period, the cameras yielded 45,000 photos per participant.

Wagner’s team then took brief photo sequences of individual events from the participants’ lives and showed them to the participants in the fMRI scanner, along with photo sequences from other subjects as the control stimuli. The researchers analyzed their brain patterns to determine whether or not the participants were recognizing the sequences as their own. “We did quite well with most subjects, with a mean accuracy of 91% in discriminating between event sequences that the participant recognized as old and those that the participant perceived as unfamiliar,” Wagner says. “These findings indicate that distributed patterns of brain activity, as measured with fMRI, carry considerable information about an individual’s subjective memory experience – that is, whether or not they are remembering the event.”
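The logic of such a decoding analysis can be sketched in a few lines of Python. This toy version – simulated “voxel” patterns and a simple nearest-centroid classifier, not the actual data or classifier from Wagner’s study – shows how distributed activity patterns can discriminate recognized from novel items:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for multi-voxel fMRI patterns: each trial is a vector of
# voxel activations. "Recognized" trials get a small additive signal on a
# subset of voxels; "novel" trials are noise only. (Hypothetical data.)
n_trials, n_voxels = 100, 200
signal = np.zeros(n_voxels)
signal[:40] = 0.8                                # memory-related voxels
X_old = rng.normal(0, 1, (n_trials, n_voxels)) + signal
X_new = rng.normal(0, 1, (n_trials, n_voxels))
X = np.vstack([X_old, X_new])
y = np.array([1] * n_trials + [0] * n_trials)    # 1 = recognized, 0 = novel

# Split trials into train/test, then fit a nearest-centroid decoder.
idx = rng.permutation(len(y))
train, test = idx[:150], idx[150:]
c_old = X[train][y[train] == 1].mean(axis=0)
c_new = X[train][y[train] == 0].mean(axis=0)

# Classify each held-out trial by whichever centroid it lies closer to.
d_old = np.linalg.norm(X[test] - c_old, axis=1)
d_new = np.linalg.norm(X[test] - c_new, axis=1)
pred = (d_old < d_new).astype(int)
accuracy = (pred == y[test]).mean()
print(f"decoding accuracy: {accuracy:.2f}")      # well above the 0.5 chance level
```

The same logic also shows why countermeasures work: if a participant deliberately changes what they think about, the distributed pattern no longer matches either centroid reliably.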

In another new study, Wagner and colleagues tested whether people can “beat the technology” by using countermeasures to alter their brain patterns. Back in the lab, the researchers showed participants individual faces and later asked them whether the faces were old or new. “Halfway through the memory test, we stopped and told them ‘What we are actually trying to do is read out from your brain patterns whether or not you are recognizing the face or perceiving it as novel, and we’ve been successful with other subjects in doing this in the past. Now we want you to try to beat the system by altering your neural responses.’” The researchers instructed the participants to think about a familiar person or experience when presented with a new face, and to focus on a novel feature of the face when presented a previously encountered face.

“In the first half of the test, during which participants were just making memory decisions, we were well above chance in decoding from brain patterns whether they recognized the face or perceived it as novel. However, in the second half of the test, we could classify neither whether they recognized the face nor whether the face was objectively old or new,” Wagner says. Within a forensic setting, Wagner says, it is conceivable that a suspect could use such countermeasures to try to mask the brain patterns associated with memory.

Wagner says that his work to date suggests that the technology may have some utility in reading out brain patterns in cooperative individuals but that the uses are much more uncertain with uncooperative individuals. However, Wagner stresses that the method currently does not distinguish well between whether a person’s memory reflects true or false recognition. He says that it is premature to consider such evidence in the courts because many additional factors await future testing, including the effects of stress, practice, and time between the experience and the memory test.

Overgeneralizing the adolescent brain

A general challenge to the use of neuroscientific evidence in legal settings, Wagner says, is that most studies are at the group rather than the individual level. “The law cares about a particular individual in a particular situation right in front of them,” he says, and the science often cannot speak to that specificity.

Shen cites the challenge of making individualized inferences from group-based data as one of the major issues facing the use of neuroscience evidence in court. “This issue has come up in the context of juvenile justice, where the adolescent brain development data confirms behavioral data that on average 17-year-olds are more impulsive than adults, but does not tell us whether a particular 17-year-old, namely the one on trial, was less able to control his/her actions on the day and in the manner in question,” he says.

Indeed, B.J. Casey of the Weill Medical College of Cornell University says that too often we overgeneralize the lack of self-control among adolescents. Although adolescents do show poor self-control as a group, some situations and individuals are more prone to this breakdown than others.

“It is not that teens can’t make decisions – they can, and they can do so efficiently,” Casey says. “It is when they must make decisions in the heat of the moment – in the presence of potential or perceived threats, among peers – that the court should consider diminished responsibility of teens while still holding them accountable for their behavior.” Research suggests that this diminished ability is due to the immature development of circuitry involved in processing negative or positive cues in the environment in subcortical limbic regions and then in regulating responses to those cues in the prefrontal cortex.

The body of research to date is at the group-level, however, and is not yet able to comment on the neurobiological maturity of an individual adolescent. To help provide more guidance on this issue in legal settings, Casey and colleagues are working alongside legal scholars on a developmental imaging study, funded by the MacArthur Foundation, that is examining behaviors relevant to juvenile criminal behavior, including impulsivity and peer influence.

Making real-world connections

The same type of work – connecting brain imaging to particular behaviors in the real world – is ongoing in a number of other areas, including fMRI-based lie detection and linking negligence to specific mental states. “It’s a big leap to go from a laboratory setting, in which impulse control may be measured by one’s ability to not press a button in response to a stimulus, to the real world, where the question is whether someone had the requisite self-control not to tie up an innocent person and throw them off a bridge,” Shen says. “I don’t see neuroscience solving these big problems anytime soon, and so the question for law becomes: What do we do with this uncertainty? I think this is where we’re at right now, and where we’ll be for some time.”

“With a few notable exceptions – such as death penalty cases, cases where a juvenile is facing a very stiff sentence, and brain injury litigation – ‘law and neuroscience’ is not familiar to most lawyers,” Shen says. “But this might change – and soon.” The ongoing work is vital, he says, for laying the foundation for that future, and he hopes that more neuroscientists will collaborate with legal scholars.

Filed under brain scans neuroimaging brain activity law memory neuroscience adolescent brain science

111 notes

Babies’ brains to be mapped in the womb and after birth
UK scientists have embarked on a six-year project to map how nerve connections develop in babies’ brains while still in the womb and after birth.
By the time a baby takes its first breath many of the key pathways between nerves have already been made. And some of these will help determine how a baby thinks or sees the world, and may have a role to play in the development of conditions such as autism, scientists say.
But how this rich neural network assembles in the baby before birth is relatively uncharted territory.
Researchers from Guy’s and St Thomas’ Hospital, King’s College London, Imperial College and Oxford University aim to produce a dynamic wiring diagram of how the brain grows, at a level of detail that they say has been impossible until now.
They hope that by charting the journeys of bundles of nerves in the final three months of pregnancy, doctors will be able to understand more about how they can help in situations when this process goes wrong.
Prof David Edwards, director of the Centre for the Developing Brain, who is leading the research, says: “There is a distressing number of children in our society who grow up with problems because of things that happen to them around the time of birth or just before birth.
“It is very important to be able to scan babies before they are born, because we can capture a period when an awful lot is changing inside the brain, and it is a time when a great many of the things that might be going wrong do seem to be going wrong.”
‘Neural networks’
The study - known as the Developing Human Connectome Project - hopes to look at more than 1,500 babies, studying many aspects of their neurological development.
By examining the brains of babies while they are still growing in the womb, as well as those born prematurely and at full term, the scientists will try to define baselines of normal development and investigate how these may be affected by problems around birth.
And they plan to share their map with the wider research community.
Central to this project are advanced MRI scanning techniques, which the scientists say are able to pick up on details of the growing brain that have been difficult to capture until now.
While in the womb, foetuses are free to somersault in their amniotic sacs, and this constant movement has so far hindered clear images of growing brains.
But researchers at the Centre for the Developing Brain have found ways to counter the effects of these movements, building up full three-dimensional pictures while the foetus is in motion.
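One widely used building block for this kind of motion compensation is registration: estimating how far the image content has moved between acquisitions and undoing that displacement. A minimal sketch, assuming a pure in-plane translation (real foetal motion correction must handle full 3D rigid motion slice by slice, which is far harder):

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy image "slice" and a copy displaced by (3, -5) pixels, standing in
# for movement between two acquisitions of the same anatomy.
ref = rng.normal(size=(64, 64))
moved = np.roll(ref, (3, -5), axis=(0, 1))

# Phase correlation: the normalised cross-power spectrum of the two images
# inverse-transforms to a sharp peak at the displacement.
F_ref, F_moved = np.fft.fft2(ref), np.fft.fft2(moved)
cross = F_moved * np.conj(F_ref)
corr = np.fft.ifft2(cross / np.abs(cross)).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
# Peaks beyond the midpoint wrap around to negative displacements.
shift = tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))
print("estimated shift:", shift)                         # → (3, -5)

# Undo the motion to realign the displaced copy with the reference.
aligned = np.roll(moved, (-shift[0], -shift[1]), axis=(0, 1))
print("realigned exactly:", np.allclose(aligned, ref))   # → True
```

With many such pairwise alignments, displaced slices can be resampled into a single consistent 3D volume even though the subject never held still.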
And by placing the MRI machine in the neonatal intensive care unit at Evelina Children’s Hospital in London they are one of the few centres in the world to have a scanner in such close proximity to the babies who often need it most, Prof Edwards says.
This means the same scanning system can be used to find out more about the brains of the sickest and smallest newborn babies, he says.
‘Macro level’
Daniel Rueckert, professor of visual information processing at Imperial College London, who is also involved in the research, says: “We are trying to look at brain connectivity in two ways: firstly, from a structural perspective, to find out which parts of the brain are wired to other parts. And secondly we are looking at functional connectivity - how strongly two brain regions are linked across time and activity.”
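Functional connectivity in this sense is, at its simplest, a correlation between regional signals over time. A toy sketch (the four “regions” and their time series are simulated for illustration; real analyses work on preprocessed BOLD signals across many more regions):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated time series for four brain regions (e.g. a mean signal per
# region over 200 time points). Regions 0 and 1 share a common driving
# input, so they should appear strongly "functionally connected";
# regions 2 and 3 are independent noise.
t = 200
common = rng.normal(size=t)
regions = np.vstack([
    common + 0.3 * rng.normal(size=t),   # region 0
    common + 0.3 * rng.normal(size=t),   # region 1
    rng.normal(size=t),                  # region 2
    rng.normal(size=t),                  # region 3
])

# Functional connectivity here = Pearson correlation between regional
# time series. Structural connectivity (which fibres physically link the
# regions) needs different data, such as diffusion MRI.
fc = np.corrcoef(regions)
print(np.round(fc, 2))   # fc[0, 1] is high; fc[2, 3] is near zero
```

The two notions can disagree: regions with no direct fibre between them can still show correlated activity via a common input, which is one reason the project measures both.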
But Prof Partha Mitra, a neuroscientist at Cold Spring Harbor Laboratory, New York state, says we need to be aware of the limitations of the technology in use.
“It would obviously be a very good thing to know more about the circuits in the developing human brain. Much of what we know hasn’t changed in a hundred years and has come from dissection studies.
“But we need to keep in mind the imaging techniques we have are indirect – we can’t open up a human brain and look at the connections while someone is alive, so we rely on these non-invasive methods. But there is a big gap between the real circuits in the brain and what images can show us.”
Prof Rueckert acknowledges that this map will provide a “macro-level” view of the developing brain and not be the “final answer”.
But he points to early results from the adult version of this project - the Human Connectome Project, based in the US: “There is so much evidence already from the adult project that there are significant changes in the brain that can be mapped with the technology we have now.
“It will be incredibly useful to be able to do this with the still growing and developing brain – perhaps giving us more time to intervene when things go wrong.”


Filed under Developing Human Connectome Project infants brain mapping brain development neuroimaging neuroscience science

64 notes

Lights, Chemistry, Action: New Method for Mapping Brain Activity
Building on their history of innovative brain-imaging techniques, scientists at the U.S. Department of Energy’s Brookhaven National Laboratory and collaborators have developed a new way to use light and chemistry to map brain activity in fully awake, moving animals. The technique employs light-activated proteins to stimulate particular brain cells and positron emission tomography (PET) scans to trace the effects of that site-specific stimulation throughout the entire brain. As described in a paper published online today in the Journal of Neuroscience, the method will allow researchers to map exactly which downstream neurological pathways are activated or deactivated by stimulation of targeted brain regions, and how that brain activity correlates with particular behaviors and/or disease conditions.
“This technique gives us a new way to look at the function of specific brain cells and map which brain circuits are active in a wide range of neuropsychiatric diseases — from depression to Parkinson’s disease, neurodegenerative disorders, and drug addiction — and also to monitor the effects of various treatments,” said the paper’s lead author, Panayotis (Peter) Thanos, a neuroscientist and director of the Behavioral Neuropharmacology and Neuroimaging Section — part of the National Institute on Alcohol Abuse and Alcoholism (NIAAA) Laboratory of Neuroimaging at Brookhaven Lab — and a professor at Stony Brook University. “Because the animals are awake and able to move during stimulation, we can also directly study how their behavior correlates with brain activity,” he said.
The new brain-mapping method combines very recent advances in a field known as “optogenetics” — the use of optics (light activation) and genetics (genetically coded light-sensitive proteins) to control the activity of individual neurons, or nerve cells — and Brookhaven’s historical development of radioactively labeled chemical tracers to track biological activity with PET scanners. 
The scientists used a modified virus to deliver a light-sensitive protein to particular brain cells in rats. Genetic coding can deliver the protein to specifically targeted brain-cell receptors. Then, after stimulating those proteins with light shone through an optical fiber inserted through a tiny tube called a cannula, they monitored overall brain activity using a radiotracer known as 18FDG, which serves as a stand-in for glucose, the body’s (and brain’s) main source of energy. 
The unique chemistry of 18FDG causes it to be temporarily “trapped” inside cells that are hungry for glucose — those activated by the brain stimulation — and remain there long enough for the detectors of a PET scanner to pick up the radioactive signal, even after the animals are anesthetized to ensure they stay still for scanning. But because the animals were awake and moving when the tracer was injected and the brain cells were being stimulated, the scans reveal what parts of the brain were activated (or deactivated) under those conditions, giving scientists important information about how those brain circuits function and correlate with the animals’ behaviors.
"In this paper, we wanted to stimulate the nucleus accumbens, a key part of the brain involved in reward that is very important to understanding drug addiction," Thanos said. "We wanted to activate the cells in that area and see which brain circuits were activated and deactivated in response." 
The scientists used the technique to trace activation and deactivation in a number of key pathways, and confirmed their results with other analysis techniques. 
The method can reveal even more precise effects.
"If we want to know more about the role played by specific types of receptors — say the dopamine D1 or D2 receptors involved in processing reward — we could tailor the light-sensitive protein probe to specifically stimulate one or the other to tease out those effects," he said.
Another important aspect is that the technique does not require the scientists to identify in advance the regions of the brain they want to investigate, but instead provides candidate brain regions involved anywhere in the brain – even regions not well understood.
"We look at the whole brain," Thanos said. "We take the PET images and co-register them with anatomical maps produced with magnetic resonance imaging (MRI), and use statistical techniques to do comparisons voxel by voxel. That allows us to identify which areas are more or less activated under the conditions we are exploring without any prior bias about what regions should be showing effects.”
After they see a statistically significant effect, they use the MRI maps to identify the locations of those particular voxels to see what brain regions they are in.
"This opens it up to seeing an effect in any region in the brain — even parts where you would not expect or think to look — which could be a key to new discoveries," he said.
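The voxel-by-voxel comparison Thanos describes can be illustrated with a toy example. This is a hedged sketch only: the array shapes, noise levels, group sizes, and the uncorrected threshold are all illustrative and not taken from the study, which would also apply multiple-comparison correction across the whole brain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "co-registered" volumes: two groups of scans, 4x4x4 voxels each.
# In the real pipeline these would be PET volumes aligned to an MRI atlas.
stimulated = rng.normal(1.0, 0.1, size=(8, 4, 4, 4))
control = rng.normal(1.0, 0.1, size=(8, 4, 4, 4))

# Inject a genuine activation at one voxel in the stimulated group.
stimulated[:, 2, 2, 2] += 0.5

def voxelwise_t(a, b):
    """Two-sample t statistic at every voxel (equal-variance form)."""
    na, nb = a.shape[0], b.shape[0]
    ma, mb = a.mean(axis=0), b.mean(axis=0)
    va, vb = a.var(axis=0, ddof=1), b.var(axis=0, ddof=1)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / np.sqrt(pooled * (1 / na + 1 / nb))

t_map = voxelwise_t(stimulated, control)

# Flag voxels exceeding an (illustrative) threshold; a real study must
# correct for the thousands of comparisons being made simultaneously.
hot = np.argwhere(np.abs(t_map) > 4.0)
print(hot)  # the injected voxel (2, 2, 2) should stand out
```

The point of the whole-brain approach is visible even in this toy: nothing in the analysis tells it where to look, yet the activated voxel emerges from the statistics alone.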

Filed under brain brain activity brain cells neurodegenerative diseases neuroimaging optogenetics neuroscience science

397 notes

Today the White House announced its goal to fund Brain Research, in hopes of furthering understanding of brain disorders and degenerative diseases such as Alzheimer’s.

Two years ago Scientific American magazine sent me to the University of Texas at Austin to borrow a human brain. They needed me to photograph a normal, adult, non-dissected brain that the university had obtained by trading a syphilitic lung with another institution. The specimen was waiting for me, but before I left they asked if I’d like to see their collection.

I walked into a storage closet filled with approximately one hundred human brains, none of them normal, taken from patients at the Texas State Mental Hospital. The brains sat in large jars of fluid, each labeled with a date of death or autopsy, a brief description in Latin, and a case number. These case numbers corresponded to microfilm held by the State Hospital detailing medical histories. Yet, however amazing and fascinating the collection was, it had sat largely untouched and unstudied for nearly three decades.

Driving back to my studio with a brain snugly belted into the passenger seat, I quickly became obsessed with the idea of photographing the collection, preserving the already decaying brains, and pairing the images with their medical histories. I asked my friend Alex Hannaford, a features journalist, to help me trace the collection’s history back to the 1950s.

Over the past year, while working this idea into a book, we’ve learned how storied the collection is: it was originally intended to be displayed and studied, but without funding it stagnated, and the microfilm histories of each brain were destroyed years ago.

My original vision of a photo book accompanied by medical data and a comprehensive essay turned into a story of loss and neglect. But Alex continued to pursue some scientific hope for the collection. After discussions with various neuroscientists, we learned that MRI technology and specialized DNA-scanning techniques still offer hope. And with the new possibilities of federal brain-research funding, this collection’s secrets may yet be unlocked.

As we begin the hunt for someone to publish my 230 images accompanied by Alex’s 14,000-word essay, the university has taken a new interest in the collection. It is currently planning to make MRI scans of the brains.

Malformed – A Collection of Human Brains from the Texas State Mental Hospital by Adam Voorhes

Filed under brain brain research mental illness neuroimaging Adam Voorhes photography neuroscience science

159 notes

The subtle hallmarks of psychiatric illness can reveal themselves even remotely

Most people are so attuned to the nuances of social interaction that they can detect clues to mental illness while playing a strategy game with someone they have never met.

That was the finding of a team of scientists led by Read Montague, director of the Human Neuroimaging Laboratory at the Virginia Tech Carilion Research Institute. The researchers discovered that healthy people and those with borderline personality disorder displayed different patterns of behavior while playing an online strategy game, so much so that when healthy players played people with borderline personality disorder, they gave up on trying to predict what their partners would do next.

For their large neuroimaging study, the scientists used a multiround social interaction game, the investor-trustee game, to study the level of strategic thinking in 195 pairs of subjects. In each pair, one player played the investor and the other the trustee. The investor chose how much money to send the trustee, and the trustee in turn decided how much to return to the investor. Profit required the cooperation of both players.
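The mechanics of the investor-trustee game are simple to sketch. This is a hedged toy example: the $20 endowment and the tripling of the sent amount are the values commonly used in this paradigm, not necessarily the exact parameters of this study, and the strategies shown are illustrative.

```python
def play_round(invest, repay, endowment=20, multiplier=3):
    """One round: the investor sends part of the endowment, it is
    multiplied in transit, and the trustee returns some share."""
    sent = invest(endowment)       # investor's choice, 0..endowment
    pot = sent * multiplier        # amount the trustee receives
    returned = repay(pot)          # trustee's choice, 0..pot
    investor_profit = endowment - sent + returned
    trustee_profit = pot - returned
    return investor_profit, trustee_profit

# A cooperative pair: the investor sends everything, the trustee
# splits the tripled pot evenly.
inv, tru = play_round(lambda e: e, lambda pot: pot // 2)
print(inv, tru)  # 30 30 -- both beat the 20 the investor started with

# A trustee who keeps everything breaks cooperation -- the pattern the
# borderline-personality-disorder players in the study tended to show.
inv, tru = play_round(lambda e: e, lambda pot: 0)
print(inv, tru)  # 0 60
```

The payoff structure makes the point in the article concrete: mutual profit requires sustained trust across rounds, so an erratic partner forces the other player back to shallow, reactive strategies.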

“This classic tit-for-tat game allows us to probe people’s responses to the social gestures of others,” said Montague, who also directs the Computational Psychiatry Unit, an academic center that uses computational models to understand mental disease. “It further allows us to see how people form models of one another. These insights are important for understanding a range of mental illnesses, as the ability to infer other people’s intentions is an essential component of healthy cognition.”

The scientists classified the investors according to varying levels of strategic depth of thought. The healthy subjects fell into three categories: about half simply responded to the amount the other player sent; about one-quarter built a model of their partner’s behavior; and the remaining quarter considered not just their model of their partner, but also their partner’s models of them. 

Not surprisingly, the depth-of-thought style of play correlated with success, with the players who looked deeper into interactions making considerably more money than those who played at a shallow level.

When healthy subjects played people with borderline personality disorder, though, they were far less likely to exhibit depth of thought.

“People with borderline personality disorder are characterized by their unstable relationships, and when they play this game, they tend to break cooperation,” said Montague. “The healthy subjects picked up on the erratic behavior, likely without even realizing it, and far fewer played strategically.”

Notably, the functional magnetic resonance imaging of the subjects’ brains revealed that each category of player showed distinct neural correlates of learning signals associated with differing depths of thought. The scientists used hyperscanning, a technique Montague invented that enables subjects in different brain scanners to interact in real time, regardless of geography. Hyperscanning allows scientists to eavesdrop on brain activity during social exchanges in scanners, whether across the hallway or across the world.

“We’re always modeling other people, and our brains have a substantial amount of neural tissue devoted to pondering our interactions with other people,” Montague said. “This study is a start to turning neural signals into numbers – not just theory-of-mind arguments, but actual numbers. And when we can do that across thousands of people, we should start to gain insights into psychopathologies – what circuits are involved, what brain regions are engaged, and how injuries, congenital disorders, and genetic defects might play into psychiatric illness.”

Montague believes the study represents a significant contribution to the field of computational psychiatry, which seeks to bring computational clout to efforts to understand mental dysfunction. “Traditional psychiatric categories are useful yet incomplete,” said Montague, who delivered a TEDGlobal talk on the growing field of computational psychiatry last year. “Computational psychiatry enables us to redefine with a new lexicon – a mathematical one – the standard ways we think about mental illness.”

Computationally based insights may one day help psychiatry achieve better precision in diagnosis and treatment, Montague said. But until scientists have the right instruments, they cannot even begin to make those connections.

“The exquisite sensitivity that most people have to social gestures gives us a valuable opening,” Montague said. “We’re hoping to invent a tool – almost a human inkblot test – for identifying and characterizing mental disorders in which social interactions go awry.”

(Source: vtnews.vt.edu)

Filed under mental illness social interaction borderline personality disorder strategic thinking neuroimaging psychology neuroscience science

46 notes

Experts Call for Research on Prevalence of Delayed Neurological Dysfunction After Head Injury

One of the most controversial topics in neurology today is the prevalence of serious permanent brain damage after traumatic brain injury (TBI). Long-term studies and a search for genetic risk factors are required in order to predict an individual’s risk for serious permanent brain damage, according to a review article published by Sam Gandy, MD, PhD, from the Icahn School of Medicine at Mount Sinai in a special issue of Nature Reviews Neurology dedicated to TBI.

About one percent of the population in the developed world has experienced TBI, which can cause serious long-term complications such as Alzheimer’s disease (AD) or chronic traumatic encephalopathy (CTE), a condition marked by neuropsychiatric features such as dementia, Parkinson’s disease, depression, and aggression. Patients may be normal for decades after the TBI event before they develop AD or CTE. Although CTE was first described in boxers in the 1920s, its association with battlefield exposure and with sports such as football and hockey has only recently begun to attract public attention.  

"Athletes such as David Duerson and Junior Seau have brought to light the need for preventive measures and early diagnosis of CTE, but it remains highly controversial because hard data are not available that enable prediction of the prevalence, incidence, and individual risk for CTE," said Dr. Gandy, who is Professor of Neurology and Psychiatry and Director of the Center for Cognitive Health at Mount Sinai. "We need much more in the way of hard facts before we can advise the public of the proper level of concern."

Led by Dr. Gandy, the authors evaluated the pathological impact of single-incident TBI, such as that sustained during military combat, and of mild, repetitive TBI, as seen in boxers and National Football League (NFL) players, to learn what measures need to be taken to identify risk and incidence early and reduce long-term complications.

Mild, repetitive TBI, as is seen in boxers, football players, and occasionally military veterans who suffer multiple blows to the head, is most often associated with CTE, or a condition called “boxer’s dementia.” Boxing scoring includes a record of knockouts, providing researchers with a starting point in interpreting an athlete’s risk. But no such records exist for NFL players or soldiers on the battlefield.

Dr. Gandy and the authors of the Nature Reviews Neurology piece suggest recruiting large cohorts of players and military veterans in multi-center trials, where players and soldiers maintain a TBI diary for the duration of their lives. The researchers also suggest a genome-wide association study to clearly identify risk factors of CTE. “Confirmed biomarkers of risk, diagnostic tools, and long-term trials are needed to fully characterize this disease and develop prevention and treatment strategies,” said Dr. Gandy.  

Amyloid imaging, which has recently been approved by the U.S. Food and Drug Administration, may be useful as a monitoring tool in TBI, since amyloid plaques are a hallmark of AD-type neurodegeneration. Amyloid imaging consists of a PET scan with an injection of a contrast agent called florbetapir, which binds to amyloid plaque in the brain, allowing researchers to visualize plaque deposits, determine whether the diagnosis is CTE or AD, and monitor progression over time. Tangle imaging is expected to be available soon, complementing amyloid imaging and providing an affirmative diagnosis of CTE. Dr. Gandy and colleagues recently reported the use of amyloid imaging to exclude AD in a retired NFL player with memory problems under their care at Mount Sinai.  

Clinical diagnosis and evaluation of mild, repetitive TBI is a challenge, indicating a significant need for new biomarkers to identify damage, the authors report. Biomarkers in cerebrospinal fluid (CSF) may reflect damage done to neurons post-TBI. Previous research has identified a marked increase in CSF biomarkers in boxers when the CSF is taken soon after a fight, and this may predict which boxers are more likely to develop detrimental long-term effects. CSF samples are currently obtained only by invasive lumbar puncture; a blood test would be preferable.

"Biomarkers would be a valuable tool both from a research perspective in comparing them before and after injury and from a clinical perspective in terms of diagnostic and prognostic guidance," said Dr. Gandy. "Having the biomarker information will also help us understand the mechanism of disease development, the reasons for its delayed progression, and the pathway toward effective therapeutic interventions."

Currently, there are no treatments for boxer’s dementia or CTE, but these diseases are preventable. “With more protective equipment, adjustments in the rules of the game, and overall education among athletes, coaches, and parents, we should be able to offer informed consent to prospective sports players and soldiers. With the right combination of identified genetic risk factor, biomarkers, and better drugs, we should be able to dramatically improve the outcome of TBI and prevent the long-term, devastating effects of CTE,” said Dr. Gandy.

(Source: mountsinai.org)

Filed under brain damage brain injury TBI neurodegeneration neuroimaging neurology neuroscience science

667 notes

Exploring Temple Grandin’s Brain

The world’s most famous person with autism uses her unusual cognitive abilities to reduce animal suffering.

Animal scientist Temple Grandin has an extraordinary mind. Probably the world’s most famous person with autism, she designed widely used livestock handling systems to reduce animal suffering. She is not just autistic but an autistic savant, meaning that she has unusual cognitive abilities, such as a photographic memory and excellent spatial skills. She “thinks in pictures,” she says, helping her understand what animals perceive.

Her brain is equally remarkable, according to a team of neuroimaging experts who study brain changes in autism at the University of Utah. Neuroscientist Jason Cooperrider and colleagues scanned Grandin’s brain using three different methods: high-resolution magnetic resonance imaging (MRI), which captures the structure of the brain; diffusion tensor imaging (DTI), a method to trace the connections between brain regions; and functional MRI, which indicates brain activity. The images reveal an unusual neural landscape that reflects Grandin’s deficits and talents. 

Overall, the right side of her brain dominates. One theory of autistic savantism suggests that during fetal development or early in life, some developmental abnormality affects the brain’s left side, resulting in the difficulties that many autistic people have with words and social interaction, functions typically processed by the left hemisphere.

To make up for this, the right hemisphere sometimes overcompensates, which can lead to special abilities in music, art, and visual memory. Savantism is not well understood, but between a tenth and a third of people with autism may have some of these abilities. 

Cooperrider’s team also discovered that Grandin’s amygdala, the almond-shaped structure thought to play an important role in emotional processing, is larger than normal. This was not a surprising finding because, among other functions, this region processes fear and anxiety, affective states often affected by autism. Her fusiform gyrus is smaller than normal—also not a surprise, since this region is involved in recognizing faces, a social skill that autism may disrupt.

Every brain is different, especially where autism is concerned, and Cooperrider’s study compares Grandin’s brain with only three controls, not enough to draw broad conclusions. But some of the patterns Cooperrider and his colleagues discovered back up other studies, and suggest new regions to explore.

Filed under brain brain development Temple Grandin autism savants neuroimaging neuroscience psychology science

136 notes

MRI shows brain abnormalities in migraine patients

A new study suggests that migraines are related to brain abnormalities present at birth and others that develop over time. The research is published online in the journal Radiology.

Migraines are intense, throbbing headaches, sometimes accompanied by nausea, vomiting and sensitivity to light. Some patients experience auras, a change in visual or sensory function that precedes or occurs during the migraine. More than 300 million people suffer from migraines worldwide, according to the World Health Organization.

Previous research on migraine patients has shown atrophy of cortical regions in the brain related to pain processing, possibly due to chronic stimulation of those areas. Cortical refers to the cortex, or outer layer of the brain.

Much of that research has relied on voxel-based morphometry, which provides estimates of the brain’s cortical volume. In the new study, Italian researchers used a different approach: a surface-based MRI method to measure cortical thickness.

"For the first time, we assessed cortical thickness and surface area abnormalities in patients with migraine, which are two components of cortical volume that provide different and complementary pieces of information," said Massimo Filippi, M.D., director of the Neuroimaging Research Unit at the University Ospedale San Raffaele and professor of neurology at the University Vita-Salute’s San Raffaele Scientific Institute in Milan. "Indeed, cortical surface area increases dramatically during late fetal development as a consequence of cortical folding, while cortical thickness changes dynamically throughout the entire life span as a consequence of development and disease."

Dr. Filippi and colleagues used magnetic resonance imaging (MRI) to acquire T2-weighted and 3-D T1-weighted brain images from 63 migraine patients and 18 healthy controls. Using special software and statistical analysis, they estimated cortical thickness and surface area and correlated it with the patients’ clinical and radiologic characteristics.

Compared to controls, migraine patients showed reduced cortical thickness and surface area in regions related to pain processing. There was only minimal anatomical overlap between the two kinds of abnormality, with cortical surface area abnormalities more pronounced and more widely distributed than cortical thickness abnormalities. The presence of aura and of white matter hyperintensities—areas of high intensity on MRI that appear to be more common in people with migraine—was related to the regional distribution of cortical thickness and surface area abnormalities, but not to disease duration or attack frequency.

"The most important finding of our study was that cortical abnormalities that occur in patients with migraine are a result of the balance between an intrinsic predisposition, as suggested by cortical surface area modification, and disease-related processes, as indicated by cortical thickness abnormalities," Dr. Filippi said. "Accurate measurements of cortical abnormalities could help characterize migraine patients better and improve understanding of the pathophysiological processes underlying the condition."

Additional research is needed to fully understand the meaning of cortical abnormalities in the pain processing areas of migraine patients, according to Dr. Filippi.

"Whether the abnormalities are a consequence of the repetition of migraine attacks or represent an anatomical signature that predisposes to the development of the disease is still debated," he said. "In my opinion, they might contribute to make migraine patients more susceptible to pain and to an abnormal processing of painful conditions and stimuli."

The researchers are conducting a longitudinal study of the patient group to see if their cortical abnormalities are stable or tend to worsen over the course of the disease. They are also studying the effects of treatments on the observed modifications of cortical folding and looking at pediatric patients with migraine to assess whether the abnormalities represent a biomarker of the disease.

(Source: eurekalert.org)

Filed under brain migraines cortex cortical abnormalities neuroimaging neuroscience science
