Neuroscience

Articles and news from the latest research reports.

Subconscious mental categories help brain sort through everyday experiences
Your brain knows it’s time to cook when the stove is on, and the food and pots are out. When you rush away to calm a crying child, though, cooking is over and it’s time to be a parent. Your brain processes and responds to these occurrences as distinct, unrelated events.
But it remains unclear exactly how the brain breaks such experiences into “events,” the related groups that help us mentally organize the day’s many situations. A dominant account of event perception, known as prediction error, holds that our brain draws a line between the end of one event and the start of another when things take an unexpected turn (such as a suddenly distraught child).
Challenging that idea, Princeton University researchers suggest that the brain may actually work from subconscious mental categories it creates based on how it considers people, objects and actions to be related. Specifically, these details are sorted by temporal relationship, meaning that the brain recognizes that they tend to — or tend not to — pop up near one another at specific times, the researchers report in the journal Nature Neuroscience.
So, a series of experiences that usually occur together (temporally related) form an event until a non-temporally related experience occurs and marks the start of a new event. In the example above, pots and food usually make an appearance during cooking; a crying child does not. Therein lies the partition between two events, so says the brain.
This dynamic, which the researchers call “shared temporal context,” works much like the categories our minds use to organize objects, explained lead author Anna Schapiro, a doctoral student in Princeton’s Department of Psychology.
"We’re providing an account of how you come to treat a sequence of experiences as a coherent, meaningful event," Schapiro said. "Events are like object categories. We associate robins and canaries because they share many attributes: They can fly, have feathers, and so on. These associations help us build a ‘bird’ category in our minds. Events are the same, except the attributes that help us form associations are temporal relationships."
Supporting this idea, the researchers captured brain activity showing that abstract symbols and patterns with no obvious similarity nonetheless excited overlapping groups of neurons when presented to study participants as a related group. From these data, the researchers constructed a computer model that can predict and outline the neural pathways through which people process situations, and can reveal whether those situations are considered part of the same event.
The parallels drawn between event details are based on personal experience, Schapiro said. People need to have an existing understanding of the various factors that, when combined, correlate with a single experience.
"Everyone agrees that ‘having a meeting’ or ‘chopping vegetables’ is a coherent chunk of temporal structure, but it’s actually not so obvious why that is if you’ve never had a meeting or chopped vegetables before," Schapiro said.
"You have to have experience with the shared temporal structure of the components of the events in order for the event to hold together in your mind," she said. "And the way the brain implements this is to learn to use overlapping neural populations to represent components of the same event."
During a series of experiments, the researchers presented human participants with sequences of abstract symbols and patterns. Without the participants’ knowledge, the symbols were grouped into three “communities” of five symbols each, with shapes in the same community tending to appear near one another in the sequence.
After watching these sequences for roughly half an hour, participants were asked to segment the sequences into events in a way that felt natural to them. They tended to break the sequences into events that coincided with the communities the researchers had prearranged, which shows that the brain quickly learns the temporal relationships between the symbols, Schapiro said.
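The sequence design described above can be sketched as a biased random walk over three symbol communities. This is a minimal illustration only: the article does not specify the actual transition rule, so the stay probability and the symbol numbering below are assumptions.

```python
import random

# Hypothetical sketch of the stimulus design: 15 abstract symbols split into
# three "communities" of five (symbols 0-4, 5-9, 10-14), with transitions
# biased to stay within the current community. The stay probability is an
# assumed parameter, not a figure from the study.
COMMUNITIES = [list(range(0, 5)), list(range(5, 10)), list(range(10, 15))]

def generate_sequence(length, stay_prob=0.9, seed=0):
    rng = random.Random(seed)
    community = rng.randrange(3)
    seq = []
    for _ in range(length):
        seq.append(rng.choice(COMMUNITIES[community]))
        if rng.random() > stay_prob:  # occasionally hop to another community
            community = rng.choice([c for c in range(3) if c != community])
    return seq

seq = generate_sequence(1000)
# Consecutive symbols usually belong to the same community.
same = sum(1 for a, b in zip(seq, seq[1:]) if a // 5 == b // 5)
print(same / (len(seq) - 1))  # close to stay_prob
```

Participants watching such a stream get no explicit cue to the boundaries; the only structure is statistical, which is what makes their segmentation behavior informative.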
The researchers then used functional magnetic resonance imaging to observe brain activity as participants viewed the symbol sequences. Images in the same community produced similar activity in neuron groups at the border of the brain’s frontal and temporal lobes, a region involved in processing meaning.
The researchers interpreted this activity as the brain associating the images with one another, and therefore treating them as one event. Different neural groups activated when a symbol from a different community appeared, which the researchers interpreted as the start of a new event.
The researchers fashioned these data into a computational neural-network model that revealed the neural connection between what is being experienced and what has been learned. When a simulated stimulus is entered, the model can predict the next burst of neural activity throughout the network, from first observation to processing.
"The model allows us to articulate an explicit hypothesis about what kind of learning may be going on in the brain," Schapiro said. "It’s one thing to show a neural response and say that the brain must have changed to arrive at that state. To have a specific idea of how that change may have occurred could allow a deeper understanding of the mechanisms involved."
Michael Frank, a Brown University associate professor of cognitive, linguistic and psychological sciences, said that the Princeton researchers uniquely apply existing concepts of “similarity structure” used in such fields as semantics and artificial intelligence to provide evidence for their account of event perception. These concepts pertain to the ability to identify within large groups of data those subsets that share specific commonalities, said Frank, who is familiar with the research but had no role in it.
"The work capitalizes on well-grounded computational models of similarity structure and applies it to understanding how events and their boundaries are detected and represented," Frank said. "The authors noticed that the ability to represent items within an event as similar to each other — and thus different than those in ensuing events — might rely on similar machinery as that applied to detect clustering in community structures."
The model “naturally” lays out the process of shared temporal context in a way that is validated by work in other fields, yet distinct in relation to event perception, Frank said.
"The same types of models have been applied to understanding language — for example, how the meaning of words in a sentence can be contextualized by earlier words or concepts," Frank said. "Thus the model and experiments identify a common and previously unappreciated mechanism that can be applied to both language and event parsing, which are otherwise seemingly unrelated domains."

Filed under: brain, brain processes, prediction error, experiences, events, psychology, neuroscience, science

Spring cleaning in your brain: U-M stem cell research shows how important it is
Deep inside your brain, a legion of stem cells lies ready to turn into new brain and nerve cells whenever and wherever you need them most. While they wait, they keep themselves in a state of perpetual readiness – poised to become any type of nerve cell you might need as your cells age or get damaged.
Now, new research from scientists at the University of Michigan Medical School reveals a key way they do this: through a type of internal “spring cleaning” that both clears out garbage within the cells, and keeps them in their stem-cell state.
In a paper published online in Nature Neuroscience, the U-M team shows that a particular protein, called FIP200, governs this cleaning process in neural stem cells in mice. Without FIP200, these crucial stem cells suffer damage from their own waste products — and their ability to turn into other types of cells diminishes.
It is the first time that this cellular self-cleaning process, called autophagy, has been shown to be important to neural stem cells.
The findings may help explain why aging brains and nervous systems are more prone to disease or permanent damage, as a slowing rate of self-cleaning autophagy hampers the body’s ability to deploy stem cells to replace damaged or diseased cells. If the findings translate from mice to humans, the research could open up new avenues to prevention or treatment of neurological conditions.
In a related review article just published online in the journal Autophagy, the lead U-M scientist and colleagues from around the world discuss the growing evidence that autophagy is crucial to many types of tissue stem cells and embryonic stem cells as well as cancer stem cells.
As stem cell-based treatments continue to develop, the authors say, it will be increasingly important to understand the role of autophagy in preserving stem cells’ health and ability to become different types of cells.
“The process of generating new neurons from neural stem cells, and the importance of that process, is pretty well understood, but the mechanism at the molecular level has not been clear,” says Jun-Lin Guan, Ph.D., the senior author of the FIP200 paper and the organizing author of the autophagy and stem cells review article. “Here, we show that autophagy is crucial for maintenance of neural stem cells and differentiation, and show the mechanism by which it happens.”
Through autophagy, he says, neural stem cells can regulate levels of reactive oxygen species (ROS) – sometimes known as free radicals – that can build up in the low-oxygen environment of the brain regions where neural stem cells reside. Abnormally high levels of ROS can cause neural stem cells to start differentiating.
Guan is a professor in the Molecular Medicine & Genetics division of the U-M Department of Internal Medicine, and in the Department of Cell & Developmental Biology.
A long path to discovery
The new discovery, made after 15 years of research with funding from the National Institutes of Health, shows the importance of investment in lab science – and the role of serendipity in research.
Guan has been studying the role of FIP200 – whose full name is focal adhesion kinase family interacting protein of 200 kD – in cellular biology for more than a decade. Though he and his team knew it was important to cellular activity, they didn’t have a particular disease connection in mind. Together with colleagues in Japan, they did demonstrate its importance to autophagy – a process whose importance to disease research continues to grow as scientists learn more about it.
Several years ago, Guan’s team stumbled upon clues that FIP200 might be important in neural stem cells while studying an entirely different phenomenon. They were using mice lacking FIP200 as comparisons in a study when an observant postdoctoral fellow noticed that the mice experienced rapid shrinkage of the brain regions where neural stem cells reside.
“That effect was more interesting than what we were actually intending to study,” says Guan, as it suggested that without FIP200, something was causing damage to the home of neural stem cells that normally replace nerve cells during injury or aging.
In 2010, they worked with other U-M scientists to show FIP200’s importance to another type of stem cell, those that generate blood cells. In that case, deleting the gene that encodes FIP200 led to increased proliferation and the ultimate depletion of those cells, called hematopoietic stem cells.
But with neural stem cells, they report in the new paper, deleting the FIP200 gene led neural stem cells to die and ROS levels to rise. Only by giving the mice the antioxidant N-acetylcysteine could the scientists counteract the effects.
“It’s clear that autophagy is going to be important in various types of stem cells,” says Guan, pointing to the new paper in Autophagy that lays out what’s currently known about the process in hematopoietic, neural, cancer, cardiac and mesenchymal (bone and connective tissue) stem cells.
Guan’s own research is now exploring the downstream effects of defects in neural stem cell autophagy – for instance, how communication between neural stem cells and their niches suffers. The team is also looking at the role of autophagy in breast cancer stem cells, because of intriguing findings about the impact of FIP200 deletion on the activity of the p53 tumor suppressor gene, which is important in breast and other types of cancer. In addition, they will study the importance of p53 and p62, another key protein component for autophagy, to neural stem cell self-renewal and differentiation, in relation to FIP200.

Filed under: brain, neurons, stem cells, autophagy, proteins, nervous system, neuroscience, science

First objective measure of pain discovered in brain scan patterns
For the first time, scientists have been able to predict how much pain people are feeling by looking at images of their brains, according to a new study led by the University of Colorado Boulder.
The findings, published today in the New England Journal of Medicine, may lead to the development of reliable methods doctors can use to objectively quantify a patient’s pain. Currently, pain intensity can only be measured based on a patient’s own description, which often includes rating the pain on a scale of one to 10. Objective measures of pain could confirm these pain reports and provide new clues into how the brain generates different types of pain.
The new research results also may set the stage for the development of methods using brain scans to objectively measure anxiety, depression, anger or other emotional states.
“Right now, there’s no clinically acceptable way to measure pain and other emotions other than to ask a person how they feel,” said Tor Wager, associate professor of psychology and neuroscience at CU-Boulder and lead author of the paper.
The research team, which included scientists from New York University, Johns Hopkins University and the University of Michigan, used computer data-mining techniques to comb through images of 114 brains that were taken when the subjects were exposed to multiple levels of heat, ranging from benignly warm to painfully hot. With the help of the computer, the scientists identified a distinct neurologic signature for the pain.
“We found a pattern across multiple systems in the brain that is diagnostic of how much pain people feel in response to painful heat,” Wager said.
Going into the study, the researchers expected that if a pain signature could be found, it would likely be unique to each individual. If that were the case, a person’s pain level could only be predicted based on past images of his or her own brain. Instead, they found that the signature was transferable across different people, allowing the scientists to predict how much pain the applied heat was causing a person with between 90 and 100 percent accuracy, even with no prior brain scans of that individual to use as a reference point.
The scientists also were surprised to find that the signature was specific to physical pain. Past studies have shown that social pain can look very similar to physical pain in terms of the brain activity it produces. For example, one study showed that the brain activity of people who have just been through a relationship breakup — and who were shown an image of the person who rejected them — is similar to the brain activity of someone feeling physical pain.
But when Wager’s team tested to see if the newly defined neurologic signature for heat pain would also pop up in the data collected earlier from the heartbroken participants, they found that the signature was absent.
Finally, the scientists tested to see if the neurologic signature could detect when an analgesic was used to dull the pain. The results showed that the signature registered a decrease in pain in subjects given a painkiller.
The results of the study do not yet allow physicians to quantify physical pain, but they lay the foundation for future work that could produce the first objective tests of pain by doctors and hospitals. To that end, Wager and his colleagues are already testing how the neurologic signature holds up when applied to different types of pain.
“I think there are many ways to extend this study, and we’re looking to test the patterns that we’ve developed for predicting pain across different conditions,” Wager said. “Is the predictive signature different if you experience pressure pain or mechanical pain, or pain on different parts of the body?
“We’re also looking towards using these same techniques to develop measures for chronic pain. The pattern we have found is not a measure of chronic pain, but we think it may be an ‘ingredient’ of chronic pain under some circumstances. Understanding the different contributions of different systems to chronic pain and other forms of suffering is an important step towards understanding and alleviating human suffering.”

Filed under brain pain pain intensity chronic pain brain activity neuroscience science

397 notes

Today the White House announced its goal to fund Brain Research, in hopes of furthering understanding of brain disorders and degenerative diseases such as Alzheimer’s.

Two years ago Scientific American magazine sent me to the University of Texas at Austin to borrow a human brain. They needed me to photograph a normal, adult, non-dissected brain that the university had obtained by trading a syphilitic lung with another institution. The specimen was waiting for me, but before I left they asked if I’d like to see their collection.

I walked into a storage closet filled with approximately one hundred human brains, none of them normal, taken from patients at the Texas State Mental Hospital. The brains sat in large jars of fluid, each labeled with a date of death or autopsy, a brief description in Latin, and a case number. These case numbers corresponded to microfilm held by the State Hospital detailing medical histories. Yet, as amazing and fascinating as the collection was, it had sat largely untouched and unstudied for nearly three decades.

Driving back to my studio with a brain snugly belted into the passenger seat, I quickly became obsessed with the idea of photographing the collection, preserving the already decaying brains, and corresponding the images to their medical histories. I met with my friend Alex Hannaford, a features journalist, to help me find the collection’s history dating back to the 1950s.

Over the past year, while working this idea into a book, we’ve learned how storied the collection is: it was originally intended to be displayed and studied, but without funding it instead stagnated, and the microfilm histories of each brain were destroyed years ago.

My original vision of a photo book accompanied by medical data and a comprehensive essay turned into a story of loss and neglect. But Alex continued to pursue some scientific hope for the collection. After discussions with various neuroscientists we learned that through MRI technology and special techniques in DNA scanning there is still hope. And with the new possibilities of federal brain research funding, this collection’s secrets may yet be unlocked.

As we begin the hunt for someone to publish my 230 images accompanied by Alex’s 14,000-word essay, the university has taken new interest in the collection and is currently planning to make MRI scans of the brains.

Malformed – A Collection of Human Brains from the Texas State Mental Hospital by Adam Voorhes

Filed under brain brain research mental illness neuroimaging Adam Voorhes photography neuroscience science

75 notes

In autism, age at diagnosis depends on specific symptoms

The age at which a child with autism is diagnosed is related to the particular suite of behavioral symptoms he or she exhibits, new research from the University of Wisconsin-Madison shows.

Certain diagnostic features, including poor nonverbal communication and repetitive behaviors, were associated with earlier identification of an autism spectrum disorder, according to a study in the April issue of the Journal of the American Academy of Child and Adolescent Psychiatry. Displaying more behavioral features was also associated with earlier diagnosis.

"Early diagnosis is one of the major public health goals related to autism," says lead study author Matthew Maenner, a researcher at the UW-Madison Waisman Center. "The earlier you can identify that a child might be having problems, the sooner they can receive support to help them succeed and reach their potential."

But there is a large gap between current research and what is actually happening in schools and communities, Maenner adds. Although research suggests autism can be reliably diagnosed by age 2, the new analysis shows that fewer than half of children with autism are identified in their communities by age 5.

One challenge is that autism spectrum disorders (ASD) are extremely diverse. According to the criteria outlined in the Diagnostic and Statistical Manual of Mental Disorders Fourth Edition - Text Revision (DSM-IV-TR), the standard handbook used for classification of psychiatric disorders, there are more than 600 different symptom combinations that meet the minimum criteria for diagnosing autistic disorder, one subtype of ASD.

Previous research on age at diagnosis has focused on external factors such as gender, socioeconomic status, and intellectual disability. Maenner and his colleagues instead looked at patterns of the 12 behavioral features used to diagnose autism according to the DSM-IV-TR.

He and Maureen Durkin, a UW-Madison professor of population health and pediatrics and Waisman Center investigator, studied records of 2,757 8-year-olds from 11 surveillance sites in the nationwide Autism and Developmental Disabilities Monitoring Network, run by the Centers for Disease Control and Prevention (CDC). They found significant associations between the presence of certain behavioral features and age at diagnosis.

"When it comes to the timing of autism identification, the symptoms actually matter quite a bit," Maenner says.

In the study population, the median age at diagnosis (the age by which half the children were diagnosed) was 8.2 years for children with only seven of the listed behavioral features but dropped to just 3.8 years for children with all 12 of the symptoms.

The specific symptoms present also emerged as an important factor. Children with impairments in nonverbal communication, imaginary play, repetitive motor behaviors, and inflexibility in routines were more likely to be diagnosed at a younger age, while those with deficits in conversational ability, idiosyncratic speech and relating to peers were more likely to be diagnosed at a later age.

These patterns make a lot of sense, Maenner says, since they involve behaviors that may arise at different developmental times. The findings suggest that children who show fewer behavioral features or whose autism is characterized by symptoms typically identified at later ages may face more barriers to early diagnosis.

But they also indicate that more screening may not always lead to early diagnoses for everyone.

"Increasing the intensity of screening for autism might lead to identifying more children earlier, but it could also catch a lot of people at later ages who might not have otherwise been identified as having autism," Maenner says.

(Source: news.wisc.edu)

Filed under autism ASD diagnosis diagnostic features DSM-IV-TR psychology neuroscience science

170 notes

Researchers Confirm Multiple Genes Robustly Contribute to Schizophrenia Risk in Replication

Multiple genes contribute to risk for schizophrenia and appear to function in pathways related to transmission of signals in the brain and immunity, according to an international study led by Virginia Commonwealth University School of Pharmacy researchers.

By better understanding the molecular and biological mechanisms involved with schizophrenia, scientists hope to use this new genetic information to one day develop and design drugs that are more efficacious and have fewer side effects.

In a study published online in the April issue of JAMA Psychiatry, the JAMA Network journal, researchers used a comprehensive and unique approach to robustly identify genes and biological processes conferring risk for schizophrenia.

The researchers first used 21,953 subjects to examine over a million genetic markers. They then systematically collected results from other kinds of biological schizophrenia studies and combined all these results using a novel data integration approach.

The most promising genetic markers were tested again in a large collection of families with schizophrenia patients, a design that avoids pitfalls that have plagued genetic studies of schizophrenia in the past. The genes identified by this comprehensive approach were found to be involved in brain function, nerve cell development and immune response.

“Now that we have genes that are robustly associated with schizophrenia, we can begin to design much more specific experiments to understand how disruption of these genes may affect brain development and function,” said principal investigator Edwin van den Oord, Ph.D., professor and director of the Center for Biomarker Research and Personalized Medicine in the Department of Pharmacotherapy and Outcomes Science at the VCU School of Pharmacy.

“Also, some of these genes provide excellent targets for the development of new drugs,” he said.

One specific laboratory experiment currently underway at VCU to better understand the function of one of these genes, TCF4, is being led by Joseph McClay, Ph.D., a co-author on the study and assistant professor and laboratory director in the VCU Center for Biomarker Research and Personalized Medicine. TCF4 works by switching on other genes in the brain. McClay and colleagues are conducting a National Institutes of Health-funded study to determine all genes that are under the control of TCF4. By mapping the entire network, they aim to better understand how disruptions to TCF4 increase risk for schizophrenia.

“Our results also suggest that the novel data integration approach used in this study is a promising tool that potentially can be of great value in studies of a large variety of complex genetic disorders,” said lead author Karolina A. Aberg, Ph.D., research assistant professor and laboratory co-director of the Center for Biomarker Research and Personalized Medicine in the VCU School of Pharmacy.

(Image: iStockphoto)

Filed under schizophrenia genetic markers genes brain function immune response neuroscience science

159 notes

The subtle hallmarks of psychiatric illness can reveal themselves even remotely

Most people are so attuned to the nuances of social interaction that they can detect clues to mental illness while playing a strategy game with someone they have never met.

That was the finding of a team of scientists led by Read Montague, director of the Human Neuroimaging Laboratory at the Virginia Tech Carilion Research Institute. The researchers discovered that healthy people and those with borderline personality disorder displayed different patterns of behavior while playing an online strategy game, so much so that when healthy players played people with borderline personality disorder, they gave up on trying to predict what their partners would do next.

For their large neuroimaging study, the scientists used a multiround social interaction game, the investor-trustee game, to study the level of strategic thinking in 195 pairs of subjects. In each pair, one player played the investor and the other the trustee. The investor chose how much money to send the trustee, and the trustee in turn decided how much to return to the investor. Profit required the cooperation of both players.
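The payoff structure described above can be sketched in a few lines. The tripling of invested money below is an assumption borrowed from the standard form of the investor-trustee game; the article does not state the multiplier used in this study:

```python
# One round of the investor-trustee game. The investor sends a fraction of
# an endowment; the amount triples in transit (assumed convention); the
# trustee then chooses how much of the received money to return.
def play_round(endowment, invest_fraction, return_fraction):
    invested = endowment * invest_fraction
    received = 3 * invested               # trustee receives the tripled amount
    repaid = received * return_fraction   # trustee's chosen repayment
    investor_profit = repaid - invested
    trustee_profit = received - repaid
    return investor_profit, trustee_profit

# Full cooperation pays both sides ...
print(play_round(20, 1.0, 0.5))   # -> (10.0, 30.0)
# ... while a distrustful investor who sends nothing forgoes all gains.
print(play_round(20, 0.0, 0.5))   # -> (0.0, 0.0)
```

The numbers make the article's point concrete: profit requires both players to cooperate, which is why a partner's erratic repayments can make healthy investors abandon strategic play.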

“This classic tit-for-tat game allows us to probe people’s responses to the social gestures of others,” said Montague, who also directs the Computational Psychiatry Unit, an academic center that uses computational models to understand mental disease. “It further allows us to see how people form models of one another. These insights are important for understanding a range of mental illnesses, as the ability to infer other people’s intentions is an essential component of healthy cognition.”

The scientists classified the investors according to varying levels of strategic depth of thought. The healthy subjects fell into three categories: about half simply responded to the amount the other player sent; about one-quarter built a model of their partner’s behavior; and the remaining quarter considered not just their model of their partner, but also their partner’s models of them. 

Not surprisingly, the depth-of-thought style of play correlated with success, with the players who looked deeper into interactions making considerably more money than those who played at a shallow level.

When healthy subjects played people with borderline personality disorder, though, they were far less likely to exhibit depth of thought.

“People with borderline personality disorder are characterized by their unstable relationships, and when they play this game, they tend to break cooperation,” said Montague. “The healthy subjects picked up on the erratic behavior, likely without even realizing it, and far fewer played strategically.”

Notably, the functional magnetic resonance imaging of the subjects’ brains revealed that each category of player showed distinct neural correlates of learning signals associated with differing depths of thought. The scientists used hyperscanning, a technique Montague invented that enables subjects in different brain scanners to interact in real time, regardless of geography. Hyperscanning allows scientists to eavesdrop on brain activity during social exchanges in scanners, whether across the hallway or across the world.

“We’re always modeling other people, and our brains have a substantial amount of neural tissue devoted to pondering our interactions with other people,” Montague said. “This study is a start to turning neural signals into numbers – not just theory-of-mind arguments, but actual numbers. And when we can do that across thousands of people, we should start to gain insights into psychopathologies – what circuits are involved, what brain regions are engaged, and how injuries, congenital disorders, and genetic defects might play into psychiatric illness.”

Montague believes the study represents a significant contribution to the field of computational psychiatry, which seeks to bring computational clout to efforts to understand mental dysfunction. “Traditional psychiatric categories are useful yet incomplete,” said Montague, who delivered a TEDGlobal talk on the growing field of computational psychiatry last year. “Computational psychiatry enables us to redefine with a new lexicon – a mathematical one – the standard ways we think about mental illness.”

Computationally based insights may one day help psychiatry achieve better precision in diagnosis and treatment, Montague said. But until scientists have the right instruments, they cannot even begin to make those connections.

“The exquisite sensitivity that most people have to social gestures gives us a valuable opening,” Montague said. “We’re hoping to invent a tool – almost a human inkblot test – for identifying and characterizing mental disorders in which social interactions go awry.”

(Source: vtnews.vt.edu)

Filed under mental illness social interaction borderline personality disorder strategic thinking neuroimaging psychology neuroscience science

238 notes

Smell of rosemary ‘may improve memory’

The smell of rosemary could boost your memory, researchers said.

Aroma of essential oil from the herb could improve memory in healthy adults, according to researchers from the University of Northumbria. The smell may enhance the ability to remember events and to remember to complete tasks at particular times, they said.

A group of 66 people were given memory tests in either a rosemary-scented room or a room with no scent. Participants completed various tasks designed to assess their memory functions, including finding hidden objects and passing specified objects to researchers at particular times.

The results, presented at the British Psychological Society’s annual conference in Harrogate, showed that participants in the rosemary-scented room performed better on the prospective memory tasks than those in the room with no smell.

"We wanted to build on our previous research that indicated rosemary aroma improved long-term memory and mental arithmetic," said author Dr Mark Moss. "In this study we focused on prospective memory, which involves the ability to remember events that will occur in the future and to remember to complete tasks at particular times. This is critical for everyday functioning. For example, when someone needs to remember to post a birthday card or to take medication at a particular time."

Co-author Jemma McCready added: “These findings may have implications for treating individuals with memory impairments.

"It supports our previous research indicating that the aroma of rosemary essential oil can enhance cognitive functioning in healthy adults, here extending to the ability to remember events and to complete tasks in the future.

"Remembering when and where to go and for what reasons underpins everything we do, and we all suffer minor failings that can be frustrating and sometimes dangerous. Further research is needed to investigate if this treatment is useful for older adults who have experienced memory decline."

Filed under rosemary memory prospective memory performance psychology neuroscience science

44 notes

System Provides Clear Brain Scans of Awake, Unrestrained Mice

Setting a mouse free to roam might alarm most people, but not so for nuclear imaging researchers from the U.S. Department of Energy’s Thomas Jefferson National Accelerator Facility, Oak Ridge National Laboratory, Johns Hopkins Medical School and the University of Maryland who have developed a new imaging system for mouse brain studies.

Scientists use dynamic imaging of mice to follow changes in brain chemistry caused by the progression of disease or the administration of a drug, making it an effective research tool for developing better ways to diagnose disease and formulate better treatments. In most nuclear imaging studies, laboratory mice are drugged or bound in place so that their brains can be studied. However, the results of such research can be tainted by subjecting the mice to chemical or physical restraints, complicating studies of Alzheimer’s, dementia and Parkinson’s disease.

But for their nuclear medicine imaging studies, the researchers from Jefferson Lab, Oak Ridge, Johns Hopkins and Maryland used a new system they developed to acquire functional images of the brains of conscious, unrestrained and un-anesthetized mice. The so-called AwakeSPECT system was then used to document for the first time the effects of anesthesia on the action of a dopamine transporter imaging compound in the mouse brain. Such dopamine transporter imaging compounds are used for Alzheimer’s, dementia and Parkinson’s disease studies.

SPECT stands for Single-Photon Emission Computed Tomography. In this technique, an injected radionuclide collects in specific areas of the brain according to their function. The radionuclide emits gamma rays (single photons) that a detector collects in separate scans from many different angles, and an algorithm combines the scans to produce a three-dimensional image.
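The angle-by-angle reconstruction can be illustrated with a toy example. This is a minimal, unfiltered back-projection sketch using just two viewing angles and a synthetic activity map; real SPECT uses many angles and filtered or iterative reconstruction algorithms:

```python
import numpy as np

# Hypothetical 2D "brain" with one hot spot of radionuclide uptake.
activity = np.zeros((8, 8))
activity[2, 5] = 1.0

# Detector profiles from two angles: 0 degrees (sum over rows gives a
# column profile) and 90 degrees (sum over columns gives a row profile).
proj_0 = activity.sum(axis=0)    # camera viewing along one axis
proj_90 = activity.sum(axis=1)   # camera viewing along the perpendicular axis

# Unfiltered back-projection: smear each profile back across the image
# and sum. The smears reinforce each other only at the true source.
recon = proj_0[np.newaxis, :] + proj_90[:, np.newaxis]
peak = np.unravel_index(recon.argmax(), recon.shape)
# peak == (2, 5): the hot spot's location is recovered
```

With only two angles the reconstruction is crude (streaks remain along each viewing direction); adding more angles and a ramp filter is what turns this idea into a clean tomographic image.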

"The AwakeSPECT system does regular SPECT imaging of mice. SPECT is a nuclear medicine imaging technique that’s used in humans for various types of diagnostic studies. It’s also used in animal studies to facilitate the development and understanding of disease physiology," says Jefferson Lab’s Drew Weisenberger, who led the multi-institutional collaboration and directed the SPECT system development effort.

Weisenberger says the AwakeSPECT system uses two Jefferson Lab custom-built gamma cameras to image the radionuclide, as well as a system that processes the data to produce the three-dimensional images. An infrared camera system developed at Oak Ridge National Laboratory tracks movement of the mouse. Finally, a commercially available CT system provides additional anatomical information.

Researchers at Johns Hopkins Medical School, led by Martin Pomper, conducted the first mouse imaging studies with the new system. To prepare a mouse for imaging with AwakeSPECT, it is first tagged with three markers that are glued to its head for the infrared system to track. Once the radionuclide is injected, the mouse can then be imaged as it rests in a homey, burrow-like, clear tube. The beauty of the system is that it doesn’t require that the mouse (or potentially people, at a later stage) remain motionless. Two patents have been awarded to Jefferson Lab for the innovative technology associated with this system.

"We developed this system that, while acquiring SPECT images, uses infrared cameras that track the location and pose of the head. We use that information to then computationally remove motion artifacts from our SPECT imaging," he says.
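The core of that correction is a coordinate transform: each detected gamma event is mapped from the scanner (lab) frame back into the head's own frame using the tracked pose at the moment of detection. A minimal 2D sketch, with the pose reduced to a single rotation angle (the actual system tracks full 3D position and orientation from the glued-on markers):

```python
import numpy as np

def head_frame(event_xy, theta):
    """Map a detected event position from the lab frame back into the head
    frame, given the tracked head rotation theta (radians). A 2D rigid
    rotation only -- a toy stand-in for the full 6-DOF pose correction."""
    c, s = np.cos(-theta), np.sin(-theta)  # inverse rotation
    R = np.array([[c, -s], [s, c]])
    return R @ np.asarray(event_xy)

# A source fixed at (1, 0) in the head frame, seen while the head turns:
rng = np.random.default_rng(0)
for theta in rng.uniform(0, 2 * np.pi, 5):
    lab = np.array([np.cos(theta), np.sin(theta)])  # source as seen in lab
    assert np.allclose(head_frame(lab, theta), [1.0, 0.0])
```

Because every event lands at the same head-frame coordinates regardless of how the mouse has turned, the reconstructed image is sharp even though the animal never held still.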

In this recent study published online in The Journal of Nuclear Medicine, the researchers showed that AwakeSPECT can obtain detailed, functional images of the brain of a conscious mouse, as the mouse moves around freely in an enclosure.

Researchers also imaged the action of a drug often used to image dopamine transport in the brain, 123I-ioflupane, in awake and anesthetized mice. They found that the drug was absorbed less than half as well in awake mice, showing that the use of anesthetic could potentially confound drug uptake studies.

"We’ve shown the technology works. Now, you just have to make it a tool that more people will readily use," Weisenberger says.

Weisenberger says the next step is to improve the AwakeSPECT imager by upgrading the infrared tracking system, using newer technology for the SPECT imager, and by making the system more intuitive for animal researchers to operate.

Filed under AwakeSPECT brain scans gamma rays nuclear imaging spect imaging neuroscience science

280 notes

How ‘free will’ is implemented in the brain and is it possible to intervene in the process?
Researchers have been able to identify the precise moment when a network of nerve cells (neurons) in the brain creates the signal to perform an action, before a person is even aware of deciding to take that action. Now they are building on this work to make initial attempts to interfere with consciously made decisions by decoding the pattern of brain activity in real time before an action is taken.
Professor Gabriel Kreiman will tell the British Neuroscience Association Festival of Neuroscience (BNA2013) today (Tuesday): “This could be useful to help elucidate the mechanistic basis by which neuronal circuits orchestrate ‘free’ will.”
Normally it is difficult to research the activity of neurons in the brain because it involves implanting electrodes – an invasive procedure that would not be ethical to perform for scientific curiosity alone. However, Prof Kreiman, who is an associate professor at Harvard Medical School in Boston, USA, together with neurosurgeon Itzhak Fried from the University of California, Los Angeles (UCLA), had a rare opportunity to record the activity of over 1,000 neurons in two areas of the brain, the frontal and temporal lobes, when patients with epilepsy had electrodes implanted to try to identify the source of their seizures.
“These patients have epilepsy that does not respond to drug treatment; Itzhak Fried implanted their brains with very thin electrodes (microwires) of about 40 micrometres in diameter in order to localise the focus of a seizure onset for a potential surgical procedure to alleviate the seizures. The microwires capture the extracellular electrical activity of neurons. Patients stay in the hospital for about a week. During this time, we have a unique opportunity to interrogate the activity of neurons and neural ensembles in the human brain at high spatial and temporal resolution,” explains Prof Kreiman.
The researchers asked the patients to move their index finger to click a computer mouse and to report when they made that decision. “Based on the activity of small groups of neurons, we could predict this decision several hundreds of milliseconds and, in some cases, seconds before the action. In a variant of the main experiment, the patients were allowed to choose whether to use their left hand or right hand and we showed that we could also predict this decision.”
The researchers found that an increasing number of neurons in two specific brain regions started to become active before the person was aware of their decision to move their finger. The two regions were the supplementary motor area, which is thought to be the area for preparing to perform motor actions, and the anterior cingulate cortex, which has a number of roles including the signalling processes associated with reward.
Prof Kreiman believes that these results provide initial steps to elucidate the mechanism for the emergence of conscious will in humans. “The activity of multiple neurons in extremely simple neural circuits precedes volition – in this case the decision to make a simple movement – until a threshold is crossed and the action is taken,” he will say.
Knowing when this threshold will be reached could enable researchers to see whether it is possible to interfere and maybe change the decision before any action is taken. “We are now making initial attempts to interfere with volition by decoding the neural responses in real time and asking whether there is a ‘point of no return’ in the hierarchical chain of command from unconscious decisions to volition to action,” says Prof Kreiman.
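The real-time decoding described here can be illustrated with a deliberately simplified sketch: smooth binned spike counts into a population firing rate and report the first time bin where that rate crosses a threshold. This is a hypothetical toy decoder for illustration only; the functions (`population_rate`, `detect_onset`) and the single-threshold approach are assumptions of this sketch, not the methods used in the study.

```python
import numpy as np

def population_rate(spike_counts, window):
    """Smoothed population rate: spike counts (neurons x time bins)
    are averaged across neurons, then over a sliding time window."""
    mean_rate = spike_counts.mean(axis=0)        # average over neurons
    kernel = np.ones(window) / window            # boxcar smoothing
    return np.convolve(mean_rate, kernel, mode="same")

def detect_onset(spike_counts, threshold, window=5):
    """Return the first time bin at which the smoothed population rate
    crosses `threshold` (a stand-in for a 'point of no return'),
    or None if it is never crossed."""
    rate = population_rate(spike_counts, window)
    above = np.nonzero(rate >= threshold)[0]
    return int(above[0]) if above.size else None
```

In a real closed-loop experiment the decoder would run on streaming data and trigger an intervention the moment the threshold is crossed, rather than scanning a finished recording.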
How these findings fit into the concept of “free will” is more complicated. “The concept of free will has been debated for millennia. Ultimately, current scientific understanding strongly suggests that ‘will’ has to be orchestrated by neurons in our brains (as opposed to magic or religious beliefs or other notions). We have provided initial steps to try to disentangle which neurons are involved, to show where and how ‘will’ or ‘volition’ could be implemented in the brain.
“Our work does not say that life is predetermined, that we can predict the future and that we can, for instance, determine what you are going to eat for lunch two weeks from now, or who you are going to marry.
“We are saying that volition (like other aspects of consciousness) is a brain phenomenon that is instantiated by physical hardware, i.e. neurons.  We are making claims about volition for very simple tasks, such as moving an index finger or choosing which hand to use, over scales of hundreds of milliseconds to seconds. Nothing more. Nothing less.
“Ultimately, our actions depend on multiple variables, several of which are external (for instance, it rains, hence, I will take my umbrella) and cannot be decoded or predicted from neurons. However, our volitional decision of whether to take the red umbrella or the blue one today – ultimately perhaps the real core of free will – is dictated by neurons,” Prof Kreiman will conclude.

Filed under brain nerve cells free will neural activity decisions neural responses BNA2013 neuroscience science
