Neuroscience

Articles and news from the latest research reports.

Posts tagged prefrontal cortex

102 notes

Assessing Others: Evaluating the Expertise of Humans and Computer Algorithms

How do we come to recognize expertise in another person and integrate new information with our prior assessments of that person’s ability? The brain mechanisms underlying these sorts of evaluations—which are relevant to how we make decisions ranging from whom to hire, whom to marry, and whom to elect to Congress—are the subject of a new study by a team of neuroscientists at the California Institute of Technology (Caltech).

In the study, published in the journal Neuron, Antonio Rangel, Bing Professor of Neuroscience, Behavioral Biology, and Economics, and his associates used functional magnetic resonance imaging (fMRI) to monitor the brain activity of volunteers as they moved through a particular task. Specifically, the subjects were asked to observe the shifting value of a hypothetical financial asset and make predictions about whether it would go up or down. Simultaneously, the subjects interacted with an “expert” who was also making predictions.

Half the time, subjects were shown a photo of a person on their computer screen and told that they were observing that person’s predictions. The other half of the time, the subjects were told they were observing predictions from a computer algorithm, and instead of a face, an abstract logo appeared on their screen. However, in every case, the subjects were interacting with a computer algorithm—one programmed to make correct predictions 30, 40, 60, or 70 percent of the time.
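The agent setup can be sketched in a few lines (illustrative code, not the study's task software; all names are invented):

```python
import random

def agent_prediction(true_direction, accuracy, rng):
    """Return an agent's up/down call that matches the true market
    direction with probability `accuracy` — 0.3, 0.4, 0.6, or 0.7
    in the study's four conditions."""
    if rng.random() < accuracy:
        return true_direction
    return "up" if true_direction == "down" else "down"

# Simulate 10,000 trials of a 70%-accurate agent and count its hits.
rng = random.Random(42)
hits = sum(agent_prediction("up", 0.7, rng) == "up" for _ in range(10_000))
```

Whether the agent wore a face or a logo, this underlying accuracy schedule was identical — only the framing differed.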

Subjects’ trust in the expertise of the agents, whether “human” or not, was measured by how often the subjects bet on the agents’ predictions, as well as by how those bets changed over time as the subjects observed more of the agents’ predictions and their eventual accuracy.

This trust, the researchers found, turned out to be strongly linked to the accuracy of the subjects’ own predictions of the ups and downs of the asset’s value.

"We often speculate on what we would do in a similar situation when we are observing others—what would I do if I were in their shoes?" explains Erie D. Boorman, formerly a postdoctoral fellow at Caltech and now a Sir Henry Wellcome Research Fellow at the Centre for FMRI of the Brain at the University of Oxford, and lead author on the study. "A growing literature suggests that we do this automatically, perhaps even unconsciously."

Indeed, the researchers found that subjects increasingly sided with both “human” agents and computer algorithms when the agents’ predictions matched their own. Yet this effect was stronger for “human” agents than for algorithms.

This asymmetry—between the value the subjects placed on (presumably) human agents and on computer algorithms—was present both when the agents were right and when they were wrong, and it depended on whether the agents’ predictions matched the subjects’ own. When the agents were correct, subjects were more inclined to trust the human than the algorithm on future trials, provided the human’s predictions had matched their own. When the agents were wrong, human experts were readily and often “forgiven” for their blunders if the subject had made the same error. But this “benefit of the doubt” vote, as Boorman calls it, did not extend to computer algorithms: when an algorithm made inaccurate predictions, subjects appeared to discount its future predictions regardless of whether they had agreed with it.

Since the sequence of predictions offered by “human” and algorithm agents was perfectly matched across different test subjects, this finding shows that the mere suggestion that we are observing a human or a computer leads to key differences in how and what we learn about them.

A major motivation for this study was to tease out the difference between two types of learning: what Rangel calls “reward learning” and “attribute learning.” “Computationally,” says Boorman, “these kinds of learning can be described in a very similar way: We have a prediction, and when we observe an outcome, we can update that prediction.”

Reward learning, in which test subjects are given money or other valued goods in response to their own successful predictions, has been studied extensively. Social learning—specifically about the attributes of others (or so-called attribute learning)—is a newer topic of interest for neuroscientists. In reward learning, the subject learns how much reward they can obtain, whereas in attribute learning, the subject learns about some characteristic of other people.
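The shared computation Boorman describes — predict, observe an outcome, update the prediction — is the classic delta rule. A minimal sketch, with the function name and learning rate chosen purely for illustration:

```python
def update_belief(belief, outcome, learning_rate=0.1):
    """Delta-rule update: move the current estimate toward the
    observed outcome by a fraction of the prediction error."""
    prediction_error = outcome - belief
    return belief + learning_rate * prediction_error

# Track a belief about an agent's accuracy as its hits (1) and
# misses (0) are observed, starting from a neutral prior of 0.5.
belief = 0.5
for outcome in [1, 1, 0, 1, 1, 1, 0, 1]:
    belief = update_belief(belief, outcome)
```

In reward learning the quantity being updated is the subject's own expected payoff; in attribute learning it is an estimate of someone else's competence — the same arithmetic, applied to different variables.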

This self/other distinction shows up in the subjects’ brain activity, as measured by fMRI during the task. Reward learning, says Boorman, “has been closely correlated with the firing rate of neurons that release dopamine”—a neurotransmitter involved in reward-motivated behavior—and brain regions to which they project, such as the striatum and ventromedial prefrontal cortex. Boorman and colleagues replicated previous studies in showing that this reward system made and updated predictions about subjects’ own financial reward. Yet during attribute learning, another network in the brain—consisting of the medial prefrontal cortex, anterior cingulate gyrus, and temporal parietal junction, which are thought to be a critical part of the mentalizing network that allows us to understand the state of mind of others—also made and updated predictions, but about the expertise of people and algorithms rather than their own profit.

The differences in fMRI activity between assessments of human and nonhuman agents were subtler. “The same brain regions were involved in assessing both human and nonhuman agents,” says Boorman, “but they were used differently.”

"Specifically, two brain regions in the prefrontal cortex—the lateral orbitofrontal cortex and medial prefrontal cortex—were used to update subjects’ beliefs about the expertise of both humans and algorithms," Boorman explains. "These regions show what we call a ‘belief update signal.’" The update signal was stronger when subjects agreed with a correct “human” agent than with a correct algorithm, and stronger when subjects disagreed with an incorrect algorithm than with an incorrect “human” agent. This pattern shows that these brain regions are engaged in assigning credit or blame to others.

"The kind of learning strategies people use to judge others based on their performance has important implications when it comes to electing leaders, assessing students, choosing role models, judging defendants, and so on," Boorman notes. Knowing how this process happens in the brain, says Rangel, "may help us understand to what extent individual differences in our ability to assess the competency of others can be traced back to the functioning of specific brain regions."

Filed under decision making predictions brain activity learning prefrontal cortex neuroscience science

119 notes

Motor Excitability predicts Working Memory

People with high motor excitability have better working memory than people with low excitability. This was shown in a study conducted by scientists from the Transfacultary Research Platform at the University of Basel. Measuring motor excitability thus allows conclusions to be drawn about general cortical excitability – and about cognitive performance.

Working memory allows the temporary storage of information, such as memorizing a phone number for a short period of time. Animal studies have shown that working memory processes depend, among other factors, on the excitability of neurons in the prefrontal cortex. Moreover, there is evidence that motor neuronal excitability might be related to the neuronal excitability of other cortical regions. Researchers from the Psychiatric University Clinics (UPK Basel) and the Faculty of Psychology in Basel have now studied whether the excitability of the motor cortex correlates with working memory performance – and the results were positive.

«The motor cortical excitability can be easily studied with transcranial magnetic stimulation», says Nathalie Schicktanz, doctoral student and first author of the study. During this procedure, electromagnetic impulses of increasing intensity are applied over the motor cortex. In subjects with high motor excitability, even weak impulses are sufficient to make certain muscles – such as those of the hand – show a visible twitch.
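The intensity ramp Schicktanz describes can be sketched as a toy threshold search (hypothetical names and numbers; real TMS protocols use stricter criteria, such as requiring a response on 5 of 10 pulses):

```python
def find_motor_threshold(twitch_probability, intensities):
    """Step up stimulator intensity until a pulse reliably evokes a
    visible hand-muscle twitch; return that intensity.
    `twitch_probability(i)` models the subject's response."""
    for intensity in intensities:
        if twitch_probability(intensity) >= 0.5:
            return intensity
    return None

# Two hypothetical subjects: high motor excitability means the
# threshold is reached at a lower stimulator setting.
excitable = lambda i: 0.0 if i < 40 else 1.0
less_excitable = lambda i: 0.0 if i < 65 else 1.0
steps = range(30, 100, 5)
```

The resulting threshold is the per-subject excitability measure that the study relates to working memory.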

Conclusions for other cortical regions
In the present study, which included 188 healthy young subjects, the scientists were able to show that subjects with high motor excitability had better working memory performance than subjects with low excitability. «By measuring the excitability of the motor cortex, conclusions can be drawn as to the excitability of other cortical areas», says Schicktanz.

«The findings help us to understand the importance of neuronal excitability for cognitive processes in humans», adds Dr. Kyrill Schwegler, co-author of the study. The results might also have important clinical implications, as working memory deficits are a component of many neuropsychiatric disorders, such as schizophrenia or attention deficit hyperactivity disorder. In a next step, the scientists plan to study the relation between neuronal excitability and memory on a molecular level.

The study is part of a project led by Prof. Dominique de Quervain and Prof. Andreas Papassotiropoulos that uses transcranial magnetic stimulation to study cognitive functions in humans. The goal is to identify the neurobiological and molecular mechanisms of human memory.

Filed under working memory prefrontal cortex transcranial magnetic stimulation motor cortex neurons neuronal excitability

178 notes

Scientists improve human self-control through electrical brain stimulation

If you have ever said or done the wrong thing at the wrong time, you should read this. Neuroscientists at The University of Texas Health Science Center at Houston (UTHealth) and the University of California, San Diego, have successfully demonstrated a technique to enhance a form of self-control through a novel form of brain stimulation.


Study participants were asked to perform a simple behavioral task that required the braking/slowing of action – inhibition – in the brain. In each participant, the researchers first identified the specific location for this brake in the prefrontal region of the brain. Next, they increased activity in this brain region using stimulation with brief and imperceptible electrical charges. This led to increased braking – a form of enhanced self-control.

This proof-of-principle study appears in the Dec. 11 issue of The Journal of Neuroscience and its methods may one day be useful for treating attention deficit hyperactivity disorder (ADHD), Tourette’s syndrome and other severe disorders of self-control.

“There is a circuit in the brain for inhibiting or braking responses,” said Nitin Tandon, M.D., the study’s senior author and associate professor in The Vivian L. Smith Department of Neurosurgery at the UTHealth Medical School. “We believe we are the first to show that we can enhance this braking system with brain stimulation.”

A computer stimulated the prefrontal cortex exactly when braking was needed. This was done using electrodes implanted directly on the brain surface.
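That trial-locked timing can be sketched as a simple closed loop (hypothetical names; the real system delivers charges through implanted electrodes with millisecond precision):

```python
def closed_loop_stimulation(trial_events, stimulate):
    """Trigger stimulation only on trials where the task calls for
    braking (a stop signal), mirroring the computer-controlled
    timing described above. `stimulate` stands in for the hardware
    call that delivers the brief, imperceptible charge."""
    stimulated_trials = []
    for trial, event in enumerate(trial_events):
        if event == "stop_signal":
            stimulate(trial)
            stimulated_trials.append(trial)
    return stimulated_trials

# Toy run: pulses land only on the braking trials.
log = []
events = ["go", "stop_signal", "go", "go", "stop_signal"]
stimulated = closed_loop_stimulation(events, log.append)
```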

When the test was repeated with stimulation of a brain region outside the prefrontal cortex, there was no effect on behavior, showing the effect to be specific to the prefrontal braking system.

This was a double-blind study, meaning that participants and scientists did not know when or where the charges were being administered.

The method of electrical stimulation was novel in that it apparently enhanced prefrontal function, whereas other human brain stimulation studies mostly disrupt normal brain activity. This is the first published human study to enhance prefrontal lobe function using direct electrical stimulation, the researchers report.

The study involved four volunteers with epilepsy who agreed to participate while being monitored for seizures at the Mischer Neuroscience Institute at Memorial Hermann-Texas Medical Center (TMC). Stimulation enhanced braking in all four participants.

Tandon has been working on self-control research with researchers at the University of California, San Diego, for five years. “Our daily life is full of occasions when one must inhibit responses. For example, one must stop speaking when it’s inappropriate to the social context and stop oneself from reaching for extra candy,” said Tandon, who is a neurosurgeon with the Mischer Neuroscience Institute at Memorial Hermann-TMC. 

The researchers are quick to point out that while their results are promising, they do not yet point to the ability to improve self-control in general. In particular, this study does not show that direct electrical stimulation is a realistic option for treating human self-control disorders such as obsessive-compulsive disorder, Tourette’s syndrome and borderline personality disorder. Notably, direct electrical stimulation requires an invasive surgical procedure, which is now used only for the localization and treatment of severe epilepsy.

(Source: uth.edu)

Filed under brain stimulation electrical stimulation DBS prefrontal cortex neuroscience science

273 notes

Dads: How important are they?

Even with today’s technology, it still takes both a male and a female to make a baby. But is it important for both parents to raise that child? Many studies have outlined the value of a mother, but few have clearly defined the importance of a father – until now. New findings from the Research Institute of the McGill University Health Centre (RI-MUHC) show that the absence of a father during critical growth periods leads to impaired social and behavioural abilities in adults. This research, which was conducted using mice, was published today in the journal Cerebral Cortex. It is the first study to link father absenteeism with social attributes and to correlate these with physical changes in the brain.


“Although we used mice, the findings are extremely relevant to humans,” says senior author Dr. Gabriella Gobbi, a researcher of the Mental Illness and Addiction Axis at the RI-MUHC and an associate professor at the Faculty of Medicine at McGill University. “We used California mice which, like in some human populations, are monogamous and raise their offspring together.” 

“Because we can control their environment, we can equalize factors that differ between them,” adds first author, Francis Bambico, a former student of Dr. Gobbi at McGill and now a post-doc at the Centre for Addiction and Mental Health (CAMH) in Toronto. “Mice studies in the laboratory may therefore be clearer to interpret than human ones, where it is impossible to control all the influences during development.”

Dr. Gobbi and her colleagues compared the social behaviour and brain anatomy of mice that had been raised with both parents to those raised only by their mothers. Mice raised without a father had abnormal social interactions and were more aggressive than counterparts raised with both parents. These effects were stronger in female offspring than in their brothers. Females raised without fathers also had a greater sensitivity to the stimulant drug amphetamine.

“The behavioural deficits we observed are consistent with human studies of children raised without a father,” says Dr. Gobbi, who is also a psychiatrist at the MUHC. “These children have been shown to have an increased risk for deviant behaviour and in particular, girls have been shown to be at risk for substance abuse. This suggests that these mice are a good model for understanding how these effects arise in humans.” 

In pups deprived of fathers, Dr. Gobbi’s team also identified defects in the mouse prefrontal cortex – a part of the brain that helps control social and cognitive activity – that are linked to the behavioural deficits.

“This is the first time research findings have shown that paternal deprivation during development affects the neurobiology of the offspring,” says Dr. Gobbi. These results should incite researchers to look more deeply into the role of fathers during critical stages of growth and suggest that both parents are important in children’s mental health development.

(Source: muhc.ca)

Filed under prefrontal cortex social interaction paternal deprivation social behavior psychology neuroscience science

171 notes

Increased Brain Activity May Hold Key to Eliminating PTSD

In a new paper published in the current issue of Neuron, McLean Hospital and Harvard Medical School researchers report that increased activity in the medial prefrontal cortex (mPFC) of the brain is linked to decreased activity in the amygdala, the brain region involved in forming memories of frightening events.


According to author Vadim Bolshakov, PhD, director of the Cellular Neurobiology Laboratory at McLean and professor at Harvard Medical School, this finding is significant in that it could lead to better methods to prevent PTSD.

"A single exposure to something traumatic or scary can be enough to create a fear memory—causing someone to expect and be afraid in similar situations in the future," said Bolshakov. "What we’re seeing is that we may one day be able to prevent those fear memories."

Bolshakov and his colleagues tested their theory in animal models. The mice were divided into two groups: one was taught to fear an auditory stimulus, while in the other the fear memory was extinguished. Increased activation of the mPFC in the extinguished animals led to inhibition of the amygdala and significant decreases in fear responses.

"For example, if a sound ended with an extremely loud shriek, a subject would come to expect that scary noise at the end of the sound," explained Bolshakov. "What we found was when we suppressed the fear memory by decreasing activity in the amygdala, the subjects were not afraid of the end of the auditory stimulus any longer."

Bolshakov notes that this work could have serious implications for the treatment of a number of conditions including PTSD.

"While there is still a great deal of research that needs to be done before our work can be translated to clinical trials, what we are showing has the potential to ensure that individuals exposed to trauma were not haunted by the conditions surrounding their initial stressor."

(Source: mclean.harvard.edu)

Filed under fear prefrontal cortex PTSD brain activity amygdala memory psychology neuroscience science

207 notes

Scientists Pinpoint Cell Type and Brain Region Affected by Gene Mutations in Autism
A team led by UC San Francisco scientists has identified the disruption of a single type of cell – in a particular brain region and at a particular time in brain development – as a significant factor in the emergence of autism.
The finding, reported in the Nov. 21 issue of Cell, was made with techniques developed only within the last few years, and marks a turning point in autism spectrum disorders (ASDs) research.
Large-scale gene sequencing projects are revealing hundreds of autism-associated genes, and scientists have begun to leverage new methods to decipher how mutations in these disparate genes might converge to exert their effects in the developing brain.
The new research focused on just nine genes, those most strongly associated with autism in recent sequencing studies, and investigated their effects using precise maps of gene expression during human brain development.
Led by Jeremy Willsey, a graduate student in the laboratory of senior author Matthew W. State, MD, PhD, chair of the UCSF Department of Psychiatry, the group showed that this set of genes contributes to abnormalities in brain cells known as cortical projection neurons in the deepest layers of the developing prefrontal cortex during the middle period of fetal development.
Though a range of developmental scenarios in multiple brain regions are surely at work in ASDs, the ability to place these specific genetic mutations in one specific set of cells – among hundreds of cell types in the brain, and at a specific time point in human development – is a critical step in beginning to understand how autism comes about.

Scientists Pinpoint Cell Type and Brain Region Affected by Gene Mutations in Autism

A team led by UC San Francisco scientists has identified the disruption of a single type of cell – in a particular brain region and at a particular time in brain development – as a significant factor in the emergence of autism.

The finding, reported in the Nov. 21 issue of Cell, was made with techniques developed only within the last few years, and it marks a turning point in research on autism spectrum disorders (ASDs).

Large-scale gene sequencing projects are revealing hundreds of autism-associated genes, and scientists have begun to leverage new methods to decipher how mutations in these disparate genes might converge to exert their effects in the developing brain.

The new research focused on just nine genes, those most strongly associated with autism in recent sequencing studies, and investigated their effects using precise maps of gene expression during human brain development.

Led by Jeremy Willsey, a graduate student in the laboratory of senior author Matthew W. State, MD, PhD, chair of the UCSF Department of Psychiatry, the group showed that this set of genes contributes to abnormalities in brain cells known as cortical projection neurons in the deepest layers of the developing prefrontal cortex during the middle period of fetal development.

Though a range of developmental scenarios in multiple brain regions are surely at work in ASDs, the ability to place these specific genetic mutations in one specific set of cells – among hundreds of cell types in the brain, and at a specific time point in human development – is a critical step in beginning to understand how autism comes about.

“Given the small subset of autism genes we studied, I had no expectation that we would see the degree of spatiotemporal convergence that we saw,” said State, an international authority on the genetics of neurodevelopmental disorders.

“This strongly suggests that though there are hundreds of autism risk genes, the number of underlying biological mechanisms will be far fewer. This is a very important clue to advance precision medicine for autism toward the development of personalized and targeted therapies.”

Complex Genetic Architecture of ASDs

ASDs, marked by deficits in social interaction and language development, as well as by repetitive behaviors and/or restricted interests, are known to have a strong genetic component.

But these disorders are exceedingly complex, with considerable variation in symptoms and severity, and there does not appear to be a small collection of mutations widely shared among all affected individuals that always lead to ASDs.

Instead, with the rise of new sequencing methods over the past several years, researchers have identified many rare, non-inherited, spontaneous mutations that appear to act in combination with other genetic and non-genetic factors to cause ASDs. According to some estimates, mutations in as many as 1,000 genes could play a role in the development of these disorders.

While researchers have been heartened that specific genes are now rapidly being tied to ASDs, State said the complex genetic architecture of ASDs is also proving to be challenging.

“If there are 1,000 genes in the population that can contribute to risk in varying degrees and each has multiple developmental functions, it is not immediately obvious how to move forward to determine what is specifically related to autism. And without this, it is very difficult to think about how to develop new and better medications,” he said.

Focusing on Nine Genes

To begin to grapple with those questions, the researchers involved in the new study first selected as “seeds” the nine genes that have been most strongly tied to ASDs in recent sequencing research from their labs and others.

Importantly, these nine genes were chosen solely because of the statistical evidence for a relationship to ASDs, not because their function was known to fit a theory of the cause of ASDs. “We asked where the leads take us, without any preconceived idea about where they should take us,” said State.

The team then took advantage of BrainSpan, a digital atlas assembled by a large research consortium, including co-author Nenad Šestan, MD, PhD, and colleagues at Yale School of Medicine. Based on donated brain specimens, BrainSpan documents how and where genes are expressed in the human brain over the lifespan.

The scientists, who also included Bernie Devlin, PhD, of the University of Pittsburgh School of Medicine; Kathryn Roeder, PhD, of Carnegie Mellon University; and James Noonan, PhD, of Yale School of Medicine, used this tool to investigate when and where the nine seed genes join up with other genes in “co-expression networks” to wire up the brain or maintain its function.

The resulting co-expression networks were then tested using a variety of pre-determined criteria to see if they showed additional evidence of being related to ASDs. Once this link was established, the authors were then able to home in on where in the brain and when in development these networks were localizing, which proved to be in cortical projection neurons found in layers 5 and 6 of the prefrontal cortex, and during a time period spanning 10 to 24 weeks after conception. Notably, a study using different methods and published in the same issue of Cell also implicates cortical projection neurons in ASDs.
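As a loose illustration of how seed-based co-expression analysis works, the sketch below builds a hypothetical gene-by-sample expression matrix (standing in for BrainSpan-style data, where columns would be brain region × developmental window) and finds genes whose expression tracks any of nine "seed" genes. Every number, the correlation threshold, and the planted shared trajectory are invented for the demo, not taken from the study.

```python
import numpy as np

# Hypothetical illustration of seed-based co-expression analysis.
# Rows: genes; columns: brain samples (region x developmental window),
# loosely mimicking a BrainSpan-style expression matrix.
rng = np.random.default_rng(0)
n_genes, n_samples = 200, 50
expr = rng.normal(size=(n_genes, n_samples))

# Plant a shared expression trajectory in the first 9 "seed" genes and
# 20 "partner" genes, so they co-vary across samples.
trajectory = rng.normal(size=n_samples)
expr[:29] += 2.0 * trajectory

seeds = np.arange(9)  # indices of the nine seed genes

def coexpression_partners(expr, seeds, r_threshold=0.7):
    """Return genes whose expression correlates strongly with any seed gene."""
    z = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)
    corr = z @ z.T / expr.shape[1]  # gene-by-gene Pearson correlations
    partners = set()
    for s in seeds:
        hits = np.where(np.abs(corr[s]) >= r_threshold)[0]
        partners.update(int(g) for g in hits if g not in seeds)
    return sorted(partners)

partners = coexpression_partners(expr, seeds)
print(len(partners), "genes co-expressed with the seeds")
```

In the real analysis the resulting networks were then tested against independent genetic evidence; here the "network" is simply the set of genes passing the correlation threshold.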

“To see these gene networks as highly connected as they are, as convergent as they are, is quite amazing,” said Willsey. “An important outcome of this study is that for the first time it gives us the ability to design targeted experiments based on a strong idea about when and where in the brain we should be looking at specific genes with specific mutations.”

In addition to its importance in ASD research, State sees the new work as a reflection of the tremendous value of “big science” efforts, such as large-scale collaborative genomic studies and the creation of foundational resources such as the BrainSpan atlas.

“We couldn’t have done this even two years ago,” State said, “because we didn’t have the key ingredients: a set of unbiased autism genes that we have confidence in, and a map of the landscape of the developing human brain. This work combines large-scale ‘-omics’ data sets to pivot into a deeper understanding of the relationship between complex genetics and biology.”

Filed under autism prefrontal cortex cortical projection neurons neurons genetics neuroscience science

97 notes

New hope for heavy smokers after study finds zapping their brains with magnetic pulses made it easier for them to quit
Heavy smokers could be helped to kick the habit by having their brains zapped with electromagnetic pulses, new research suggests.
Repeated use of a high frequency magnet to stimulate the brain helps some smokers quit for up to six months after treatment, an Israeli study found.
The smokers had already tried a range of treatments, from patches to psychotherapy, raising hopes that brain stimulation could be an effective alternative for those who had so far failed to kick the habit.
Abraham Zangen of Ben Gurion University told the annual meeting of the Society for Neuroscience in San Diego, California, that more than half the smokers given high-frequency magnetic pulses quit.
More than a third were still abstaining six months on.
'Our research shows us that we may actually be able to undo some of the changes to the brain caused by chronic smoking,' said Dr Zangen.
'We know that many smokers want to quit or smoke less and this could help put a dent in the number one cause of preventable deaths.'
Dr Zangen’s team recruited 115 heavy smokers aged between 21 and 70 who were interested in quitting but who had failed in doing so on at least two previous attempts.
They then split the smokers into three groups, giving them either high-frequency repetitive transcranial magnetic stimulation (rTMS), low-frequency rTMS, or a placebo treatment for 13 days.
Repetitive transcranial magnetic stimulation is a non-invasive technique that uses magnetic fields to stimulate large populations of neurons in the brain.
The researchers focused on stimulating the prefrontal cortex and the insula, which are the two brain areas associated with nicotine addiction.
Before each session, Dr Zangen got one of his PhD students to light a cigarette and take a drag in front of half the smokers in each group to awaken their cravings.
This was to make sure the smokers’ attention was directed at their addiction and not some other craving, said Dr Zangen.
The results were striking. Nearly half (44 per cent) of the smokers who received the cue before their rTMS sessions quit immediately after the 13-day course, with 33 per cent still off cigarettes six months later.
Overall, participants who received high frequency rTMS smoked less and were more likely to quit, with success rates four times that of the low frequency group and more than six times greater than the placebo group.
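For a sense of the arithmetic behind those ratios, here is a small Python sketch. The per-group counts below are hypothetical numbers chosen only to echo the reported quit rates (the study enrolled about 115 smokers across three arms); they are not figures from the paper.

```python
# Hypothetical group counts for illustration; chosen only to echo the
# reported quit rates, not taken from the study itself.
groups = {
    "high-frequency rTMS": {"n": 39, "quit_6mo": 13},  # ~33% quit at six months
    "low-frequency rTMS":  {"n": 38, "quit_6mo": 3},
    "placebo":             {"n": 38, "quit_6mo": 2},
}

rates = {name: g["quit_6mo"] / g["n"] for name, g in groups.items()}
baseline = rates["placebo"]
for name, rate in rates.items():
    print(f"{name}: {rate:.0%} quit at six months "
          f"({rate / baseline:.1f}x placebo)")
```

With these assumed counts the high-frequency arm comes out at roughly four times the low-frequency rate and more than six times the placebo rate, matching the relative success rates described above.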
Dr Zangen’s team are now planning a much larger trial involving smokers in several countries, which is set to start in the next few months.
He told The Guardian: ‘It’s quite easy to quit for a few days, or even for a few weeks, but if we can help people quit for more than three months, then they are actually quite unlikely to relapse later on.’
Dr Zangen disclosed that he has a financial interest in the company that provided the transcranial magnetic stimulation equipment used in the study.

Filed under smoking nicotine addiction prefrontal cortex insula transcranial magnetic stimulation Neuroscience 2013 neuroscience science

165 notes

Researchers surprised to find how neural circuits zero in on the specific information needed for decisions
While eating lunch, you notice an insect buzzing around your plate. Its color and motion could both influence how you respond. If the insect was yellow and black you might decide it was a bee and move away. Conversely, you might simply be annoyed at the buzzing motion and shoo the insect away. You perceive both color and motion, and decide based on the circumstances. Our brains make such contextual decisions in a heartbeat. The mystery is how.
In an article published Nov. 7 in the journal Nature, a team of Stanford neuroscientists and engineers delve into this decision-making process and report some findings that confound the conventional wisdom.
Until now, neuroscientists have believed that decisions of this sort involved two steps: one group of neurons that performed a gating function to ascertain whether motion or color was most relevant to the situation and a second group of neurons that considered only the sensory input relevant to making a decision under the circumstances.
But in a study that combined brain recordings from trained monkeys and a sophisticated computer model based on that biological data, Stanford neuroscientist William Newsome and three co-authors discovered that the entire decision-making process may occur in a localized region of the prefrontal cortex.
In this region of the brain, located in the frontal lobes just behind the forehead, they found that color and motion signals converged in a specific circuit of neurons. Based on their experimental evidence and computer simulations, the scientists hypothesized that these neurons act together to make two snap judgments: whether color or motion is the most relevant sensory input in the current context and what action to take.
 “We were quite surprised,” said Newsome, the Harman Family Provostial Professor at the Stanford School of Medicine and lead author. 
He and first author Valerio Mante, a former Stanford neurobiologist now at the University of Zurich and the Swiss Federal Institute of Technology, had begun the experiment expecting to find that the irrelevant signal, whether color or motion, would be gated out of the circuit long before the decision-making neurons went into action.
“What we saw instead was this complicated mix of signals that we could measure but whose meaning and underlying mechanism we couldn’t understand,” Newsome said. “These signals held information about the color and motion of the stimulus, which stimulus dimension was most relevant and the decision that the monkeys made. But the signals were profoundly mixed up at the single neuron level. We decided there was a lot more we needed to learn about these neurons and that the key to unlocking the secret might lie in a population level analysis of the circuit activity.”
To solve this brain puzzle the neurobiologists began a cross-disciplinary collaboration with Krishna Shenoy, a professor of electrical engineering at Stanford, and David Sussillo, co-first author on the paper and a postdoctoral scholar in Shenoy’s lab.
Sussillo created a software model to simulate how these neurons worked. The idea was to build a model sophisticated enough to mimic the decision-making process but easier to study than taking repeated electrical readings from a brain.
The general model architecture they used is called a recurrent neural network: a set of software modules designed to accept inputs and perform tasks similar to how biological neurons operate. The scientists designed this artificial neural network using computational techniques that enabled the software model to make itself more proficient at decision-making over time.
“We challenged the artificial system to solve a problem analogous to the one given to the monkeys,” Sussillo explained. “But we didn’t tell the neural network how to solve the problem.”
As a result, once the artificial network learned to solve the task, the scientists could study the model to develop inferences about how the biological neurons might be working.
The entire process was grounded in the biological experiments.
The neuroscientists trained two macaque monkeys to view a random-dot visual display that had two different features – motion and color.  For any given presentation, the dots could move to the right or left, and the color could be red or green. The monkeys were taught to use sideways glances to answer two different questions depending on the currently instructed “rule” or context. Were there more red or green dots (ignore the motion)? Or were the dots moving to the left or right (ignore the color)?
Eye-tracking instruments recorded the glances, or saccades, that the monkeys used to register their responses. Their answers were correlated with recordings of neuronal activity taken directly from an area in the prefrontal cortex known to control saccadic eye movements.
The neuroscientists collected 1,402 such experimental measurements; on each trial the monkeys were asked one question or the other. The idea was to obtain brain recordings from the moment the monkeys saw the visual cue that established the context (either the red/green or the left/right question) through the decision the animal made about color or direction of motion.
It was the puzzling mish-mash of signals in the brain recordings from these experiments that prompted the scientists to build the recurrent neural network as a way to rerun the experiment, in a simulated way, time and time again. 
As the four researchers became confident that their software simulations accurately mirrored the actual biological behavior, they studied the model to learn exactly how it solved the task. This allowed them to form a hypothesis about what was occurring in that patch of neurons in the prefrontal cortex where perception and decision occurred. 
“The idea is really very simple,” Sussillo explained.
Their hypothesis revolves around two mathematical concepts: a line attractor and a selection vector.
The entire group of neurons being studied received sensory data about both the color and the motion of the dots.
The line attractor is a mathematical representation for the amount of information that this group of neurons was getting about either of the relevant inputs, color or motion.
The selection vector represented how the model responded when the experimenters flashed one of the two questions: red or green, left or right?
What the model showed was that when the question pertained to color, the selection vector directed the artificial neurons to accept color information while ignoring the irrelevant motion information. Color data became the line attractor. After a split second these neurons registered a decision, choosing the red or green answer based on the data they were supplied.
If the question was about motion, the selection vector directed motion information to the line attractor, and the artificial neurons chose left or right.
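The line-attractor and selection-vector idea can be caricatured in a few lines of Python. This toy model is not the paper's trained recurrent network, and every parameter in it is invented; it simply shows the mechanism described above: the context picks which evidence stream is projected onto the integration axis, and the sign of the accumulated value is the choice.

```python
import numpy as np

# Toy caricature of the selection-vector / line-attractor hypothesis
# (invented parameters; NOT the paper's trained recurrent network).
rng = np.random.default_rng(1)

def decide(color_strength, motion_strength, context, n_steps=100):
    """Integrate only the context-relevant evidence; return +1 or -1."""
    # The selection vector projects the relevant input onto the
    # integration axis; the irrelevant stream contributes nothing.
    selection = np.array([1.0, 0.0]) if context == "color" else np.array([0.0, 1.0])
    position = 0.0  # location along the line attractor (the memory of evidence)
    for _ in range(n_steps):
        evidence = np.array([color_strength, motion_strength]) + rng.normal(0.0, 0.5, 2)
        position += selection @ evidence  # attractor holds the running total
    return 1 if position > 0 else -1

# The identical stimulus yields opposite choices depending on context:
print(decide(1.0, -1.0, context="color"))   # color evidence is positive -> +1
print(decide(1.0, -1.0, context="motion"))  # motion evidence is negative -> -1
```

The real finding is subtler: in the recorded circuit, selection and integration are distributed across one mixed population of neurons rather than handled by separate gating and decision stages.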
“The amazing part is that a single neuronal circuit is doing all of this,” Sussillo says. “If our model is correct, then almost all neurons in this biological circuit appear to be contributing to almost all parts of the information selection and decision-making mechanism.”
Newsome put it like this: “We think that all of these neurons are interested in everything that’s going on, but they’re interested to different degrees. They’re multitasking like crazy.”
Researchers who were not directly involved in the work have commented on the paper.
“This is a spectacular example of excellent experimentation combined with clever data analysis and creative theoretical modeling,” said Larry Abbott, Co-Director of the Center for Theoretical Neuroscience and the William Bloor Professor, Neuroscience, Physiology & Cellular Biophysics, Biological Sciences at Columbia University.
Christopher Harvey, a professor of neurobiology at Harvard Medical School, said the paper “provides major new hypotheses about the inner-workings of the prefrontal cortex, which is a brain area that has frequently been identified as significant for higher cognitive processes but whose mechanistic functioning has remained mysterious.”
The Stanford scientists are now designing a new biological experiment to ascertain whether the interplay between selection vector and line attractor, which they deduced from their software model, can be measured in actual brain signals.
 “The model predicts a very specific type of neural activity under very specific circumstances,” Sussillo said. “If we can stimulate the prefrontal cortex in the right way, and then measure this activity, we will have gone a long way to proving that the model mechanism is indeed what is happening in the biological circuit.”

Researchers surprised to find how neural circuits zero in on the specific information needed for decisions

While eating lunch, you notice an insect buzzing around your plate. Its color and motion could both influence how you respond. If the insect was yellow and black you might decide it was a bee and move away. Conversely, you might simply be annoyed at the buzzing motion and shoo the insect away. You perceive both color and motion, and decide based on the circumstances. Our brains make such contextual decisions in a heartbeat. The mystery is how.

In an article published Nov. 7 in the journal Nature, a team of Stanford neuroscientists and engineers delve into this decision-making process and report some findings that confound the conventional wisdom.

Until now, neuroscientists have believed that decisions of this sort involved two steps: one group of neurons that performed a gating function to ascertain whether motion or color was most relevant to the situation and a second group of neurons that considered only the sensory input relevant to making a decision under the circumstances.

But in a study that combined brain recordings from trained monkeys and a sophisticated computer model based on that biological data, Stanford neuroscientist William Newsome and three co-authors discovered that the entire decision-making process may occur in a localized region of the prefrontal cortex.

In this region of the brain, located in the frontal lobes just behind the forehead, they found that color and motion signals converged in a specific circuit of neurons. Based on their experimental evidence and computer simulations, the scientists hypothesized that these neurons act together to make two snap judgments: whether color or motion is the most relevant sensory input in the current context and what action to take.

 “We were quite surprised,” said Newsome, the Harman Family Provostial Professor at the Stanford School of Medicine and lead author. 

He and first author Valerio Mante, a former Stanford neurobiologist now at the University of Zurich and the Swiss Federal Institute of Technology, had begun the experiment expecting to find that the irrelevant signal, whether color or motion, would be gated out of the circuit long before the decision-making neurons went into action.

“What we saw instead was this complicated mix of signals that we could measure but whose meaning and underlying mechanism we couldn’t understand,” Newsome said. “These signals held information about the color and motion of the stimulus, which stimulus dimension was most relevant and the decision that the monkeys made. But the signals were profoundly mixed up at the single neuron level. We decided there was a lot more we needed to learn about these neurons and that the key to unlocking the secret might lie in a population level analysis of the circuit activity.”

To solve this brain puzzle the neurobiologists began a cross-disciplinary collaboration with Krishna Shenoy, a professor of electrical engineering at Stanford, and David Sussillo, co-first author on the paper and a postdoctoral scholar in Shenoy’s lab.

Sussillo created a software model to simulate how these neurons worked. The idea was to build a model sophisticated enough to mimic the decision-making process but easier to study than taking repeated electrical readings from a brain.

The general model architecture they used is called a recurrent neural network: a set of software modules designed to accept inputs and perform tasks similar to how biological neurons operate. The scientists designed this artificial neural network using computational techniques that enabled the software model to make itself more proficient at decision-making over time.

“We challenged the artificial system to solve a problem analogous to the one given to the monkeys,” Sussillo explained. “But we didn’t tell the neural network how to solve the problem.”

As a result, once the artificial network learned to solve the task, the scientists could study the model to develop inferences about how the biological neurons might be working.

The entire process was grounded in the biological experiments.

The neuroscientists trained two macaque monkeys to view a random-dot visual display that had two different features – motion and color.  For any given presentation, the dots could move to the right or left, and the color could be red or green. The monkeys were taught to use sideways glances to answer two different questions depending on the currently instructed “rule” or context. Were there more red or green dots (ignore the motion)? Or were the dots moving to the left or right (ignore the color)?

Eye-tracking instruments recorded the glances, or saccades, that the monkeys used to register their responses. Their answers were correlated with recordings of neuronal activity taken directly from an area in the prefrontal cortex known to control saccadic eye movements.

The neuroscientists collected 1,402 such experimental measurements. Each time the monkeys were asked one or the other question. The idea was to obtain brain recordings at the moment when the monkeys saw a visual cue that established the context (either the red/green or left/right question) and what decision the animal made regarding color or direction of motion.

It was the puzzling mish-mash of signals in the brain recordings from these experiments that prompted the scientists to build the recurrent neural network as a way to rerun the experiment, in a simulated way, time and time again. 

As the four researchers became confident that their software simulations accurately mirrored the actual biological behavior, they studied the model to learn exactly how it solved the task. This allowed them to form a hypothesis about what was occurring in that patch of neurons in the prefrontal cortex where perception and decision occurred. 

“The idea is really very simple,” Sussillo explained.

Their hypothesis revolves around two mathematical concepts: a line attractor and a selection vector.

The entire group of neurons being studied received sensory data about both the color and the motion of the dots.

The line attractor is a mathematical representation for the amount of information that this group of neurons was getting about either of the relevant inputs, color or motion.

The selection vector represented how the model responded when the experimenters flashed one of the two questions: red or green, left or right?

What the model showed was that when the question pertained to color, the selection vector directed the artificial neurons to accept color information while ignoring the irrelevant motion information. Color data became the line attractor. After a split second these neurons registered a decision, choosing the red or green answer based on the data they were supplied.

If question was about motion, the selection vector directed motion information to the line attractor, and the artificial neurons chose left or right.

“The amazing part is that a single neuronal circuit is doing all of this,” Sussillo says. “If our model is correct, then almost all neurons in this biological circuit appear to be contributing to almost all parts of the information selection and decision-making mechanism.”

Newsome put it like this: “We think that all of these neurons are interested in everything that’s going on, but they’re interested to different degrees. They’re multitasking like crazy.”

Other researchers who are aware of the work but were not directly involved are commenting on the paper.

“This is a spectacular example of excellent experimentation combined with clever data analysis and creative theoretical modeling,” said Larry Abbott, Co-Director of the Center for Theoretical Neuroscience and the William Bloor Professor, Neuroscience, Physiology & Cellular Biophysics, Biological Sciences at Columbia University.

Christopher Harvey, a professor of neurobiology at Harvard Medical School, said the paper “provides major new hypotheses about the inner-workings of the prefrontal cortex, which is a brain area that has frequently been identified as significant for higher cognitive processes but whose mechanistic functioning has remained mysterious.”

The Stanford scientists are now designing a new biological experiment to ascertain whether the interplay between selection vector and line attractor, which they deduced from their software model, can be measured in actual brain signals.

“The model predicts a very specific type of neural activity under very specific circumstances,” Sussillo said. “If we can stimulate the prefrontal cortex in the right way, and then measure this activity, we will have gone a long way to proving that the model mechanism is indeed what is happening in the biological circuit.”

Filed under prefrontal cortex neural networks brain mapping neurons decision making neuroscience science

243 notes

Antidepressant drug induces a juvenile-like state in neurons of the prefrontal cortex

Brain development and maturation have long been thought to be a one-way process, in which plasticity diminishes with age; the possibility that the adult brain can revert to a younger, more plastic state has rarely been considered. In a paper appearing November 4 in the online open-access journal Molecular Brain, Dr. Tsuyoshi Miyakawa and his colleagues from Fujita Health University show that chronic administration of one of the most widely used antidepressants, fluoxetine (FLX, a selective serotonin reuptake inhibitor also known by trade names such as Prozac, Sarafem, and Fontex), can induce a juvenile-like state in specific types of neurons in the prefrontal cortex of adult mice.

In their study, FLX-treated adult mice showed reduced expression of parvalbumin and perineuronal nets, which are molecular markers of maturation expressed in a certain group of mature neurons in adults, and increased expression in the prefrontal cortex of an immature marker that typically appears in developing juvenile brains. These findings suggest the possibility that certain types of adult neurons in the prefrontal cortex can partially regain a youth-like state; the authors termed this state induced youth, or iYouth. These researchers, as well as other groups, had previously reported similar effects of FLX in the hippocampal dentate gyrus, basolateral amygdala, and visual cortex, which were associated with increased neural plasticity in certain types of neurons. This study is the first to report “iYouth” in the prefrontal cortex, a brain region critically involved in functions such as working memory, decision-making, personality expression, and social behavior, as well as in psychiatric disorders related to deficits in these functions.

Network dysfunction in the prefrontal cortex and limbic system, including the hippocampus and amygdala, is known to be involved in the pathophysiology of depressive disorders. Reversion to a youth-like state may mediate some of the therapeutic effects of FLX by restoring neural plasticity in these regions. On the other hand, some undesirable aspects of FLX-induced pseudo-youth may play a role in certain behavioral effects associated with FLX treatment, such as aggression, violence, and psychosis, which have recently received attention as adverse effects of FLX. Interestingly, expression of the same molecular markers of maturation discussed in this study has been reported to be decreased in the prefrontal cortex of postmortem brains of patients with schizophrenia. This raises the possibility that some of FLX’s adverse effects may be attributable to iYouth in the same type of neurons in this region. Basic knowledge here is currently lacking, and several questions remain unanswered: What are the molecular and cellular mechanisms underlying iYouth? What are the differences between actual youth and iYouth? Is iYouth good or bad? Future studies answering these questions could potentially revolutionize the prevention and/or treatment of various neuropsychiatric disorders and help improve quality of life for an aging population.

(Source: eurekalert.org)

Filed under antidepressants neurons prefrontal cortex fluoxetine neuroscience science

152 notes

Researcher Reveals the Brain Connections Underlying Accurate Introspection

The human mind is not only capable of cognition and registering experiences but also of being introspectively aware of these processes. Until now, scientists have not known whether such introspection was a single skill or dependent on the object of reflection. Also unclear was whether the brain housed a single system for reflecting on experience or required multiple systems to support different types of introspection.

A new study by UC Santa Barbara graduate student Benjamin Baird and colleagues suggests that the ability to accurately reflect on perceptual experience and the ability to accurately reflect on memories are uncorrelated, indicating that they are distinct introspective skills. The findings appear in the Journal of Neuroscience.

The researchers used classic perceptual decision and memory retrieval tasks in tandem with functional magnetic resonance imaging to determine connectivity to regions in the front tip of the brain, commonly referred to as the anterior prefrontal cortex. The study tested a person’s ability to reflect on his or her perception and memory and then examined how individual variation in each of these capacities was linked to the functional connections of the medial and lateral parts of the anterior prefrontal cortex.

"Our results suggest that metacognitive or introspective ability may not be a single thing," Baird said. "We actually find a behavioral dissociation between the two metacognitive abilities across people, which suggests that you can be good at reflecting on your memory but poor at reflecting on your perception, or vice versa."

The newly published research adds to the literature describing the role of the medial and lateral areas of the anterior prefrontal cortex in metacognition and suggests that specific subdivisions of this area may support specific types of introspection. The findings of Baird’s team demonstrate that the ability to accurately reflect on perception is associated with enhanced connectivity between the lateral region of the anterior prefrontal cortex and the anterior cingulate, a region involved in coding uncertainty and errors of performance.

In contrast, the ability to accurately reflect on memory is linked to enhanced connectivity between the medial anterior prefrontal cortex and two areas of the brain: the precuneus and the lateral parietal cortex, regions prior work has shown to be involved in coding information pertaining to memories.

The experiment assessed the metacognitive abilities of 60 participants at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, Germany, where Baird was a visiting researcher. The perceptual decision task consisted of visual displays with six circles of alternating light and dark vertical bars, called Gabor gratings, arranged around a focal point. Participants were asked to identify whether the first or second display featured one of the six areas with a slight tilt, not always an easy determination to make.

A classic in the psychology literature, the memory retrieval task consisted of two parts. First, participants were shown a list of 145 words. They were then shown a second set of words and asked to distinguish those they had seen previously. After each stimulus in both the perceptual decision and the memory retrieval tasks, participants rated their confidence in the accuracy of their responses on a scale of 1 (low confidence) to 6 (high confidence).

"Part of the novelty of this study is that it is the first to examine how connections between different regions of the brain support metacognitive processes," Baird said. "Also, prior means of computing metacognitive accuracy have been shown to be confounded by all kinds of things, like how well you do the primary task or your inherent bias toward high or low confidence.

"Using these precise measures, we’re now beginning to drill down and see how different types of introspection are actually housed in the real human brain," Baird concluded. "So it’s pretty fascinating from that perspective."
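Baird's point about confounded measures is easier to see against a concrete baseline. One common naive measure of metacognitive accuracy is the type-2 ROC area: how well trial-by-trial confidence ratings (here, the 1-to-6 scale) discriminate a participant's own correct from incorrect responses. The sketch below illustrates that naive baseline, not the bias-corrected measure used in the study; the function name and example data are invented:

```python
def type2_auroc(confidence, correct):
    """Probability that a randomly chosen correct trial received higher
    confidence than a randomly chosen incorrect trial (ties count 0.5).
    Assumes at least one correct and one incorrect trial."""
    hits = [c for c, ok in zip(confidence, correct) if ok]
    misses = [c for c, ok in zip(confidence, correct) if not ok]
    wins = sum((h > m) + 0.5 * (h == m) for h in hits for m in misses)
    return wins / (len(hits) * len(misses))

# Perfect introspection: high confidence only on correct trials.
conf = [6, 6, 5, 2, 1, 2]
acc = [True, True, True, False, False, False]
print(type2_auroc(conf, acc))  # -> 1.0
```

A score of 1.0 means confidence perfectly tracks accuracy and 0.5 means it carries no information; the catch, as the article notes, is that simple measures like this are contaminated by primary-task performance and by an individual's overall confidence bias, which motivated the more precise measures Baird mentions.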

Filed under prefrontal cortex brain mapping neuroimaging metacognition psychology neuroscience science
