Neuroscience

Articles and news from the latest research reports.

109 notes

The brain’s response to sweets may indicate risk for development of alcoholism

Several human and animal studies have shown a relationship between a preference for highly sweet tastes and alcohol use disorders. Furthermore, the brain mechanisms of sweet-taste responses may share common neural pathways with responses to alcohol and other drugs. A new study using functional magnetic resonance imaging (fMRI) has found that recent drinking is related to the orbitofrontal-region brain response to an intensely sweet stimulus, a brain response that may serve as an important phenotype, or observable characteristic, of alcoholism risk.

Results will be published in the December 2013 issue of Alcoholism: Clinical & Experimental Research and are currently available at Early View.

"It has long-been known that animals bred to prefer alcohol also drink considerably greater quantities of sweetened water than do animals without this selective breeding for alcohol preference," explained David A. Kareken, deputy director of the Indiana Alcohol Research Center, a professor in the department of neurology at Indiana University School of Medicine, and corresponding author for the study. "More recently, it has become clear that animals bred to prefer the artificial sweetener, saccharin, also drink more alcohol. Although the data in humans are somewhat more variable, some studies do show that alcoholics, or even non-alcoholics with a family history of alcoholism, have a preference for unusually sweet tastes. Thus, while the precise reasons remain unclear, there does seem to be significant evidence suggesting some link between the rewarding properties of both sweet tastes and alcohol."

Kareken added that this is the first study to examine the extent to which regions of the brain’s reward system, as they respond to an intensely sweet taste, are related to human drinking patterns.

Kareken and his colleagues recruited 16 right-handed, non-treatment-seeking, healthy volunteers (12 men, 4 women) with a mean age of 26 years from the community. All participants underwent a taste test using a range of sucrose concentrations, and their blood oxygenation level-dependent (BOLD) activation was measured during an fMRI scan while they received small squirts of either water or an intensely sweet solution of sugar in water. All were asked about their drinking patterns.

"Our study was designed to determine which brain areas responded to sweet taste – as compared to plain water – and the extent to which these brain responses were related to subjects’ binge-drinking patterns, the number of alcoholic drinks subjects consumed per day when drinking," explained Kareken.

"In addition to ‘activating’ the brain’s gustatory or taste circuits, the sugared water also activated key elements of what neuroscientists consider to be part of the brain’s reward system, including the ventral striatum, amygdala, and parts of the orbitofrontal cortex – the inferior frontal lobe surface just above the eyes – that respond to ingested rewards," Kareken said. "We refer to these as ‘primary’ rewards, being distinct from secondary rewards, like money, which can be used to obtain primary rewards."

What the researchers found was that the response to this intensely sweet taste in the left orbitofrontal area correlated significantly with subjects’ drinking patterns.

"Specifically, the trend was such that those who drank more alcohol on drinking days had stronger left orbitofrontal responses to the intensely sweet water," said Kareken. "Subjects’ subjectively rated liking of the sweetened water also contributed to this relationship, so that both the brain response itself, as well as liking of the sugared water, collectively correlated with drinking behavior."

While previous human and animal research has noted this association between preferences for both sweet tastes and alcohol intoxication, Kareken believes that this is the first study to examine the human brain mechanism behind this association.

"While much more research needs to be done to truly understand the commonalities between sweet-liking and alcoholism, and while alcoholism itself is likely the product of several mechanisms, our findings may implicate a particular brain region that is more generally involved in coding for the value of ‘primary’ rewards such as pleasures," he said. "In a more practical sense, the findings are compelling evidence that the brain response to an intensely sweet taste may be used in future research to test for differences in the reward circuits of those at risk for alcoholism. This may be particularly useful since alcohol itself is not an easy drug to work with in this kind of human imaging, and since alcohol exposure is not ethically appropriate for use in all at-risk subjects, or in subjects trying to abstain from drinking."

(Source: eurekalert.org)

Filed under alcoholism brain response sweet taste reward system orbitofrontal cortex neuroscience science

96 notes

ucsdhealthsciences:

Women Suffer Higher Rates of Decline in Aging and Alzheimer’s Disease
The rates of regional brain loss and cognitive decline caused by aging and the early stages of Alzheimer’s disease (AD) are higher for women and for people with a key genetic risk factor for AD, say researchers at the University of California, San Diego School of Medicine in a study published online July 4 in the American Journal of Neuroradiology.
APOE ε4 – which codes for a protein involved in binding lipids, or fats, in the lymphatic and circulatory systems – was already documented as the strongest known genetic risk factor for sporadic AD, the most common form of the disease. But the connection between a person’s sex and AD has been less well recognized, according to the UC San Diego scientists.
“APOE ε4 has been known to lower the age of onset and increase the risk of getting the disease,” said the study’s first author Dominic Holland, PhD, a researcher in the Department of Neurosciences at UC San Diego School of Medicine. “Previously we showed that the lower the age, the higher the rates of decline in AD. So it was important to examine the differential effects of age and APOE ε4 on rates of decline, and to do this across the diagnostic spectrum for multiple clinical measures and brain regions, which had not been done before.”
The scientists evaluated 688 men and women over the age of 65 participating in the Alzheimer’s Disease Neuroimaging Initiative, a longitudinal, multi-institution study to track the progression of AD and its effects upon the structures and functions of the brain. They found that women with mild cognitive impairment (a condition precursory to AD diagnosis) experienced higher rates of cognitive decline than men; and that all women, regardless of whether or not they showed signs of dementia, experienced greater regional brain loss over time than did men. The magnitude of the sex effect was as large as that of the APOE ε4 allele.
“Assuming larger population-based samples reflect the higher rates of decline for women than men, the question becomes what is so different about women,” said Holland. Hormonal differences or changes seem an obvious place to start, but Holland said this is largely unknown territory – at least regarding AD.
“Another important finding of this study is that men and women did not differ in the level of biomarkers of Alzheimer’s disease pathology,” said co-author Linda McEvoy, PhD, an associate professor in the UCSD Department of Radiology. “This suggests that brain volume loss in women may also be caused by factors other than Alzheimer’s disease, or that in women, these pathologies are more toxic. We clearly need more research on how an individual’s sex affects AD pathogenesis.”
Holland acknowledged that the paper likely raises more questions than it answers. “There are many factors that may affect the sex differences we observed, such as whether the women in this study may have had higher rates of diabetes or insulin resistance than the men. We also do not know how the use of hormone replacement therapy, reproductive history or years since menopause may have affected these differences. All these issues need to be examined. There is no prevailing theory.”
More here

57 notes

Stroke Recovery Theories Challenged By New Studies Looking at Brain Lesions, Bionic Arms
Stroke survivors left weakened or partially paralyzed may be able to regain more arm and hand movement than their doctors realize, say experts at The Ohio State University Wexner Medical Center who have just published two new studies evaluating stroke outcomes.
One study analyzed the correlation between long-term arm impairment after stroke and the size of brain lesions caused by patients’ strokes – a visual measure often used by doctors to determine rehabilitation therapy type and duration. The other study compared the efficacy of a portable robotics-assisted therapy program with a traditional program to improve arm function in patients who had experienced a stroke as long as six years ago.
“These studies were looking at two entirely different aspects of a stroke, yet they both suggest that stroke patients can indeed regain function years and years after the initial event,” said Stephen Page, PhD, OTR/L, author of both studies and associate professor of Health and Rehabilitation Sciences in Ohio State’s College of Medicine. “Unfortunately, we know that this is not a message that many patients and especially their clinicians may be getting, so the patients may not be reaching their true potential for recovery.”
Size doesn’t matter
Clinicians frequently tell patients that the larger the area of the brain affected by their stroke, the worse their outcome will be. However, in a lead article in the Archives of Physical Medicine and Rehabilitation, Page’s research team found that there was no relationship between the size of stroke lesions and recovery of arm function in 139 stroke survivors. On average, study participants had experienced a stroke five years earlier.
“Historically, lesion size has been thought to influence recovery, but we didn’t find that to be the case when looking at regaining arm and hand movement,” said Page, who also runs Ohio State’s B.R.A.I.N Lab, a research group dedicated to developing approaches to restore function after disabling injuries and diseases. “This has important implications because we know clinicians look closely at lesion volume and may make decisions about the type and duration of therapy, and that some may communicate likelihood for recovery to patients based on this size. Many people think the window for therapy is roughly six months, but we think it’s much longer.”
Page agrees that the first six months after a stroke may represent important healing time for the brain, but that “retraining” it with occupational therapy can potentially be helpful at any time after the stroke. He says that his findings support other theories that the health of remaining brain tissue influences recovery much more than lesion size.
Although there are many studies that have identified a relationship between stroke lesion size and overall neurological function, Page’s study is the first to specifically look at lesion size and upper extremity outcomes.
Robotic arm as good as traditional therapy
In the second study, Page’s team demonstrated that stroke survivors using a portable robotic-assisted arm to perform repetitive task training showed as much motor recovery as patients who performed similar tasks in a therapist-guided outpatient setting.
“Our results are exciting not just because we showed robotics-assisted therapy can offer equal benefit. We showed that both groups got better, even among patients who had suffered strokes as long as eight years ago,” noted Page.
For the study, which was published in the June 2013 issue of Clinical Rehabilitation, patients performed repetitive exercises that focused on everyday tasks while supervised by a therapist in an outpatient setting. Half of the group was randomly assigned to use the robotic arm, a portable device that is worn over the arm like a brace. When a person tries to move a weakened arm, the device senses the electrical impulses and helps the person carry out the movement. A second group performed the same tasks without the device for the same amount of time and in the same environment. The group training with the robotic arm performed tasks as well as their counterparts.
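
The assistive principle described here (sense the wearer’s own muscle activity, then add motor help) can be sketched very roughly. The function, threshold, and gain below are hypothetical illustrations, not the actual device’s control law.

```python
def assist_level(emg_amplitude, threshold=0.2, gain=3.0):
    """Motor assistance proportional to supra-threshold muscle effort.

    emg_amplitude, threshold, and gain are all invented quantities used
    only to illustrate the effort-triggered-assist idea.
    """
    # Below the threshold, the device treats the signal as noise and stays
    # passive; above it, assistance scales with the wearer's own attempt.
    effort = max(0.0, emg_amplitude - threshold)
    return gain * effort

print(assist_level(0.1))  # no detectable attempt: device stays passive
print(assist_level(0.5))  # weak attempt detected: device amplifies it
```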
“Therapy can be tiring, expensive, and resource-intensive. This study is important because it shows us that in patients with moderate arm impairment, similar benefits can be derived from using a robotic device to aid with arm therapy as with manually based rehabilitative approaches,” said Page. “Study participants who trained with the robotic arm also reported feeling stronger and more positive about the rehabilitation process.”
Most of the estimated 80 million stroke survivors worldwide will continue to have upper body weakness for months after a stroke, preventing them from accomplishing everyday tasks like lifting a laundry basket or drinking from a cup. Page says that more research in stroke outcomes and rehabilitation is needed, and that he hopes families and healthcare practitioners dealing with stroke will keep the door to recovery open wider and longer.
“Loss of upper extremity movement remains one of the most common and devastating stroke-induced impairments. And the fact is that more stroke survivors are expected yet studies and pathways to optimize rehabilitative therapy for these millions are not always emphasized. In particular, we know active rehabilitation programs help people regain function, but we still don’t know who will benefit the most from these types of therapy,” said Page. “Both of these studies give us insights about patients who will respond best – and most importantly, that we have to give these patients every chance possible to get better, because they can keep getting better.”

Filed under stroke stroke survivors rehabilitation robotic arm robotics neuroscience science

104 notes

One More Homo Species?

A recent 3D-comparative analysis confirms the status of Homo floresiensis as a fossil human species

Ever since the discovery of the remains in 2003, scientists have been debating whether Homo floresiensis represents a distinct Homo species, possibly originating from a dwarfed island Homo erectus population, or a pathological modern human. The small size of its brain has been argued to result from a number of diseases, most importantly from the condition known as microcephaly.

Based on the analysis of 3-D landmark data from skull surfaces, scientists from Stony Brook University New York, the Senckenberg Center for Human Evolution and Palaeoenvironment, Eberhard-Karls Universität Tübingen, and the University of Minnesota provide compelling support for the hypothesis that Homo floresiensis was a distinct Homo species.

The study, titled “Homo floresiensis contextualized: a geometric morphometric comparative analysis of fossil and pathological human samples,” is published in the July 10 edition of PLOS ONE.

The ancestry of the Homo floresiensis remains is much disputed.
The critical questions are: Did it represent an extinct hominin species? Could it be a Homo erectus population, whose small stature was caused by island dwarfism?

Or, did the LB1 skull belong to a modern human with a disorder that resulted in an abnormally small brain and skull? Proposed possible explanations include microcephaly, Laron Syndrome or endemic hypothyroidism (“cretinism”).

The scientists applied the powerful methods of 3-D geometric morphometrics to compare the shape of the LB1 cranium (the skull minus the lower jaw) to many fossil humans, as well as a large sample of modern human crania suffering from microcephaly and other pathological conditions. Geometric morphometrics methods use 3D coordinates of cranial surface anatomical landmarks, computer imaging, and statistics to achieve a detailed analysis of shape.
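
A minimal sketch of the superimposition step behind such methods, using invented landmark coordinates rather than the authors’ data: ordinary Procrustes alignment removes position, scale, and rotation so that only shape differences remain.

```python
import numpy as np

def procrustes_distance(X, Y):
    """Shape distance between two (landmarks x 3) coordinate arrays
    after removing position, scale, and rotation (ordinary Procrustes)."""
    X = X - X.mean(axis=0)             # remove position (center on centroid)
    Y = Y - Y.mean(axis=0)
    X = X / np.linalg.norm(X)          # remove scale (unit centroid size)
    Y = Y / np.linalg.norm(Y)
    U, _, Vt = np.linalg.svd(Y.T @ X)  # best orthogonal map of Y onto X
    R = U @ Vt
    return float(np.linalg.norm(X - Y @ R))  # residual shape difference

rng = np.random.default_rng(0)
skull_a = rng.normal(size=(20, 3))                        # 20 invented 3-D landmarks
skull_b = skull_a + rng.normal(scale=0.05, size=(20, 3))  # a similar "shape"
skull_c = rng.normal(size=(20, 3))                        # an unrelated "shape"
print(procrustes_distance(skull_a, skull_b) < procrustes_distance(skull_a, skull_c))
```

The published analysis layered statistical comparisons over many such aligned configurations; this sketch only shows why superimposed landmark coordinates can be compared as pure shape.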

This was the most comprehensive study to date to simultaneously evaluate the two competing hypotheses about the status of Homo floresiensis.

The study found that the LB1 cranium shows greater affinities to the fossil human sample than it does to pathological modern humans. Although some superficial similarities were found between fossil, LB1, and pathological modern human crania, additional features linked LB1 exclusively with fossil Homo. The team could therefore refute the hypothesis of pathology.

“Our findings provide the most comprehensive evidence to date linking the Homo floresiensis skull with extinct fossil human species rather than with pathological modern humans. Our study therefore refutes the hypothesis that this specimen represents a modern human with a pathological condition, such as microcephaly,” stated the scientists.

(Source: commcgi.cc.stonybrook.edu)

Filed under homo floresiensis hominin species geometric morphometrics microcephaly evolution science

154 notes

A fundamental problem for brain mapping
Recent findings force scientists to rethink the rules of neuroimaging 
Is there a brain area for mind-wandering? For religious experience? For reorienting attention? A recent study casts serious doubt on the evidence for these ideas, and rewrites the rules for neuroimaging.
Brain mapping experiments attempt to identify the cognitive functions associated with discrete cortical regions. They generally rely on a method known as “cognitive subtraction.” However, recent research reveals that a basic assumption underlying this approach—that brain activation is due to the additional processes triggered by the experimental task—is wrong. “It is such a basic assumption that few researchers have even thought to question it,” said Anthony Jack, assistant professor of cognitive science at Case Western Reserve University. “Yet study after study has produced evidence it is false.”
Brain mapping experiments all share a basic logic. In the simplest type of experiment, researchers compare brain activity while participants perform an experimental task and a control task. The experimental task might involve showing participants a noun, such as the word “cake,” and asking them to say aloud a verb that goes with that noun, for instance “eat.” The control task might involve asking participants to simply say the word they see aloud.
“The idea here is that the control task involves some of the same cognitive processes as the experimental task, in this case perceptual and articulatory processes,” Jack explained. “But there is at least one process that is different—the act of selecting a semantically appropriate word from a different lexical category.”
By subtracting activity recorded during the control task from the experimental task, researchers try to isolate distinct cognitive processes and map them onto specific brain areas.
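
The subtraction logic can be made concrete with made-up activation values; the task names follow the example in the text, but the numbers are invented.

```python
# Invented per-voxel mean activation for a small region of interest.
verb_generation = [1.8, 2.1, 1.5, 2.4]   # see "cake", say "eat"
word_reading    = [1.2, 1.3, 1.1, 1.6]   # see "cake", say "cake"

# Cognitive subtraction: voxel-wise, experimental minus control. Whatever
# activity survives is attributed to the one process the tasks don't share
# (here, selecting a semantically appropriate verb).
contrast = [e - c for e, c in zip(verb_generation, word_reading)]
print(contrast)
```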
Jack and former Case Western Reserve student Benjamin Kubit, now at the University of California Davis, challenge a key assumption of the subtraction method and several tenets of Ventral Attention Network theory, one of the longest established theories in cognitive neuroscience and which relies on cognitive subtraction. In a paper published today in Frontiers in Human Neuroscience, they highlight a new and additional problem that casts doubt on papers from well-established laboratories published in top journals.
Jack’s previous research shows that two opposing networks in the brain prevent people from being empathetic and analytic at the same time. If participants are engaged in a non-social task, they suppress activity in a network known as the default mode network, or DMN. The moment that task is over, activity in the DMN bounces back up again. On the other hand, if participants are engaged in a social task, they suppress brain activity in a second network, known as the task positive network, or TPN. The moment that task is over, activity in the TPN bounces back up again.
Work by another group even shows activity in a network bounces higher the more it has been suppressed, rather like releasing a compressed spring.
“It’s clear these increases in activity are not due to additional task-related processes,” Jack said. “Instead of cognitive subtraction, what we are seeing here is cognitive addition—parts of the brain do more the less the task demands.”
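
A toy calculation shows how release of suppression can masquerade as activation under subtraction; the numbers are invented.

```python
# The region's resting activity level, and its level under two tasks that
# both suppress it (the control more deeply than the experimental task).
baseline = 1.0
control_task      = baseline - 0.5   # deep suppression
experimental_task = baseline - 0.1   # mild suppression

# Cognitive subtraction yields a positive "activation" ...
contrast = experimental_task - control_task
print(contrast > 0)
# ... even though the region never rose above its resting level.
print(experimental_task < baseline)
```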
Kubit and Jack caution that researchers must consider whether an increase in activity in a suppressed region is due to task-related processing, or the release of suppression, if they want to accurately interpret their data. In the paper, they lay out data from other studies, meta-analysis and resting connectivity that all suggest activation of a particular brain area, the right temporoparietal junction (rTPJ), in attention reorienting tasks can be most simply explained by the release of suppression.
Based on that, “We haven’t shown that Ventral Attention Network theory is false,” Jack said, “but we have raised a big question mark over the theory and the evidence that has been taken to support it.”
The working hypothesis for more than a decade has been that the basic function of the rTPJ is attention reorienting. But, upon considering the possibility of cognitive addition as well as cognitive subtraction, the evidence supporting this view looks slim, the researchers assert. “The evidence is compelling that there are two distinct areas near rTPJ - regions which are not only involved in distinct functions but which also tend to suppress each other,” Jack said. “There is no easy way to square this with the Ventral Attention Network account of rTPJ.”
A number of broad challenges to brain imaging have been raised in the past by psychologists and philosophers, and in the recent book Brainwashed: The Seductive Appeal of Mindless Neuroscience, by Sally Satel and Scott Lilienfeld. One of the most popular objections has been to liken brain mapping to phrenology.
“There was some truth to that, particularly in the early days,” Jack said. Brain mapping can go astray when the psychological categories it assigns to regions don’t represent their basic functions.
For instance, the claim that there is a “God spot” in the brain doesn’t reflect a mature understanding of the science, he continued. Researchers recognize that individual brain regions have more general functions, and that specific cognitive processes, like religious experiences, are realized by interactions between distributed networks of regions.
“Just because a brain region is involved in a cognitive process, for example that the rTPJ is involved in out-of-body experiences, doesn’t mean that out-of-body experiences are the basic function of the rTPJ,” Jack explained. “You need to look at all the cognitive processes that engage a region to get a truer idea of its basic function.”
Kubit and Jack go beyond the existing critiques that apply to naïve brain mapping. The researchers point out that, even when an experimental task creates more activity in a brain region than a control task, it still isn’t safe to assume that the brain area is involved in the additional cognitive processes engaged by the experimental task. “Another possibility is that the control task was suppressing the region more than the experimental task,” Jack said.
For example, Malia Mason et al’s widely cited 2007 publication that appeared in the journal Science used the logic of cognitive subtraction to reach the conclusion that the function of a large area of cortex, known as the default mode network (DMN), is mind-wandering or spontaneous cognition.
“At this point, we can safely rule out that interpretation,” Jack said. “The DMN is activated above resting levels for social tasks that engage empathy. So, unless tasks that engage empathetic social cognition involve more mind-wandering than—well—being at rest and letting your mind wander, then that interpretation can’t possibly be right. The right way to interpret those findings is that tasks that engage analytic thinking positively suppress empathy. Unsurprisingly, when your mind wanders from those tasks, you get less suppression.”
The pair believes one reason researchers have felt safe with the assumptions underlying cognitive subtraction is that they have assumed the brain will not expend any more energy than is needed to perform the task at hand.
“Yet the brain clearly does expend more energy than is needed to guide ongoing behavior,” Jack said. “The influential neurologist Marcus Raichle has shown that task-related activity represents the tip of the iceberg, in terms of neural and metabolic activity. The brain is constantly active and restless, even when the person is entirely ‘at rest’ —that is, even when they aren’t given any task to do.”
Jack said their critique won’t hurt brain imaging as a discipline. “Quite the reverse, understanding the full implications of the suppressive relationship between brain networks will move the discipline forward.”
“One of the best known theories in psychology is dual-process theory,” he continued. “But the opposing-networks findings suggest a quite different picture from the account favored by psychologists.”
Dual process theory is outlined in the recent book Thinking Fast and Slow by the Nobel prize-winner Daniel Kahneman. Classic dual-process theory postulates a fight between deliberate reasoning and primitive automatic processes. But the fight that is most obvious in the brain is between two types of deliberate and evolutionarily advanced reasoning – one for empathetic, the other for analytic thought, the researchers say.
The two theories are compatible. “But, it looks like a number of phenomena will be better explained by the opposing networks research,” Jack said.
Jack warned that to conclude this critique of cognitive subtraction and Ventral Attention Network theory shows that brain imaging is fundamentally flawed would be like claiming that critiques of Darwin’s theory show evolution is false.
Brain mapping, Jack believes, was just the first phase of this science. “What we are talking about here is refining the science,” he said. “It should be no surprise that that journey involves some course corrections. The key point is that we are moving from brain mapping to identifying neural constraints on cognition that behavioral psychologists have missed.”
(Image: Saad Faruque, Flickr)

A fundamental problem for brain mapping

Recent findings force scientists to rethink the rules of neuroimaging

Is there a brain area for mind-wandering? For religious experience? For reorienting attention? A recent study casts serious doubt on the evidence for these ideas, and rewrites the rules for neuroimaging.

Brain mapping experiments attempt to identify the cognitive functions associated with discrete cortical regions. They generally rely on a method known as “cognitive subtraction.” However, recent research reveals that a basic assumption underlying this approach—that brain activation is due to the additional processes triggered by the experimental task—is wrong.

“It is such a basic assumption that few researchers have even thought to question it,” said Anthony Jack, assistant professor of cognitive science at Case Western Reserve University. “Yet study after study has produced evidence it is false.”

Brain mapping experiments all share a basic logic. In the simplest type of experiment, researchers compare brain activity while participants perform an experimental task and a control task. The experimental task might involve showing participants a noun, such as the word “cake,” and asking them to say aloud a verb that goes with that noun, for instance “eat.” The control task might involve asking participants to simply say the word they see aloud.

“The idea here is that the control task involves some of the same cognitive processes as the experimental task, in this case perceptual and articulatory processes,” Jack explained. “But there is at least one process that is different—the act of selecting a semantically appropriate word from a different lexical category.”

By subtracting activity recorded during the control task from the experimental task, researchers try to isolate distinct cognitive processes and map them onto specific brain areas.
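As a toy illustration of that subtraction logic (a sketch only; real fMRI analysis fits a general linear model with hemodynamic response modeling, and all signals here are simulated):

```python
import numpy as np

# Simulated BOLD signals: 5 voxels x 100 time points per condition
rng = np.random.default_rng(0)
experimental = rng.normal(loc=1.0, scale=0.1, size=(5, 100))
control = rng.normal(loc=1.0, scale=0.1, size=(5, 100))
experimental[2] += 0.5  # voxel 2 carries the extra process (e.g., word selection)

# Cognitive subtraction: mean activity in the experimental task
# minus mean activity in the control task, per voxel
contrast = experimental.mean(axis=1) - control.mean(axis=1)
active_voxels = np.where(contrast > 0.3)[0]
print(active_voxels)  # only voxel 2 survives the threshold
```

The subtraction isolates voxel 2 because its activity differs between conditions; the article's point, developed below, is that such a difference need not reflect added processing.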

Jack and former Case Western Reserve student Benjamin Kubit, now at the University of California Davis, challenge a key assumption of the subtraction method and several tenets of Ventral Attention Network theory, one of the longest-established theories in cognitive neuroscience and one that relies on cognitive subtraction. In a paper published today in Frontiers in Human Neuroscience, they highlight an additional problem that casts doubt on papers from well-established laboratories published in top journals.

Jack’s previous research shows that two opposing networks in the brain prevent people from being empathetic and analytic at the same time. If participants are engaged in a non-social task, they suppress activity in a network known as the default mode network, or DMN. The moment that task is over, activity in the DMN bounces back up again. On the other hand, if participants are engaged in a social task, they suppress brain activity in a second network, known as the task positive network, or TPN. The moment that task is over, activity in the TPN bounces back up again.

Work by another group even shows activity in a network bounces higher the more it has been suppressed, rather like releasing a compressed spring.

“It’s clear these increases in activity are not due to additional task-related processes,” Jack said. “Instead of cognitive subtraction, what we are seeing here is cognitive addition—parts of the brain do more the less the task demands.”
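A hypothetical numerical sketch makes the confound concrete: a region that performs no task-related processing in either condition still shows a positive subtraction contrast whenever the control task suppresses it more than the experimental task does. All numbers here are invented:

```python
BASELINE = 1.0  # resting activity of the region

def region_activity(suppression):
    """Activity of a region that does no task-related processing;
    it only drops below baseline by the amount of suppression."""
    return BASELINE - suppression

# The control task suppresses the region strongly;
# the experimental task barely suppresses it at all
control = region_activity(suppression=0.6)
experimental = region_activity(suppression=0.1)

# Cognitive subtraction misreads release of suppression as activation
contrast = experimental - control
print(contrast)  # positive, yet the region added no processing in either task
```

The positive contrast here is pure release of suppression, which is exactly the interpretation the subtraction method cannot distinguish from genuine task-related activity.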

Kubit and Jack caution that researchers must consider whether an increase in activity in a suppressed region is due to task-related processing, or the release of suppression, if they want to accurately interpret their data. In the paper, they lay out data from other studies, meta-analysis and resting connectivity that all suggest activation of a particular brain area, the right temporoparietal junction (rTPJ), in attention reorienting tasks can be most simply explained by the release of suppression.

Based on that, “We haven’t shown that Ventral Attention Network theory is false,” Jack said, “but we have raised a big question mark over the theory and the evidence that has been taken to support it.”

The working hypothesis for more than a decade has been that the basic function of the rTPJ is attention reorienting. But, upon considering the possibility of cognitive addition as well as cognitive subtraction, the evidence supporting this view looks slim, the researchers assert. “The evidence is compelling that there are two distinct areas near the rTPJ: regions which are not only involved in distinct functions but which also tend to suppress each other,” Jack said. “There is no easy way to square this with the Ventral Attention Network account of rTPJ.”

A number of broad challenges to brain imaging have been raised in the past by psychologists and philosophers, and in the recent book Brainwashed: The Seductive Appeal of Mindless Neuroscience, by Sally Satel and Scott Lilienfeld. One of the most popular objections has been to liken brain mapping to phrenology.

“There was some truth to that, particularly in the early days,” Jack said. Brain mapping can run afoul when the psychological categories it assigns to regions don’t represent basic functions.

For instance, the claim that there is a “God spot” in the brain doesn’t reflect a mature understanding of the science, he continued. Researchers recognize that individual brain regions have more general functions, and that specific cognitive processes, like religious experiences, are realized by interactions between distributed networks of regions.

“Just because a brain region is involved in a cognitive process, for example that the rTPJ is involved in out-of-body experiences, doesn’t mean that out-of-body experiences are the basic function of the rTPJ,” Jack explained. “You need to look at all the cognitive processes that engage a region to get a truer idea of its basic function.”

Kubit and Jack go beyond the existing critiques that apply to naïve brain mapping. The researchers point out that, even when an experimental task creates more activity in a brain region than a control task, it still isn’t safe to assume that the brain area is involved in the additional cognitive processes engaged by the experimental task. “Another possibility is that the control task was suppressing the region more than the experimental task,” Jack said.

For example, a widely cited 2007 paper in the journal Science by Malia Mason and colleagues used the logic of cognitive subtraction to reach the conclusion that the function of a large area of cortex, known as the default mode network (DMN), is mind-wandering or spontaneous cognition.

“At this point, we can safely rule out that interpretation,” Jack said. “The DMN is activated above resting levels for social tasks that engage empathy. So, unless tasks that engage empathetic social cognition involve more mind-wandering than—well—being at rest and letting your mind wander, then that interpretation can’t possibly be right. The right way to interpret those findings is that tasks that engage analytic thinking positively suppress empathy. Unsurprisingly, when your mind wanders from those tasks, you get less suppression.”

The pair believes one reason researchers have felt safe with the assumptions underlying cognitive subtraction is that they have assumed the brain will not expend any more energy than is needed to perform the task at hand.

“Yet the brain clearly does expend more energy than is needed to guide ongoing behavior,” Jack said. “The influential neurologist Marcus Raichle has shown that task-related activity represents the tip of the iceberg, in terms of neural and metabolic activity. The brain is constantly active and restless, even when the person is entirely ‘at rest’ — that is, even when they aren’t given any task to do.”

Jack said their critique won’t hurt brain imaging as a discipline. “Quite the reverse, understanding the full implications of the suppressive relationship between brain networks will move the discipline forward.”

“One of the best known theories in psychology is dual-process theory,” he continued. “But the opposing-networks findings suggest a quite different picture from the account favored by psychologists.”

Dual-process theory is outlined in the recent book Thinking, Fast and Slow by the Nobel laureate Daniel Kahneman. Classic dual-process theory postulates a fight between deliberate reasoning and primitive automatic processes. But the fight that is most obvious in the brain is between two types of deliberate and evolutionarily advanced reasoning: one for empathetic thought, the other for analytic thought, the researchers say.

The two theories are compatible. “But, it looks like a number of phenomena will be better explained by the opposing networks research,” Jack said.

Jack warned that to conclude this critique of cognitive subtraction and Ventral Attention Network theory shows that brain imaging is fundamentally flawed would be like claiming that critiques of Darwin’s theory show evolution is false.

Brain mapping, Jack believes, was just the first phase of this science. “What we are talking about here is refining the science,” he said. “It should be no surprise that that journey involves some course corrections. The key point is that we are moving from brain mapping to identifying neural constraints on cognition that behavioral psychologists have missed.”

(Image: Saad Faruque, Flickr)

Filed under brain mapping neuroimaging brain activity cognitive subtraction neuroscience science

103 notes

Researchers create the inner ear from stem cells, opening potential for new treatments

Indiana University scientists have transformed mouse embryonic stem cells into key structures of the inner ear. The discovery provides new insights into the sensory organ’s developmental process and sets the stage for laboratory models of disease, drug discovery and potential treatments for hearing loss and balance disorders.

A research team led by Eri Hashino, Ph.D., Ruth C. Holton Professor of Otolaryngology at Indiana University School of Medicine, reported that by using a three-dimensional cell culture method, they were able to coax stem cells to develop into inner-ear sensory epithelia — containing hair cells, supporting cells and neurons — that detect sound, head movements and gravity. The research was reported online Wednesday in the journal Nature.

Previous attempts to “grow” inner-ear hair cells in standard cell culture systems have worked poorly in part because necessary cues to develop hair bundles — a hallmark of sensory hair cells and a structure critically important for detecting auditory or vestibular signals — are lacking in the flat cell-culture dish. But, Dr. Hashino said, the team determined that the cells needed to be suspended as aggregates in a specialized culture medium, which provided an environment more like that found in the body during early development.

The team mimicked the early development process with a precisely timed use of several small molecules that prompted the stem cells to differentiate, from one stage to the next, into precursors of the inner ear. But the three-dimensional suspension also provided important mechanical cues, such as the tension from the pull of cells on each other, said Karl R. Koehler, B.A., the paper’s first author and a graduate student in the medical neuroscience graduate program at the IU School of Medicine.

"The three-dimensional culture allows the cells to self-organize into complex tissues using mechanical cues that are found during embryonic development," Koehler said.

"We were surprised to see that once stem cells are guided to become inner-ear precursors and placed in 3-D culture, these cells behave as if they knew not only how to become different cell types in the inner ear, but also how to self-organize into a pattern remarkably similar to the native inner ear," Dr. Hashino said. "Our initial goal was to make inner-ear precursors in culture, but when we did testing we found thousands of hair cells in a culture dish."

Electrophysiology testing further proved that those hair cells generated from stem cells were functional, and were the type that sense gravity and motion. Moreover, neurons like those that normally link the inner-ear cells to the brain had also developed in the cell culture and were connected to the hair cells.

Additional research is needed to determine how inner-ear cells involved in auditory sensing might be developed, as well as how these processes can be applied to develop human inner-ear cells, the researchers said.

However, the work opens a door to better understanding of the inner-ear development process as well as creation of models for new drug development or cellular therapy to treat inner-ear disorders, they said.

Filed under stem cells inner ear hair cells embryonic development hearing loss neuroscience science

45 notes

Double-barreled attack on obesity in no way a no-brainer

In the constant cross talk between our brain and our gut, two gut hormones are already known to tell the brain when we have had enough to eat. New research suggests that boosting levels of these hormones simultaneously may be an effective new weapon in the fight against obesity.

Dr Shu Lin, Dr Yan-Chuan Shi and Professor Herbert Herzog, from Sydney’s Garvan Institute of Medical Research, have shown that when mice are injected with PYY3-36 and PP, they eat less, gain less fat, and tend not to develop insulin-resistance, a precursor to diabetes. At the same time, the researchers have shown that the hormones stimulate different nerve pathways, ultimately, however, affecting complementary brain regions. Their findings are now published online in the journal Obesity.

While the double-barreled approach may seem like a no-brainer, the strong combined effect observed was by no means inevitable. In the complex world of neuroscience, two plus two does not always make four.

Drug companies are in the process of conducting pre-clinical trials to examine the separate effects of boosting the hormones PYY3-36 and PP. Until now, however, there has been no research indicating the detailed molecular interactions that might occur when they are boosted in tandem.

When used together, the hormones independently, yet with combined force, reduce the amount of neuropeptide Y (NPY) produced by the brain, a powerful neurotransmitter that affects a variety of things including appetite, mood, heart rate, temperature and energy levels.

Each hormone also communicates with a different part of the arcuate nucleus in the hypothalamus, a region of the brain where signals can cross the normally impermeable blood-brain barrier. The stimulated regions then produce other neuronal signals deep within the hypothalamus, bringing about a powerful combined effect.

“There are many factors that influence appetite control – and we now realise that there won’t be a single molecular target, or a single drug, that will be effective,” said Dr Yan-Chuan Shi.

“It will be important for drug companies to try different combinations of targets, to see which combinations are most potent, and at the same time have no side effects, or at least minimal side effects.”

“At the moment, the only effective tool against obesity is surgery. Drug companies have so far failed to produce an effective drug without unacceptable side effects, such as mood disorders, nausea or cardiovascular problems.”

(Source: garvan.org.au)

Filed under obesity hormones neuropeptide Y hypothalamic nuclei hypothalamus neuroscience science

58 notes

Newly Identified Bone Marrow Stem Cells Reveal Markers for ALS

Amyotrophic Lateral Sclerosis (ALS) is a devastating motor neuron disease that rapidly atrophies the muscles, leading to complete paralysis. Despite its high profile — established when it afflicted the New York Yankees’ Lou Gehrig — ALS remains a disease that scientists are unable to predict, prevent, or cure.

Although several genetic ALS mutations have been identified, they only apply to a small number of cases. The ongoing challenge is to identify the mechanisms behind the non-genetic form of the disease and draw useful comparisons with the genetic forms.

Now, using samples of stem cells derived from the bone marrow of non-genetic ALS patients, Prof. Miguel Weil of Tel Aviv University’s Laboratory for Neurodegenerative Diseases and Personalized Medicine in the Department of Cell Research and Immunology and his team of researchers have uncovered four different biomarkers that characterize the non-genetic form of the disease. Each sample shows similar biological abnormalities in four specific genes, and further research could reveal additional commonalities. “Because these genes and their functions are already known, they give us a specific direction for research into non-genetic ALS diagnostics and therapeutics,” Prof. Weil says. His initial findings were reported in the journal Disease Markers.

Giving in to stress

To hunt for these biomarkers, Prof. Weil and his colleagues turned to samples of bone marrow collected from ALS patients. Though more difficult to collect than blood, bone marrow’s stem cells are easy to isolate and grow in a consistent manner. In the lab, he used these cells as cellular models for the disease. He ultimately discovered that cells from different ALS patients shared the same abnormal characteristics in four different genes, which may act as biomarkers of the disease. And because the characteristics appear in tissues that are related to ALS — including in muscle, brain, and spinal cord tissues in mouse models of genetic ALS — they may well be connected to the degenerative process of the disease in humans, he believes.

Searching for the biological significance of these abnormalities, Prof. Weil put the cells under stress, applying toxins to induce the cells’ defense mechanisms. Healthy cells will try to fight off threats and often prove quite resilient, but ALS cells were found to be overwhelmingly sensitive to stress, with the vast majority choosing to die rather than fight. Because this is such an ingrained response, it can be used as a feature for drug screening for the disease, he adds.
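As a hedged sketch of how such a stress response could serve as a screening feature (the compound names and survival fractions below are invented, not from the study), a viability-based screen flags compounds that move stressed ALS-cell survival back toward healthy-cell levels:

```python
# Assumed fractions of cells surviving the applied toxin
HEALTHY_SURVIVAL = 0.80   # resilient healthy cells
ALS_SURVIVAL = 0.20       # ALS cells are far more stress-sensitive

# Hypothetical survival of ALS cells treated with each candidate compound
compound_survival = {"cmpd_A": 0.22, "cmpd_B": 0.65, "cmpd_C": 0.19}

def is_hit(survival, threshold=0.5):
    """A compound is a hit if it moves survival at least halfway
    from the ALS baseline toward the healthy baseline."""
    rescue = (survival - ALS_SURVIVAL) / (HEALTHY_SURVIVAL - ALS_SURVIVAL)
    return rescue >= threshold

hits = [name for name, s in compound_survival.items() if is_hit(s)]
print(hits)  # ['cmpd_B']
```

High-throughput facilities like the one described below automate exactly this kind of per-compound readout and thresholding across thousands of wells.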

The hunt for therapeutics

Whether these biomarkers are a cause or consequence of ALS is still unknown. However, this finding remains an important step towards uncovering the mechanisms of the disease. Because these genes have already been identified, it gives scientists a clear direction for future research. In addition, these biomarkers could lead to earlier and more accurate diagnostics.

Next, Prof. Weil plans to use his lab’s high-throughput screening facility — which can test thousands of compounds’ effects on diseased cells every day — to search for drug candidates with the potential to affect the abnormal expression of these genes or the stress response of ALS cells. A compound that has an impact on these indicators of ALS could be meaningful for treating the disease, he says.

Prof. Weil is the director of the new Cell Screening Facility for Personalized Medicine at TAU. The facility is dedicated to finding potential drugs for rare and Jewish hereditary diseases.

(Source: aftau.org)

Filed under ALS motor neuron disease neurodegenerative diseases genetics medicine biomarkers science

162 notes

Irregular bed times curb young kids’ brain power

Given the influence of early childhood development on subsequent health, there may be knock-on effects across the life course, the authors suggest.

The authors looked at whether bedtimes in early childhood were related to brain power in more than 11,000 seven year olds, all of whom were part of the UK Millennium Cohort Study (MCS).

MCS is a nationally representative long term study of UK children born between September 2000 and January 2002, and the research drew on regular surveys and home visits made when the children were 3, 5, and 7, to find out about family routines, including bedtimes.

The authors wanted to know whether the time a child went to bed, and the consistency of bed-times, had any impact on intellectual performance, measured by validated test scores for reading, maths, and spatial awareness.

And they wanted to know if the effects were cumulative and/or whether any particular periods during early childhood were more critical than others.

Irregular bedtimes were most common at the age of 3, when around one in five children went to bed at varying times. By the age of 7, more than half the children went to bed regularly between 7.30 and 8.30 pm.

Children whose bedtimes were irregular or who went to bed after 9 pm came from more socially disadvantaged backgrounds, the findings showed.

When they were 7, girls who had irregular bedtimes had lower scores on all three aspects of intellect assessed than children with regular bedtimes, after taking account of other potentially influential factors. But this was not the case in 7-year-old boys.

Irregular bedtimes by the age of 5 were not associated with poorer brain power in girls or boys at the age of 7. But irregular bedtimes at 3 years of age were associated with lower scores in reading, maths, and spatial awareness in both boys and girls, suggesting that around the age of 3 could be a sensitive period for cognitive development.

The impact of irregular bedtimes seemed to be cumulative.

Girls who had never had regular bedtimes at ages 3, 5, and 7 had significantly lower reading, maths and spatial awareness scores than girls who had had consistent bedtimes. The impact was the same in boys, but for any two of the three time points.
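“Taking account of other potentially influential factors” refers to covariate adjustment. A minimal sketch with simulated data, assuming ordinary least squares (the study’s actual models, outcomes, and covariates differ):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Simulated data: a confounder (e.g., social disadvantage) drives both
# irregular bedtimes and lower test scores, so it must be adjusted for
disadvantage = rng.normal(size=n)
irregular = (disadvantage + rng.normal(size=n) > 0.5).astype(float)
score = 100 - 3.0 * irregular - 5.0 * disadvantage + rng.normal(scale=5, size=n)

# Adjusted estimate: regress score on bedtime irregularity plus the covariate
X = np.column_stack([np.ones(n), irregular, disadvantage])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
print(coef[1])  # close to the true adjusted effect of -3
```

Without the disadvantage column, the bedtime coefficient would absorb the confounder’s effect and overstate the association, which is why the study reports adjusted estimates.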

The authors point out that irregular bedtimes could disrupt natural body rhythms and cause sleep deprivation, so undermining the plasticity of the brain and the ability to acquire and retain information.

"Sleep is the price we pay for plasticity on the prior day and the investment needed to allow learning fresh the next day," they write. And they add: "Early child development has profound influences on health and wellbeing across the life course. Therefore, reduced or disrupted sleep, especially if it occurs at key times in development, could have important impacts on health throughout life."

Filed under child development cognitive development irregular bedtimes performance childhood neuroscience science

79 notes

Study identifies brain circuits involved in learning and decision making

Finding has implications for alcoholism and other patterns of addictive behavior

Research from the National Institutes of Health has identified neural circuits in mice that are involved in the ability to learn and alter behaviors. The findings help to explain the brain processes that govern choice and the ability to adapt behavior based on the end results.

Researchers think this might provide insight into patterns of compulsive behavior such as alcoholism and other addictions.

“Much remains to be understood about exactly how the brain strikes the balance between learning a behavioral response that is consistently rewarded, versus retaining the flexibility to switch to a new, better response,” said Kenneth R. Warren, Ph.D., acting director of the National Institute on Alcohol Abuse and Alcoholism. “These findings give new insight into the process and how it can go awry.”

The study, published online in Nature Neuroscience, indicates that specific circuits in the forebrain play a critical role in choice and adaptive learning.

Like other addictions, alcoholism is a disease in which voluntary control of behavior progressively diminishes and unwanted actions eventually become compulsive. It is thought that the normal brain processes involved in completing everyday activities become redirected toward finding and abusing alcohol.

The research, conducted by investigators from NIAAA, with support from the National Institute of Mental Health and the University of Cambridge, England, used a variety of approaches to study choice.

Researchers used a simple choice task in which mice viewed images on a computer touchscreen and learned to touch a specific image with their nose to get a food reward. Using various techniques to visualize and record neural activity, researchers found that as the mice learned to consistently make a choice, the brain’s dorsal striatum was activated. The dorsal striatum is thought to play an important role in motivation, decision-making, and reward.

Conversely, when the mice later had to shift to a new choice to receive a reward, the dorsal striatum quieted while regions in the prefrontal cortex, an area involved in decision-making and complex cognitive processes, became active.

Building upon these findings, the authors next deleted or pharmacologically blocked a component of nerve cells that normally binds the neurochemical glutamate (specifically, the GluN2B subunit of the NMDA receptor) in two different areas of the brain, the striatum and the frontal cortex. Previous studies have shown that GluN2B plays a role in memory, spatial reference, and attention. The researchers found that inactivating GluN2B in the dorsal striatum markedly slowed learning, while shutting down GluN2B in the prefrontal cortex made the mice less able to relearn the touchscreen reward task after the reward image was changed.

“These data add to what we understand about the neural control of behavioral flexibility and striatal learning by identifying GluN2B as a critical molecular substrate to both processes,” said the study’s senior author, Andrew Holmes, Ph.D., Laboratory Chief and Principal Investigator of the NIAAA Laboratory of Behavioral and Genomic Neuroscience.

“This is particularly intriguing for future studies because NMDA receptors are a major target for alcohol and contribute to important features of alcoholism, such as withdrawal. These new findings suggest that GluN2B in corticostriatal circuits may also play a key role in driving the transition from controlled drinking to compulsive abuse that characterizes alcoholism.”

(Source: niaaa.nih.gov)

Filed under addiction alcoholism prefrontal cortex NMDA receptors neural circuits learning neuroscience science
