Neuroscience

Articles and news from the latest research reports.

Posts tagged neuroimaging


Blasts May Cause Brain Injury Even Without Symptoms

Veterans exposed to explosions who do not report symptoms of traumatic brain injury (TBI) may still have damage to the brain’s white matter comparable to veterans with TBI, according to researchers at Duke Medicine and the U.S. Department of Veterans Affairs.

The findings, published in the Journal of Head Trauma Rehabilitation on March 3, 2014, suggest that a lack of clear TBI symptoms following an explosion may not accurately reflect the extent of brain injury.

Veterans of recent military conflicts in Iraq and Afghanistan often have a history of exposure to explosive forces from bombs, grenades and other devices, although relatively little is known about whether this injures the brain. However, evidence is building – particularly among professional athletes – that subconcussive events have an effect on the brain.

"Similar to sports injuries, people near an explosion assume that if they don’t have clear symptoms – losing consciousness, blurred vision, headaches – they haven’t had injury to the brain,” said senior author Rajendra A. Morey, M.D., associate professor of psychiatry and behavioral sciences at Duke University School of Medicine and a psychiatrist at the Durham Veterans Affairs Medical Center. “Our findings are important because they’re showing that even if you don’t have symptoms, there may still be damage.”

Researchers in the Mid-Atlantic Mental Illness Research, Education and Clinical Center at the W.G. (Bill) Hefner Veterans Affairs Medical Center in Salisbury, N.C., evaluated 45 U.S. veterans who volunteered to participate in the study. The veterans, who served since September 2001, were split into three groups: veterans with a history of blast exposure with symptoms of TBI; veterans with a history of blast exposure without symptoms of TBI; and veterans without blast exposure. The study focused on veterans with primary blast exposure, or blast exposure without external injuries, and did not include those with brain injury from direct hits to the head.

To measure injury to the brain, the researchers used a type of MRI called Diffusion Tensor Imaging (DTI). DTI can detect injury to the brain’s white matter by measuring the direction of fluid movement in the brain. In healthy white matter, fluid diffuses in a directional manner along the fibers, suggesting that the white matter fibers are intact. Injured fibers allow fluid to diffuse more freely in all directions.
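The directionality DTI measures is commonly summarized as fractional anisotropy (FA), computed from the three eigenvalues of the diffusion tensor at each voxel. The sketch below illustrates the standard FA formula with made-up eigenvalues; it is not the study's analysis pipeline.

```python
# Illustrative sketch: fractional anisotropy (FA) from the three
# eigenvalues of a diffusion tensor. FA near 1 means strongly
# directional diffusion (intact fibers); FA near 0 means isotropic
# diffusion, as might be seen in injured white matter.
import math

def fractional_anisotropy(l1, l2, l3):
    mean = (l1 + l2 + l3) / 3.0
    num = (l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    if den == 0:
        return 0.0
    return math.sqrt(1.5 * num / den)

# Strongly directional diffusion vs. fully isotropic diffusion
print(round(fractional_anisotropy(1.7, 0.3, 0.3), 2))  # high FA
print(round(fractional_anisotropy(1.0, 1.0, 1.0), 2))  # 0.0
```

The eigenvalues here are arbitrary example numbers; real analyses estimate them from diffusion-weighted images at every voxel and then compare FA across groups.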

White matter is the connective wiring that links different areas of the brain. Since most cognitive processes involve multiple parts of the brain working together, injury to white matter can impair the brain’s communication network and may result in cognitive problems.

Both groups of veterans who were near an explosion, regardless of whether they had TBI symptoms, showed a significant amount of injury compared to the veterans not exposed to a blast. The injury was not isolated to one area of the brain, and each individual had a different pattern of injury.

Using neuropsychological testing to assess cognitive performance, the researchers found a relationship between the amount of white matter injury and changes in reaction time and the ability to switch between mental tasks. However, brain injury was not linked to performance on other cognitive tests, including decision-making and organization.

“We expected the group that reported few symptoms at the time of primary blast exposure to be similar to the group without exposure. It was a surprise to find relatively similar DTI changes in both groups exposed to primary blast,” said Katherine H. Taber, Ph.D., a research health scientist at the W.G. (Bill) Hefner Veterans Affairs Medical Center and the study’s lead author. “We are not sure whether this indicates differences among individuals in symptoms-reporting or subconcussive effects of primary blast. It is clear there is more we need to know about the functional consequences of blast exposures.”

Given the study’s findings, the researchers said clinicians treating veterans should take into consideration a person’s exposure to explosive forces, even among those who did not initially show symptoms of TBI. In the future, they may use brain imaging to support clinical tests.

“Imaging could potentially augment the existing approaches that clinicians use to evaluate brain injury by looking below the surface for TBI pathology,” Morey said.

The researchers noted that the results are preliminary, and should be replicated in a larger study.

(Source: dukehealth.org)

Filed under brain injury TBI diffusion tensor imaging white matter neuroimaging neuroscience science


Brain development provides insights into adolescent depression

A new study led by the University of Melbourne and Orygen Youth Health Research Centre is the first to discover that the brain develops differently in adolescents who experience depression. These brain changes also represent possible risk factors for developing depression during the teenage years.

Lead researcher Professor Nick Allen from the Melbourne School of Psychological Sciences said, “It is well known that the brain continues to change and remodel itself during adolescence as part of healthy development.”

“In this study, we found that the pattern of development (such as changes in brain structure between ages twelve and sixteen) in several key brain regions differed between depressed and non-depressed adolescents,” Professor Allen said.

The brain regions involved include areas associated with the experience and regulation of emotion, as well as areas associated with learning and memory.

“The findings are an important breakthrough for exploring possible causes of depression in adolescence. They also suggest that both prevention and treatment for depression (even for early signs and symptoms of depression) in adolescence is essential, especially targeting those in the early years of adolescence, aged twelve to sixteen,” he said.

“We also observed some differences between males and females. For males, less growth in an area of the brain involved in processing threat and other unexpected events, a critical part of the brain’s fear circuitry, was associated with depression. On the other hand, for females, greater growth of this area was found to be associated with depression.”

“This is important information because depression becomes much more common amongst girls during adolescence, and these findings tell us about some of the neurobiological factors that might play a role in this gender difference,” he said.

Professor Allen says adolescence is a period of the lifespan when the risk of developing depression increases dramatically.

The study examined eighty-six adolescents (41 female) with no history of depressive disorder before age twelve, using a magnetic resonance imaging (MRI) scanner to measure the volume of particular brain regions of interest.

Participants underwent an MRI scan first at age twelve and again at age sixteen, when rates of depression were beginning to increase.

Researchers also conducted detailed interviews with each participant at four time points between ages twelve and eighteen. Thirty participants experienced a first episode of a depressive disorder during the follow-up period.

These findings have recently been published in the American Journal of Psychiatry.


Filed under brain development depression adolescents neuroimaging psychology neuroscience science


Study first to offer detailed map of mouse’s cerebral cortex

The mammalian cerebral cortex, long thought to be a dense single interrelated tangle of neural networks, actually has a “logical” underlying organizational principle, according to a study appearing in the journal Cell.

Researchers have identified eight distinct neural subnetworks that together form the connectivity infrastructure of the mammalian cortex — the part of the brain involved in higher-order functions such as cognition, emotion and consciousness.

“This study is the first comprehensive mapping of the most developed region of the mammalian brain: the cerebral cortex. The cortex is highly complex and made up of many densely interconnected structures, but when you strip it down, it is organized into a small number of subnetworks,” said senior author Hongwei Dong of the USC Institute for Neuroimaging and Informatics (INI).

The cerebral cortex is the outermost layer of neural tissue in the brain and is one of the most extensively studied brain structures in the field of neuroscience. However, before this study, its underlying organizational principle was still largely unclear.

“Think about it: The brain is built for logic, so its organization must be logical. The brain’s architectural organization is arranged such that all of its substructures most efficiently work in conjunction to produce appropriate behaviors,” said Dong, associate professor of neurology at the Keck School of Medicine of USC. “We want to find the code to how the brain is structurally organized.”

The study is also a reminder that while there is more data than ever, the quality and reliability of information still matter. In contrast to past patchwork attempts, Dong and his team undertook an effort to directly develop a whole-brain mouse atlas of brain pathways. Across the cortex, they injected fluorescent molecules. These molecules were then transported along the brain’s “cellular highways” — the neuronal pathways — and meticulously tracked using a high-resolution microscope.

The uniformity and completeness of the scientists’ effort across the entire cortex provided a searchable image database of cortical connections, which the researchers are making open-access and publicly available.

It also allowed them to reliably see patterns: the seemingly inscrutable mass of connections in the cerebral cortex is highly organized, consisting of eight distinct subnetworks that are relatively segregated.

“The systematic and comprehensive manner in which the data were collected lent itself to a detailed analysis through which these subnetworks emerged,” explained co-lead author Houri Hintiryan of the USC Laboratory of Neuro Imaging.

So that scientists around the world may continue to look for fundamental structural insights, the full, interactive imaging dataset is viewable at the Mouse Connectome Project, providing a resource for researchers interested in studying the anatomy and function of cortical networks throughout the brain.

“It really is quite tedious,” Dong said of collecting the data, “and labor-intensive, and it requires highly specialized skills and technology. But think of the Human Genome Project and how much it accelerated the process of discovery and the whole field when infrastructures existed for people to share and compare. That was our motivation.”

How these subnetworks interact will provide a crucial baseline from which to better understand diseases of “disconnection” such as autism and Alzheimer’s disease, in which the manifestations of symptoms are potentially a result of disordered or damaged connections.

The researchers’ map of the mouse cerebral cortex can be compared to data on disease-affected brains, brains in development and genetic information. It will also offer necessary context for humans, who behaved just like other mammals only a few thousand years ago and who still share most underlying basic behavioral characteristics such as hunger and pain.

“The fundamental logic of mammalian brains is the same, particularly when it comes to basic behaviors such as eating, sleeping and social behaviors,” said Dong, who noted that similar studies in humans have thus far not gotten to the cellular level. “There are lots of organizing principles to brain structures that we are just beginning to understand.”

The researchers identified the brain subnetworks based on their high degree of interconnectivity — though relatively independent, several structures provide communication routes through which the subnetworks interact. Combined with behavioral data from past research and information about subcortical targets, these interconnections imply remarkable functional significance for the subnetworks.

Four of the eight identified subnetworks in the mouse cortex relate to sensation and movement of the body — what the researchers dub somatic sensorimotor. In particular, the researchers identified separate subnetworks for movements of the face, upper limbs, lower limbs and trunk, and whiskers. Together, these networks facilitate motor behaviors such as eating and drinking, reaching and grabbing, locomotion and exploration of the environment.

Two other subnetworks are composed of structures located along the midline of the cerebral cortex. These medial subnetworks seem devoted to the integration of visual, auditory and somatic sensory information, according to the study. Several other structures located along the side of the brain form two lateral subnetworks, one of which potentially serves to regulate the internal status of the body (i.e., taste, hunger, visceral information) and the other as a “mega-integration” subnetwork that allows the interaction of information from nearly the entire cortex.


Filed under cerebral cortex brain mapping neural networks neuroimaging neurons neuroscience science


Study uncovers surprising differences in brain activity of alcohol-dependent women

A new Indiana University study comparing the brain activity of alcohol-dependent women with that of women who were not addicted found stark and surprising differences, raising intriguing questions about how the brain networks of addicted women function as they make risky decisions about when and what to drink.


The study used functional magnetic resonance imaging, or fMRI, to study differences between patterns of brain network activation in the two groups of women. The findings indicate that the anterior insular region of the brain may be implicated in the process, suggesting a possible new target of treatment for alcohol-dependent women.

"We see that the network dynamics of alcohol-dependent women may be really different from that of healthy controls in a drinking-related task," said Lindsay Arcurio, a graduate student in the Department of Psychological and Brain Sciences. "We have evidence to suggest alcohol-dependent women have trouble switching between networks of the brain."

The research is part of a larger new effort to understand the differences between men and women with respect to alcohol. Arcurio said most of the research on alcohol dependence has been conducted with men or groups of men and women. Yet several factors make looking at women “really important.”

One such factor is that the physiological effects of drinking alcohol, including liver damage, heart disease and breast cancer, set in much earlier in women than in men. For this reason, the suggested limit on the number of drinks per week that women can safely consume is eight, whereas for men it is 14. Second, binge drinking among women is on the rise: one in five adolescent girls binge-drinks three times a month, and among women between the ages of 18 and 54, the number is one in eight.

A ‘sledgehammer’ approach to defining differences in brain network activation

Research on decision-making mechanisms in alcohol-dependent individuals typically involves a general risk-taking situation in which money or points are at stake. In this study, participants were placed in the fMRI scanner and asked to consider low-risk and high-risk situations specifically related to alcohol — what the researchers describe as “ecological” tasks. Participants were then asked to make decisions about control stimuli — food, as well as a presumably neutral stimulus, a stapler — to observe whether risky behavior was greater with respect to drinking than with these other items. The same picture cues were used to present high-risk and low-risk scenarios, and the two extremes were as follows:

For the low-risk situation, participants were told: Imagine you are at a bar. You are offered a drink, already paid for, with two shots of alcohol, and you have a safe ride home. For the high-risk, they were told: You are at a bar and are offered a drink already paid for, with six shots of alcohol, but you do not have a safe ride home.

The reason for such an extreme contrast between the two situations, Arcurio said, is that “as one of the first ecological tasks used in the scanner, we wanted to take a sledgehammer approach to really find the differences between cases that are definitely high-risk and those that are definitely low-risk.”

The findings, however, reflect an equally sharp contrast in differences between the brain network activation in alcohol-dependent women versus the controls.

For the control group, high-risk decisions to drink led to deactivation of regions associated with “approach behavior,” that is, with deciding to take the drink in a risky situation. At the same time, women in the control group activated regions associated with the default mode network, a network traditionally thought to reflect a resting, inactive or relaxed mental state, but which some now speculate plays a role in conceptualizing one’s future.

"It gets really interesting," Arcurio said, "comparing this pattern of activation to those in alcohol-dependent women, who behaviorally say they’re more likely to take the high-risk drink compared to the controls. They don’t deactivate anything. In contrast to the controls, alcohol-dependent women activate all three regions in question. They activate regions associated with reward (which release dopamine). They also activate frontal control regions involved in cognitive control and regions associated with the default mode network, involved in resting-state behavior. They are activating everything."

The investigators infer from these findings that alcohol-dependent women have trouble switching between networks. Being unable to activate one region and deactivate another in response to an alcohol-related situation means they are unable to use one strategy over another.

Furthermore, Arcurio said, “a lot of evidence suggests that switching between networks is influenced by the anterior insular and anterior cingulate regions of the brain, and we did find major differences in the insula between the alcohol-dependent women and controls. We’re thinking the issue is pinpointed to that region.”

The researchers are now running analyses to test the hypothesis that the insula helps in this process, which could offer new possibilities for intervention, with both behavioral therapy and medication.

The research is part of a broader research program, both planned and underway, to further explore questions about risky decision-making in alcohol-dependent women: studies of adolescent drinking, of risky sexual behavior in alcohol-dependent women, of the interaction of visual networks with decision-making networks, and of the way music (or auditory networks) interacts with decision-making mechanisms in alcohol-dependent women. In the latter experiment, college-age participants choose one song they associate with drinking and one they associate with quiet reflection.

"There’s a lot of Miley Cyrus in the first category," Arcurio said.

(Source: news.indiana.edu)

Filed under alcohol dependence addiction brain activity neuroimaging dopamine decision making neuroscience science


Why does the brain remember dreams?

Some people recall a dream every morning, whereas others rarely recall one. A team led by Perrine Ruby, an Inserm Research Fellow at the Lyon Neuroscience Research Center (Inserm/CNRS/Université Claude Bernard Lyon 1), has studied the brain activity of these two types of dreamers in order to understand the differences between them. In a study published in the journal Neuropsychopharmacology, the researchers show that the temporo-parietal junction, an information-processing hub in the brain, is more active in high dream recallers. Increased activity in this brain region might facilitate attention orienting toward external stimuli and promote intrasleep wakefulness, thereby facilitating the encoding of dreams in memory.

The reason for dreaming remains a mystery, even for the researchers who study the difference between “high dream recallers,” who recall dreams regularly, and “low dream recallers,” who rarely do. In January 2013 (work published in the journal Cerebral Cortex), the team led by Perrine Ruby made two observations: “high dream recallers” experience twice as much wakefulness during sleep as “low dream recallers,” and their brains are more reactive to auditory stimuli during both sleep and wakefulness. This increased brain reactivity may promote awakenings during the night, and may thus facilitate memorisation of dreams during brief periods of wakefulness.

In this new study, the research team sought to identify which areas of the brain differentiate high and low dream recallers. They used Positron Emission Tomography (PET) to measure the spontaneous brain activity of 41 volunteers during wakefulness and sleep. The volunteers were classified into two groups: 21 “high dream recallers,” who recalled dreams on an average of 5.2 mornings per week, and 20 “low dream recallers,” who reported an average of two dreams per month. High dream recallers, both while awake and while asleep, showed stronger spontaneous brain activity in the medial prefrontal cortex (mPFC) and in the temporo-parietal junction (TPJ), an area of the brain involved in attention orienting toward external stimuli.


Filed under dreams dreaming neuroimaging sleep memory medial prefrontal cortex psychology neuroscience science

124 notes

Promise of a bonus counter-productive in brains with high dopamine levels
Some people perform better and others worse when promised a high bonus. Brain researcher Esther Aarts of the Donders Institute in Nijmegen has demonstrated for the first time that the amount of dopamine in the brain plays a role in this regard. The journal Psychological Science will publish the results on February 13.
It has been known for some time that not everyone performs better after being promised a bonus. Scientists have published contradictory results regarding the cause. The study by Esther Aarts now shows that the differences can be explained by differences in the level of dopamine in the brain. People with a high level of dopamine in a specific brain region – the striatum – perform worse after being promised a bonus, and people with a low level of dopamine in the same area perform better. Aarts used a PET (Positron Emission Tomography) scanner to examine the amount of dopamine in the brains of subjects. She conducted this research in Berkeley, California (USA), where she worked as a post-doctoral researcher for two years.
Overdose of dopamine
The promise of a bonus provides an additional spurt of the ‘motivation substance’ dopamine in the brain. ‘For people who usually have high levels of dopamine, the promise of a bonus causes a type of dopamine overdose in the striatum’, explains Aarts. ‘Our test subjects were asked to perform a task that required considerable concentration. An overdose of dopamine makes this difficult. People who usually have less dopamine are less likely to have an overdose of dopamine, and they therefore perform better after being promised a bonus.’
Concentration desired
Test subjects performed a computer task that elicited conflicting reactions, therefore requiring considerable concentration: an arrow appears on the screen, pointing either left or right. The word ‘left’ or ‘right’ is written in the middle of the arrow. Subjects were asked to ignore the direction indicated by the arrow and mention only the direction described by the word. For half of the attempts, a bonus of 15 cents was promised for a correct answer. In the other half, the subjects received only 1 cent for each correct answer. People who usually have a high level of dopamine performed better in the low-pay condition than they did in the high-pay condition. The reverse was observed for people with low levels of dopamine: they performed better with high rewards than they did with low rewards.
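For readers curious about the mechanics, the conflict task can be sketched as a small simulation. Only the word-over-arrow rule and the 15-cent/1-cent rewards come from the article; the trial structure, function names, and trial count are illustrative assumptions.

```python
import random

def make_trial(rng):
    """One conflict trial: an arrow pointing left or right, with the
    word 'left' or 'right' written in its centre. The correct response
    is the word, not the arrow."""
    arrow = rng.choice(["left", "right"])
    word = rng.choice(["left", "right"])
    reward = rng.choice([15, 1])  # cents promised for a correct answer
    return {"arrow": arrow, "word": word, "reward": reward,
            "congruent": arrow == word}

def score(trials, responses):
    """Total payout in cents: a response earns the trial's reward
    only if it matches the word."""
    return sum(t["reward"] for t, r in zip(trials, responses)
               if r == t["word"])

rng = random.Random(42)
trials = [make_trial(rng) for _ in range(100)]
perfect = [t["word"] for t in trials]       # always reports the word
distracted = [t["arrow"] for t in trials]   # follows the arrow instead
```

A responder who follows the arrow instead of the word is only correct on congruent trials, which is what makes the task demand sustained focus.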
Flexibility or focus
‘This knowledge could make it possible to apply bonuses more effectively, but it would require observing the standard dopamine levels of people, as well as the nature of the task that they must perform’, reports Aarts. ‘It makes quite a difference whether the task is flexible and creative or whether it requires a great deal of focus. Our research shows how people perform on tasks that require considerable focus’. Given the high cost of PET scans, Aarts is now looking for easier ways of measuring dopamine levels. ‘I hope to be able to relate dopamine levels to scores on questionnaires. In the future, this might eliminate the need for PET scans for determining the quantity of dopamine in the brain’.

Filed under dopamine striatum neuroimaging neuroscience science

366 notes

Understanding the basic biology of bipolar disorder
Scientists know there is a strong genetic component to bipolar disorder, but they have had an extremely difficult time identifying the genes that cause it. So, in an effort to better understand the illness’s genetic causes, researchers at UCLA tried a new approach.
Instead of only using a standard clinical interview to determine whether individuals met the criteria for a clinical diagnosis of bipolar disorder, the researchers combined the results from brain imaging, cognitive testing, and an array of temperament and behavior measures. Using the new method, UCLA investigators — working with collaborators from UC San Francisco, Colombia’s University of Antioquia and the University of Costa Rica — identified about 50 brain and behavioral measures that are both under strong genetic control and associated with bipolar disorder. Their discoveries could be a major step toward identifying the specific genes that contribute to the illness.
The results are published in the Feb. 12 edition of the journal JAMA Psychiatry.
A severe mental illness that affects about 1 to 2 percent of the population, bipolar disorder causes unusual shifts in mood and energy, and it interferes with the ability to carry out everyday tasks. Those with the disorder can experience tremendous highs and extreme lows — to the point of not wanting to get out of bed when they’re feeling down. The genetic causes of bipolar disorder are highly complex and likely involve many different genes, said Carrie Bearden, a senior author of the study and an associate professor of psychiatry and psychology at the UCLA Semel Institute for Neuroscience and Human Behavior.
"The field of psychiatric genetics has long struggled to find an effective approach to begin dissecting the genetic basis of bipolar disorder," Bearden said. "This is an innovative approach to identifying genetically influenced brain and behavioral measures that are more closely tied to the underlying biology of bipolar disorder than the clinical symptoms alone are."
The researchers assessed 738 adults, 181 of whom have severe bipolar disorder. They used high-resolution 3-D images of the brain, questionnaires evaluating temperament and personality traits of individuals diagnosed with bipolar disorder and their non-bipolar relatives, and an extensive battery of cognitive tests assessing long-term memory, attention, inhibitory control and other neurocognitive abilities.
Approximately 50 of these measures showed strong evidence of being influenced by genetics. Particularly interesting was the discovery that the thickness of the gray matter in the brain’s temporal and prefrontal regions — the structures that are critical for language and for higher-order cognitive functions like self-control and problem-solving — was the most promising candidate trait for genetic mapping, based on both its strong genetic basis and its association with the disease.
"These findings are really just the first step in getting us a little closer to the roots of bipolar disorder," Bearden said. "What was really exciting about this project was that we were able to collect the most extensive set of traits associated with bipolar disorder ever assessed within any study sample. These data will be a really valuable resource for the field."
The individuals assessed in this study are members of large families living in Costa Rica’s central valley and Antioquia, Colombia. The families were founded by European and native Amerindian populations about 400 years ago and have a very high incidence of bipolar disorder. The groups were chosen because they have remained fairly isolated since their founding and their genetics are therefore simpler for scientists to study than those of general populations.
The fact that the findings aligned so closely with those of previous, smaller studies in other populations was surprising even to the scientists, given the subjects’ unique genetic background and living environments.
"This suggests that even if the specific genetic variants we identify may be unique to this population, the biological pathways they disrupt are likely to also influence disease risk in other populations," Bearden said.
The researchers’ next step is to use the genomic data they collected from the families — including full genome sequences and gene expression data — to begin identifying the specific genes that contribute to risk for bipolar disorder. The researchers also plan to extend their investigation into the children and teens in these families. They hypothesize that many of the bipolar-related brain and behavioral differences found in adults with bipolar disorder had their origins in adolescent neurodevelopment.

Filed under bipolar disorder mental health neuroimaging gray matter psychology neuroscience science

714 notes

Meditation helps pinpoint neurological differences between two types of love
These findings won’t appear on any Hallmark card, but romantic love tends to activate the same reward areas of the brain as cocaine, research has shown.
Now Yale School of Medicine researchers studying meditators have found that a more selfless variety of love — a deep and genuine wish for the happiness of others without expectation of reward — actually turns off the same reward areas that light up when lovers see each other.
“When we truly, selflessly wish for the well-being of others, we’re not getting that same rush of excitement that comes with, say, a tweet from our romantic love interest, because it’s not about us at all,” said Judson Brewer, adjunct professor of psychiatry at Yale now at the University of Massachusetts.
Brewer and Kathleen Garrison, postdoctoral researcher in Yale’s Department of Psychiatry, report their findings in a paper scheduled to be published online Feb. 12 in the journal Brain and Behavior.
The neurological boundaries between these two types of love become clear in fMRI scans of experienced meditators. The reward centers of the brain that are strongly activated by a lover’s face (or a picture of cocaine) are almost completely turned off when a meditator is instructed to silently repeat sayings such as “May all beings be happy.”
Such mindfulness meditations are a staple of Buddhism and are now commonly practiced in Western stress reduction programs, Brewer notes. The tranquility of this selfless love for others — exemplified by religious figures such as Mother Teresa or the Dalai Lama — is diametrically opposed to the anxiety caused by a lovers’ quarrel or extended separation. And it carries its own rewards.
“The intent of this practice is to specifically foster selfless love — just putting it out there and not looking for or wanting anything in return,” Brewer said. “If you’re wondering where the reward is in being selfless, just reflect on how it feels when you see people out there helping others, or even when you hold the door for somebody the next time you are at Starbucks.”

Filed under meditation loving kindness fMRI reward system neuroimaging psychology neuroscience science

255 notes

How our brain networks: Research reveals white matter ‘scaffold’ of human brain 
For the first time, neuroscientists have systematically identified the white matter “scaffold” of the human brain, the critical communications network that supports brain function.
Their work, published Feb. 11 in the open-access journal Frontiers in Human Neuroscience, has major implications for understanding brain injury and disease. By detailing the connections that have the greatest influence over all other connections, the researchers offer not only a landmark first map of core white matter pathways, but also show which connections may be most vulnerable to damage.
"We coined the term white matter ‘scaffold’ because this network defines the information architecture which supports brain function," said senior author John Darrell Van Horn of the USC Institute for Neuroimaging and Informatics and the Laboratory of Neuro Imaging at USC.
"While all connections in the brain have their importance, there are particular links which are the major players," Van Horn said.
Using MRI data from a large sample of 110 individuals, lead author Andrei Irimia, also of the USC Institute for Neuroimaging and Informatics, and Van Horn systematically simulated the effects of damaging each white matter pathway.
They found that the most important areas of white and gray matter don’t always overlap. Gray matter is the outermost portion of the brain containing the neurons where information is processed and stored. Past research has identified the areas of gray matter that are disproportionately affected by injury.
But the current study shows that the most vulnerable white matter pathways – the core “scaffolding” – are not necessarily just the connections among the most vulnerable areas of gray matter, helping explain why seemingly small brain injuries may have such devastating effects.
"Sometimes people experience a head injury which seems severe but from which they are able to recover. On the other hand, some people have a seemingly small injury which has very serious clinical effects," says Van Horn, associate professor of neurology at the Keck School of Medicine of USC. "This research helps us to better address clinical challenges such as traumatic brain injury and to determine what makes certain white matter pathways particularly vulnerable and important."
The researchers compare their brain imaging analysis to models used for understanding social networks. To get a sense of how the brain works, Irimia and Van Horn did not focus only on the most prominent gray matter nodes – which are akin to the individuals within a social network. Nor did they merely look at how connected those nodes are.
Rather, they also examined the strength of these white matter connections, i.e. which connections seemed to be particularly sensitive or to cause the greatest repercussions across the network when removed. Those connections which created the greatest changes form the network “scaffold.”
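The edge-removal logic the researchers describe can be sketched on a toy graph. This is not their pipeline or data; the adjacency structure, the efficiency metric, and all names below are assumptions for illustration. The idea is simply: delete each connection in turn and measure how much a whole-network communication score drops.

```python
from collections import deque

def build(edges):
    """Undirected adjacency dict from a list of (a, b) connections."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return adj

def bfs_distances(adj, src):
    """Hop counts from src to every reachable node."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def global_efficiency(adj):
    """Mean inverse shortest-path length over all ordered node pairs;
    unreachable pairs contribute 0, so disconnection hurts the score."""
    nodes = list(adj)
    total = 0.0
    for s in nodes:
        d = bfs_distances(adj, s)
        total += sum(1.0 / d[t] for t in nodes if t != s and t in d)
    n = len(nodes)
    return total / (n * (n - 1))

def rank_by_impact(edges):
    """Delete each edge in turn; rank edges by the efficiency drop."""
    nodes = {n for e in edges for n in e}
    base = global_efficiency(build(edges))
    impact = []
    for e in edges:
        adj = build([x for x in edges if x != e])
        for n in nodes:            # keep isolated nodes in the graph
            adj.setdefault(n, set())
        impact.append((base - global_efficiency(adj), e))
    return sorted(impact, reverse=True)

# two tightly connected clusters joined by a single "scaffold" edge
edges = [("A", "B"), ("B", "C"), ("A", "C"),
         ("C", "D"),                      # the bridge
         ("D", "E"), ("E", "F"), ("D", "F")]
ranking = rank_by_impact(edges)
```

On this toy network the single bridge between the two clusters causes by far the largest drop when removed, mirroring the article's point that a network's most critical links need not be its most numerous ones.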
"Just as when you remove the internet connection to your computer you won’t get your email anymore, there are white matter pathways which result in large scale communication failures in the brain when damaged," Van Horn said.
When white matter pathways are damaged, brain areas served by those connections may wither or have their functions taken over by other brain regions, the researchers explain. Irimia and Van Horn’s research on core white matter connections is part of a worldwide scientific effort to map the 100 billion neurons and 1,000 trillion connections in the living human brain, led by the Human Connectome Project and the Laboratory of Neuro Imaging at USC.
Irimia notes that, “these new findings on the brain’s network scaffold help inform clinicians about the neurological impacts of brain diseases such as multiple sclerosis, Alzheimer’s disease, as well as major brain injury. Sports organizations, the military and the US government have considerable interest in understanding brain disorders, and our work contributes to that of other scientists in this exciting era for brain research.”

Filed under white matter TBI brain injury gray matter neuroimaging connectomics neuroscience science

559 notes

How Your Memory Rewrites the Past

Your memory is a wily time traveler, plucking fragments of the present and inserting them into the past, reports a new Northwestern Medicine® study. In terms of accuracy, it’s no video camera.

Rather, the memory rewrites the past with current information, updating your recollections with new experiences. 

Love at first sight, for example, is more likely a trick of your memory than a Hollywood-worthy moment.

“When you think back to when you met your current partner, you may recall this feeling of love and euphoria,” said lead author Donna Jo Bridge, a postdoctoral fellow in medical social sciences at Northwestern University Feinberg School of Medicine. “But you may be projecting your current feelings back to the original encounter with this person.”

The study is published Feb. 5 in the Journal of Neuroscience.

This is the first study to show specifically how memory is faulty, and how it can insert things from the present into memories of the past when those memories are retrieved. The study shows the exact point in time when that incorrectly recalled information gets implanted into an existing memory.

To help us survive, Bridge said, our memories adapt to an ever-changing environment and help us deal with what’s important now.

“Our memory is not like a video camera,” Bridge said. “Your memory reframes and edits events to create a story to fit your current world. It’s built to be current.”

All that editing happens in the hippocampus, the new study found. The hippocampus, in this function, is the memory’s equivalent of a film editor and special effects team.

For the experiment, 17 men and women studied 168 object locations on a computer screen with varied backgrounds such as an underwater ocean scene or an aerial view of Midwest farmland. Next, researchers asked participants to try to place the object in the original location but on a new background screen. Participants would always place the objects in an incorrect location.

For the final part of the study, participants were shown the object in three locations on the original screen and asked to choose the correct location. Their choices were: the location they originally saw the object, the location they placed it in part 2 or a brand new location.
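The three-alternative final test above lends itself to a tiny scoring sketch (the three choice categories come from the article; the function name and tuple representation of locations are hypothetical):

```python
def classify_choice(original, placed, chosen):
    """Label a final three-alternative response from the memory task:
    'original' = the location studied in part 1,
    'updated'  = the (incorrect) location placed in part 2,
    'new'      = the brand-new foil location.
    The study reports that part-2 placements were always incorrect,
    so 'original' and 'updated' are distinct locations here."""
    if chosen == original:
        return "original"
    if chosen == placed:
        return "updated"
    return "new"

# a participant who picks their part-2 placement shows an updated memory
label = classify_choice(original=(120, 80), placed=(240, 95), chosen=(240, 95))
```

Under this labelling, the article's finding is that responses overwhelmingly fall into the "updated" category.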

“People always chose the location they picked in part 2,” Bridge said. “This shows their original memory of the location has changed to reflect the location they recalled on the new background screen. Their memory has updated the information by inserting the new information into the old memory.”

Participants took the test in an MRI scanner so scientists could observe their brain activity. Scientists also tracked participants’ eye movements, which sometimes were more revealing about the content of their memories – and if there was conflict in their choices — than the actual location they ended up choosing.   

The notion of a perfect memory is a myth, said Joel Voss, senior author of the paper and an assistant professor of medical social sciences and of neurology at Feinberg.

“Everyone likes to think of memory as this thing that lets us vividly remember our childhoods or what we did last week,” Voss said. “But memory is designed to help us make good decisions in the moment and, therefore, memory has to stay up-to-date. The information that is relevant right now can overwrite what was there to begin with.”

Bridge noted the study’s implications for eyewitness court testimony. “Our memory is built to change, not regurgitate facts, so we are not very reliable witnesses,” she said.

A caveat of the research is that it was done in a controlled experimental setting and shows how memories changed within the experiment. “Although this occurred in a laboratory setting, it’s reasonable to think the memory behaves like this in the real world,” Bridge said.

(Source: northwestern.edu)

Filed under memory hippocampus brain activity neuroimaging psychology neuroscience science
