Neuroscience

Articles and news from the latest research reports.

This Is Your Brain on Snacks—Brain Stimulation Affects Craving and Consumption

Magnetic stimulation of a brain area involved in “executive function” affects cravings for and consumption of calorie-dense snack foods, reports a study in the September issue of Psychosomatic Medicine: Journal of Biobehavioral Medicine, the official journal of the American Psychosomatic Society. The journal is published by Lippincott Williams & Wilkins, a part of Wolters Kluwer Health.

After stimulation of the dorsolateral prefrontal cortex (DLPFC), young women experience increased cravings for high-calorie snacks—and eat more of those foods when given the opportunity, according to the study by researchers at the University of Waterloo in Ontario, Canada. “These findings shed light on the role of the DLPFC in food cravings (specifically reward anticipation), the consumption of appealing high caloric foods, and the relation between self-control and food consumption,” the researchers write. The senior author was Peter Hall, PhD.

Brain Stimulation Affects Cravings and Consumption for ‘Appetitive’ Snacks

The study included 21 healthy young women, selected because they reported strong and frequent cravings for chocolate and potato chips. Such “appetitive,” calorie-dense snack foods are often implicated in the development of obesity.

The women were shown pictures of these foods to stimulate cravings. The researchers then applied a type of magnetic stimulation, called continuous theta-burst stimulation, to decrease activity in the DLPFC. Previous studies have suggested that DLPFC activity plays a role in regulating food cravings.
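
For context, continuous theta-burst stimulation has a characteristic timing signature in the TMS literature: bursts of three pulses at 50 Hz, repeated at 5 Hz, typically 600 pulses in about 40 seconds. The sketch below generates that pulse train; these are standard published parameters, not details reported for this particular study.

```python
# Timing of a standard continuous theta-burst (cTBS) protocol from the
# TMS literature; the study's exact stimulation parameters may differ.
import numpy as np

burst_rate = 5.0        # bursts per second (the "theta" rhythm)
pulses_per_burst = 3
intra_burst_hz = 50.0   # pulse rate within each burst
n_pulses = 600          # typical total for cTBS

n_bursts = n_pulses // pulses_per_burst
burst_starts = np.arange(n_bursts) / burst_rate
offsets = np.arange(pulses_per_burst) / intra_burst_hz
pulse_times = (burst_starts[:, None] + offsets[None, :]).ravel()

print(f"{pulse_times.size} pulses over {pulse_times[-1]:.1f} s")  # 600 pulses, ~40 s
```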

After theta-burst stimulation, the women reported stronger food cravings—specifically for “appetitive” milk chocolate and potato chips. During a subsequent “taste test,” they consumed more of these foods, rather than alternative, less-appetitive foods (dark chocolate and soda crackers).

Stimulation to weaken DLPFC activity was also associated with lower performance on a test of inhibitory control strength (the Stroop test). Decreased DLPFC activity appeared to be associated with increased “reward sensitivity”—it made the participants “more sensitive to the rewarding properties of palatable high caloric foods,” the researchers write.

Weak Executive Function May Contribute to Obesity Risk

The results highlight the role of executive function in governing “dietary self-restraint,” the researchers believe. Executive function, which involves the DLPFC, refers to a set of cognitive functions that enable “top-down” control of action, emotion, and thought.

At the “basic neurobiological level,” the study provides direct evidence that the DLPFC is involved in one specific aspect of food cravings: reward anticipation. People with weak executive function may lack the dietary self-control necessary to regulate snack food consumption in “the modern obesogenic environment.” Faced with constant cues and opportunities to consume energy-dense foods, such individuals may be more likely to become overweight or obese.

The results suggest that interventions aimed at enhancing or preserving DLPFC function may help to prevent obesity and related diseases. In conditions such as type 2 diabetes, where healthy dietary habits are essential for effective disease control, “Interventions focused on enhancing DLPFC activity, through aerobic exercise or other means, may result in increased dietary self-control and subsequently improve disease management,” Dr. Hall and coauthors add.

(Source: newswise.com)

Filed under food consumption prefrontal cortex executive function brain stimulation self-control psychology neuroscience science

Researcher Develops and Proves Effectiveness of New Drug for Spinal Muscular Atrophy

According to recent studies, approximately one out of every 40 individuals in the United States carries the gene mutation responsible for spinal muscular atrophy (SMA), a neurodegenerative disease that causes muscles to weaken over time. Now, researchers at the University of Missouri have developed a new compound that is highly effective in animal models of the disease. In April, a patent was filed covering the compound’s use in SMA.

“The strategy our lab is using to fight SMA is to ‘repress the repressor,’” said Chris Lorson, a researcher in the Bond Life Sciences Center and professor in the MU Department of Veterinary Pathobiology. “It’s a lot like reading a book, but in this case, the final chapter of the book—or the final exon of the genetic sequence—is omitted. The exciting part is that the important chapter is still there—and can be tricked into being read correctly, if you know how. The new SMA therapeutic compound, an antisense oligonucleotide, repairs expression of the gene affected by the disease.”

In individuals affected by SMA, the survival motor neuron 1 (SMN1) gene is mutated and cannot produce enough of a key protein that motor neurons need to function. Muscles in the lower extremities are usually affected first, followed by muscles in the upper extremities, including areas around the neck and spine.

Fortunately, humans have a nearly identical copy gene called SMN2. Lorson’s drug targets that specific genetic sequence and corrects the “editing” (splicing) of the SMN2 gene, allowing SMN2 to compensate for the defective SMN1 gene and produce the protein that motor neurons need to function.

Lorson’s research found that the earlier the treatment is administered in mice with SMA, the better the outcome. In mouse studies, the drug improved the survival rate by 500 to 700 percent, with a 90 percent improvement demonstrated in severe SMA cases, according to the study.

Although there is currently no cure for SMA, the National Institutes of Health (NIH) has listed SMA as the neurological disease closest to a cure, due in part to effective compounds like the one developed in Lorson’s lab.

Filed under spinal muscular atrophy spinal motor neuron gene mutation SMN1 neuroscience science

You Don’t Walk Alone

A breakthrough in detecting the early onset of refractory epilepsy in children could enable timely, effective treatment using non-pharmacological therapies.

An estimated 65 million people around the world suffer from epilepsy, a condition of the brain that may trigger an uncontrollable seizure at any time, often for no known reason. A seizure is a disruption of the electrical communication between neurons, and someone is said to have epilepsy if they experience two or more unprovoked seizures separated by at least 24 hours.

Epilepsy is the most common chronic disease in pediatric neurology, with about 0.5–1% of children developing epilepsy during their lifetime. Roughly 30–40% of children with epilepsy go on to develop refractory epilepsy, a form that cannot be managed with antiepileptic drugs (AEDs). Regardless of etiology, children with refractory epilepsy are invariably exposed to a variety of physical, psychological and social morbidities. Patients whose seizures are difficult to control could benefit from non-pharmacological therapies, including surgery, deep brain stimulation and ketogenic diets. Early identification of patients whose seizures are refractory to AEDs would therefore allow them to receive alternative therapies at an appropriate time.

Although idiopathic etiology is a significant predictor of a lower risk of refractory epilepsy, a subset of patients with idiopathic epilepsy may still prove refractory to medical treatment.

Using a new electroencephalography (EEG) analytical method, a team of medical doctors and scientists in Taiwan has developed a tool that detects EEG features distinguishing refractory from well-controlled idiopathic epilepsy in children.

The team developed an efficient, automated and quantitative approach towards the early prediction of refractory idiopathic epilepsy based on EEG classification analysis. EEG analysis is widely employed to investigate brain disorders and to study brain electrical activity. In the study, a set of artifact-free EEG segments was acquired from the EEG recordings of patients belonging to two classes of epilepsy: well-controlled and refractory. To search for significantly discriminative EEG features and to reduce computational costs, a statistical approach involving global parametric features was adopted across EEG channels as well as over time. A gain ratio-based feature selection was then performed.

The study found significantly higher values of two of these features, the average decorrelation time (DecorrTime) and the average relative delta-band power (RelPowDelta), both averaged across channels and over time, in the well-controlled group than in the refractory group. This suggests that refractory patients have a higher risk of seizure attacks than well-controlled patients.

The main contributions of this study are as follows:

  1. the generalisation of 10 significant EEG features into a concept for the recognition and identification of potential refractory epilepsy in patients with idiopathic epilepsy, based on EEG classification analysis;
  2. the development of a diagnostic tool based conceptually on these 10 EEG features, using a support vector machine (SVM) classification model to discriminate between well-controlled idiopathic epilepsy and refractory idiopathic epilepsy, which will facilitate subsequent expert visual EEG interpretation (a sketch of such a pipeline follows below).
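
As an illustration only, here is a minimal Python sketch of this kind of pipeline, not the authors’ code: segment-level global EEG features are ranked, the top features are kept, and an SVM classifies segments. Scikit-learn has no gain-ratio criterion, so mutual information stands in as an analogous information-theoretic ranking; the data, feature count, and labels are simulated placeholders.

```python
# Feature ranking + SVM classification, sketched with simulated data.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# One row per artifact-free EEG segment, one column per global parametric
# feature (e.g., decorrelation time, relative band powers) already
# summarized across channels and over time.
X = rng.normal(size=(120, 40))                       # 120 segments x 40 features
y = (X[:, 0] + X[:, 1] + rng.normal(size=120) > 0)   # 0 = well-controlled,
y = y.astype(int)                                    # 1 = refractory (toy labels)

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=10),  # keep the 10 best-ranked features
    SVC(kernel="rbf", C=1.0),
)
print(cross_val_score(clf, X, y, cv=5).mean())       # cross-validated accuracy
```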

Further research with more diverse samples (both pediatric and adult participants) is encouraged to establish the tool’s reliability and generalisation. This study was supported in part by a grant from Kaohsiung Medical University Hospital and by grants from the Ministry of Science and Technology, Taiwan.

The paper can be found in an upcoming issue of the International Journal of Neural Systems (IJNS).

Filed under epilepsy EEG epileptic seizures neuroscience science

Network measures predict neuropsychological outcome after brain injury

Cognitive neuroscience research has shown that certain brain regions are associated with specific cognitive abilities, such as language, naming, and decision-making.

How and where these specific abilities are integrated in the brain to support complex cognition is still under investigation. However, researchers at the University of Iowa and Washington University in St. Louis, Missouri, believe that several hub regions may be especially important for the brain to function as an integrated network.

In research published online Sept. 15 in the Early Edition of the Proceedings of the National Academy of Sciences, scientists studied neurological patients with focal brain damage, and found that damage to six hub locations—identified in a model developed at Washington University using resting state fMRI, functional connectivity analyses, and graph theory—produced much greater cognitive impairment than damage to other locations.

Doctors have long observed that despite having similar locations or extent of brain injury, patients often present with wide-ranging degrees of impairment and exhibit different recovery trajectories. A better understanding of brain networks and hubs may improve the understanding of outcomes of brain injuries (for example, stroke, resection, or trauma) and help inform prognosis and rehabilitation efforts.

“We were able to identify a set of brain hubs and show that damage to those locations unexpectedly causes widespread cognitive impairments,” says David Warren, cognitive neuroscientist at the University of Iowa and lead study author. “We hope that this framework will help neurologists with diagnosis and prognosis, and neurosurgeons with surgical planning.”

Two contrasting views of brain hubs exist. One view focuses on the sheer number of connections between brain regions, with those regions showing the most connections considered hubs.

Warren and his colleagues contend that the number of connections a given region makes may not reflect the importance of a region to network function because it can be strongly influenced by network size. Instead, their framework defines hubs as brain regions that show correlated activity with multiple brain systems (rather than regions). The authors predicted that because hubs should be critical for brain function and complex cognition, damage to true hubs should produce widespread cognitive impairment.
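
The difference between the two hub definitions can be made concrete. The participation coefficient computed in the sketch below is a standard graph-theoretic measure of how evenly a node’s connections spread across systems; it is offered as a plausible stand-in for the authors’ system-level definition, not their exact method, and the toy network and system labels are invented.

```python
# Degree hubs vs. system-spanning hubs on a toy network.
import numpy as np

def participation_coefficient(adj, systems):
    """P_i = 1 - sum over systems s of (k_is / k_i)^2."""
    adj = np.asarray(adj, dtype=float)
    k = adj.sum(axis=1)                         # total degree of each node
    p = np.ones_like(k)
    for s in np.unique(systems):
        k_s = adj[:, systems == s].sum(axis=1)  # edges into system s
        p -= np.divide(k_s, k, out=np.zeros_like(k), where=k > 0) ** 2
    return p

systems = np.array([0, 0, 1, 1, 2, 2])   # 6 regions in 3 assumed systems
adj = np.array([
    [0, 1, 1, 0, 1, 0],                  # region 0 touches all three systems
    [1, 0, 0, 0, 0, 0],
    [1, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 0],
    [1, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 1, 0],
])
print("degree:       ", adj.sum(axis=1))
print("participation:", participation_coefficient(adj, systems).round(2))
```

In this toy network, regions with similar connection counts can have very different participation values, which is exactly why counting connections alone can miss (or mislabel) the system-spanning hubs the authors care about.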

This study evaluated long-term cognitive and behavioral data in 30 patients in the Iowa Neurological Patient Registry—19 with focal damage to one of the authors’ six target hub locations and 11 with damage to two control locations that fit the alternative hub definition.

On average, patients with lesions to target hubs had significant impairment in nine major cognitive domains—orientation/attention, perception, memory, language skills, motor performance, concept formation/reasoning, executive functions, emotional functions, and adaptive functions. In contrast, the group with lesions to control hubs was significantly impaired in just three of the nine domains (executive functions, emotional functions, and adaptive functions).

Additionally, the target group had significantly greater cognitive deficits than the control group in seven of nine domains (all except perception and emotional functions), again showing the widespread cognitive effects of target hub lesions.

“With a grant from the McDonnell Foundation, we’re planning to follow up by exploring the effects of damage to additional brain hubs, examining how damage to hubs alters brain activation, and studying neurosurgery patients prospectively before and after their surgeries,” says senior study author Daniel Tranel, professor of neurology in the UI Carver College of Medicine and psychology in the College of Liberal Arts and Sciences. “We think that this work could have a tremendous influence on clinical practice.”

(Source: now.uiowa.edu)

Filed under brain injury default mode network brain function cognitive impairment neuroscience science

Neuroscientists identify key role of language gene

Neuroscientists have found that a gene mutation that arose more than half a million years ago may be key to humans’ unique ability to produce and understand speech.

Researchers from MIT and several European universities have shown that the human version of a gene called Foxp2 makes it easier to transform new experiences into routine procedures. When they engineered mice to express humanized Foxp2, the mice learned to run a maze much more quickly than normal mice.

The findings suggest that Foxp2 may help humans with a key component of learning language — transforming experiences, such as hearing the word “glass” when we are shown a glass of water, into a nearly automatic association of that word with objects that look and function like glasses, says Ann Graybiel, an MIT Institute Professor, member of MIT’s McGovern Institute for Brain Research, and a senior author of the study.

“This really is an important brick in the wall saying that the form of the gene that allowed us to speak may have something to do with a special kind of learning, which takes us from having to make conscious associations in order to act to a nearly automatic-pilot way of acting based on the cues around us,” Graybiel says.

Wolfgang Enard, a professor of anthropology and human genetics at Ludwig-Maximilians University in Germany, is also a senior author of the study, which appears in the Proceedings of the National Academy of Sciences this week. The paper’s lead authors are Christiane Schreiweis, a former visiting graduate student at MIT, and Ulrich Bornschein of the Max Planck Institute for Evolutionary Anthropology in Germany.

All animal species communicate with each other, but humans have a unique ability to generate and comprehend language. Foxp2 is one of several genes that scientists believe may have contributed to the development of these linguistic skills. The gene was first identified in a group of family members who had severe difficulties in speaking and understanding speech, and who were found to carry a mutated version of the Foxp2 gene.

In 2009, Svante Pääbo, director of the Max Planck Institute for Evolutionary Anthropology, and his team engineered mice to express the human form of the Foxp2 gene, which encodes a protein that differs from the mouse version by only two amino acids. His team found that these mice had longer dendrites — the slender extensions that neurons use to communicate with each other — in the striatum, a part of the brain implicated in habit formation. They were also better at forming new synapses, or connections between neurons.

Pääbo, who is also an author of the new PNAS paper, and Enard enlisted Graybiel, an expert in the striatum, to help study the behavioral effects of replacing Foxp2. They found that the mice with humanized Foxp2 were better at learning to run a T-shaped maze, in which the mice must decide whether to turn left or right at a T-shaped junction, based on the texture of the maze floor, to earn a food reward.

The first phase of this type of learning requires using declarative memory, or memory for events and places. Over time, these memory cues become embedded as habits and are encoded through procedural memory — the type of memory necessary for routine tasks, such as driving to work every day or hitting a tennis forehand after thousands of practice strokes.

Using another type of maze called a cross-maze, Schreiweis and her MIT colleagues were able to test the mice’s ability in each type of memory alone, as well as the interaction of the two types. They found that the mice with humanized Foxp2 performed the same as normal mice when just one type of memory was needed, but their performance was superior when the learning task required them to convert declarative memories into habitual routines. The key finding was therefore that the humanized Foxp2 gene makes it easier to turn mindful actions into behavioral routines.

The protein produced by Foxp2 is a transcription factor, meaning that it turns other genes on and off. In this study, the researchers found that Foxp2 appears to turn on genes involved in the regulation of synaptic connections between neurons. They also found enhanced dopamine activity in a part of the striatum that is involved in forming procedures. In addition, the neurons of some striatal regions could be turned off for longer periods in response to prolonged activation — a phenomenon known as long-term depression, which is necessary for learning new tasks and forming memories.

Together, these changes help to “tune” the brain differently to adapt it to speech and language acquisition, the researchers believe. They are now further investigating how Foxp2 may interact with other genes to produce its effects on learning and language.

This study “provides new ways to think about the evolution of Foxp2 function in the brain,” says Genevieve Konopka, an assistant professor of neuroscience at the University of Texas Southwestern Medical Center who was not involved in the research. “It suggests that human Foxp2 facilitates learning that has been conducive for the emergence of speech and language in humans. The observed differences in dopamine levels and long-term depression in a region-specific manner are also striking and begin to provide mechanistic details of how the molecular evolution of one gene might lead to alterations in behavior.”

Filed under Foxp2 gene mutation language language acquisition speech learning neuroscience science

Study First to Use Brain Scans to Forecast Early Reading Difficulties

UC San Francisco researchers have used brain scans to predict how young children learn to read, giving clinicians a possible tool to spot children with dyslexia and other reading difficulties before they experience reading challenges.

In the United States, children usually learn to read for the first time in kindergarten and become proficient readers by third grade, according to the authors. In the study, researchers examined brain scans of 38 kindergarteners as they were learning to read formally at school and tracked their white matter development until third grade. The brain’s white matter is essential for perceiving, thinking and learning.

The researchers found that the developmental course of the children’s white matter volume predicted the kindergarteners’ abilities to read.

“We show that white matter development during a critical period in a child’s life, when they start school and learn to read for the very first time, predicts how well the child ends up reading,” said Fumiko Hoeft, MD, PhD, senior author and an associate professor of child and adolescent psychiatry at UCSF, and member of the UCSF Dyslexia Center.

The research is published online in Psychological Science.

Doctors commonly use behavioral measures of reading readiness to assess ability. Other measures, such as cognitive ability (i.e., IQ), early linguistic skills, environmental factors such as socio-economic status, and whether a family member has reading problems or dyslexia, are all common early factors used to assess the risk of developing reading difficulties.

“What was intriguing in this study was that brain development in regions important to reading predicted above and beyond all of these measures,” said Hoeft.

The researchers removed the effects of these commonly used assessments in their statistical analyses in order to assess how directly white matter predicted future reading ability. They found that left hemisphere white matter in the temporo-parietal region just behind and above the left ear — thought to be important for language, reading and speech — was highly predictive of reading acquisition beyond the effects of genetic predisposition, cognitive abilities, and environment at the outset of kindergarten. Adding the brain measures made prediction of reading difficulties 60 percent more accurate than traditional assessments alone.
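
The “above and beyond” logic is essentially hierarchical regression: fit a model on the conventional predictors, then ask how much adding the brain measure improves out-of-sample prediction. Below is a minimal sketch with simulated stand-in data; the variable names, effect sizes, and sample are illustrative assumptions, not the study’s data.

```python
# Incremental prediction: covariates-only model vs. covariates + brain measure.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 38                                    # kindergarteners, as in the study
covariates = rng.normal(size=(n, 3))      # stand-ins for IQ, early language, SES
white_matter = rng.normal(size=(n, 1))    # stand-in for white matter development
reading = (covariates @ np.array([0.3, 0.2, 0.1])
           + 0.8 * white_matter[:, 0]
           + rng.normal(scale=0.5, size=n))

base = cross_val_score(LinearRegression(), covariates, reading,
                       cv=5, scoring="r2").mean()
full = cross_val_score(LinearRegression(),
                       np.hstack([covariates, white_matter]), reading,
                       cv=5, scoring="r2").mean()
print(f"covariates only   R^2: {base:.2f}")
print(f"plus white matter R^2: {full:.2f}")   # the gain is the incremental value
```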

“Early identification and interventions are extremely important in children with dyslexia as well as most neurodevelopmental disorders,” said Hoeft. “Accumulation of research evidence such as ours may one day help us identify kids who might be at risk for dyslexia, rather than waiting for children to become poor readers and experience failure.”

According to the National Institute of Child Health and Human Development, as many as 15 percent of Americans have major trouble reading.

“Examining developmental changes in the brain over a critical period of reading appears to be a unique sensitive measure of variation and may add insight to our understanding of reading development in ways that brain data from one time point, and behavioral and environmental measures, cannot,” said Chelsea Myers, BS, lead author and lab manager in UCSF’s Laboratory for Educational NeuroScience. “The hope is that understanding each child’s neurocognitive profiles will help educators provide targeted and personalized education and intervention, particularly in those with special needs.”

(Source: ucsf.edu)

Filed under reading difficulties dyslexia white matter brain development language psychology neuroscience science

(Image caption: These figures show lagged maturation of connections in ADHD between the default mode network, involved in internally-directed thought (i.e., daydreaming) and shown on the left of each figure, and two brain networks involved in externally-focused attention, shown on the right of each figure. The width of each arc represents the number of lagged connections between two regions within each network. Connections that normally increase with age and that are hypoconnected in ADHD are shown in blue; connections that normally decrease with age and that are hyperconnected in ADHD are shown in red.)

Slow to mature, quick to distract: ADHD brain study finds slower development of key connections

A peek inside the brains of more than 750 children and teens reveals a key difference in brain architecture between those with attention deficit hyperactivity disorder and those without.

Kids and teens with ADHD, a new study finds, lag behind others of the same age in how quickly their brains form connections within, and between, key brain networks.

The result: less-mature connections between a brain network that controls internally-directed thought (such as daydreaming) and networks that allow a person to focus on externally-directed tasks. That lag in connection development may help explain why people with ADHD get easily distracted or struggle to stay focused.

What’s more, the new findings, and the methods used to make them, may one day allow doctors to use brain scans to diagnose ADHD — and track how well someone responds to treatment. This kind of neuroimaging “biomarker” doesn’t yet exist for ADHD, or any psychiatric condition for that matter.

The new findings come from a team in the University of Michigan Medical School’s Department of Psychiatry. They used highly advanced computing techniques to analyze a large pool of detailed brain scans that were publicly shared for scientists to study. Their results are published in the Proceedings of the National Academy of Sciences.

Lead author Chandra Sripada, M.D., Ph.D., and colleagues looked at the brain scans of 275 kids and teens with ADHD, and 481 others without it, using “connectomic” methods that can map interconnectivity between networks in the brain.

The scans, made using functional magnetic resonance imaging (fMRI) scanners, show brain activity during a resting state. This allows researchers to see how a number of different brain networks, each specialized for certain types of functions, “talk” within and amongst themselves.

The researchers found lags in development of connection within the internally-focused network, called the default mode network or DMN, and in development of connections between DMN and two networks that process externally-focused tasks, often called task-positive networks, or TPNs. They could even see that the lags in connection development with the two task-related networks — the frontoparietal and ventral attention networks — were located primarily in two specific areas of the brain.

The new findings mesh well with what other researchers have found by examining the physical structure of the brains of people with and without ADHD in other ways.

Such research has already shown alterations in regions within DMN and TPNs. So, the new findings build on that understanding and add to it. 

The findings are also relevant to thinking about the longitudinal course of ADHD from childhood to adulthood. For instance, some children and teens “grow out” of the disorder, while for others the disorder persists throughout adulthood. Future studies of brain network maturation in ADHD could shed light on the neural basis for this difference.

“We and others are interested in understanding the neural mechanisms of ADHD in hopes that we can contribute to better diagnosis and treatment,” says Sripada, an assistant professor and psychiatrist who holds a joint appointment in the U-M Philosophy department and is a member of the U-M Center for Computational Medicine and Bioinformatics. “But without the database of fMRI images, and the spirit of collaboration that allowed them to be compiled and shared, we would never have reached this point.”  

Sripada explains that in the last decade, functional medical imaging has revealed that the human brain is functionally organized into large-scale connectivity networks. These networks, and the connections between them, mature throughout early childhood all the way to young adulthood. “It is particularly noteworthy that the networks we found to have lagging maturation in ADHD are linked to the very behaviors that are the symptoms of ADHD,” he says. 

Studying the vast array of connections in the brain, a field called connectomics, requires scientists to be able to parse through not just the one-to-one communications between two specific brain regions, but the patterns of communication among thousands of nodes within the brain. This requires major computing power and access to massive amounts of data – which makes the open sharing of fMRI images so important.
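
As a concrete illustration of the basic building block of such analyses, the sketch below computes a resting-state functional connectivity matrix from region-by-time data via pairwise correlation, with the Fisher z-transform commonly applied before group comparisons; the data here are simulated placeholders, not the shared fMRI database.

```python
# Functional connectivity: correlate every region's time series with every other.
import numpy as np

rng = np.random.default_rng(2)
n_regions, n_timepoints = 100, 300
ts = rng.normal(size=(n_regions, n_timepoints))  # one row per brain region

fc = np.corrcoef(ts)          # the "who talks with whom" matrix
np.fill_diagonal(fc, 0)       # ignore trivial self-correlations
fc_z = np.arctanh(fc)         # Fisher z-transform for group statistics

print(fc_z.shape)             # (100, 100) connectivity matrix
```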

“The results of this study set the stage for the next phase of this research, which is to examine individual components of the networks that have the maturational lag,” he says. “This study provides a coarse-grained understanding, and now we want to examine this phenomenon in a more fine-grained way that might lead us to a true biological marker, or neuromarker, for ADHD.”

Sripada also notes that connectomics could be used to examine other disorders with roots in brain connectivity – including autism, which some evidence has suggested stems from over-maturation of some brain networks, and schizophrenia, which may arise from abnormal connections. Pooling more fMRI data from people with these conditions, and depression, anxiety, bipolar disorder and more could boost connectomics studies in those fields.

Filed under ADHD default mode network connectomics fMRI brain activity neuroscience science

EEG Study Findings Reveal How Fear is Processed in the Brain

An estimated 8% of Americans will suffer from post-traumatic stress disorder (PTSD) at some point during their lifetime. Brought on by an overwhelming or stressful event or events, PTSD is the result of altered chemistry and physiology of the brain. Understanding how threat is processed in a normal brain versus one altered by PTSD is essential to developing effective interventions.

New research from the Center for BrainHealth at The University of Texas at Dallas, published online today in Brain and Cognition, illustrates how fear arises in the brain when individuals are exposed to threatening images. This novel study is the first to separate emotion from threat by controlling for the dimension of arousal (the intensity of the emotional reaction provoked, whether positive or negative, in response to stimuli). Building on previous animal and human research, the study identifies an electrophysiological marker for threat in the brain.

“We are trying to find where thought exists in the mind,” explained John Hart, Jr., M.D., Medical Science Director at the Center for BrainHealth. “We know that groups of neurons firing on and off create a frequency and pattern that tell other areas of the brain what to do. By identifying these rhythms, we can correlate them with a cognitive unit such as fear.”

Utilizing electroencephalography (EEG), Dr. Hart’s research team identified theta and beta wave activity that signifies the brain’s reaction to visually threatening images. 

“We have known for a long time that the brain prioritizes threatening information over other cognitive processes,” explained Bambi DeLaRosa, study lead author. “These findings show us how this happens. Theta wave activity starts in the back of the brain, in its fear center – the amygdala – and then interacts with the brain’s memory center – the hippocampus – before traveling to the frontal lobe where thought processing areas are engaged. At the same time, beta wave activity indicates that the motor cortex is revving up in case the feet need to move to avoid the perceived threat.”

For the study, 26 adults (19 female, 7 male), ages 19–30, were shown 224 randomized images that were either unidentifiably scrambled or real pictures. Real pictures were separated into two categories: threatening (weapons, combat, nature or animals) and non-threatening (pleasant situations, food, nature or animals).

While wearing an EEG cap, participants were asked to push a button with their right index finger for real items and another button with their right middle finger for nonreal/scrambled items. Shorter response times were recorded for scrambled images than for real images. There was no difference in reaction time for threatening versus non-threatening images.

EEG results revealed that threatening images evoked an early increase in theta activity in the occipital lobe (the area in the brain where visual information is processed), followed by a later increase in theta power in the frontal lobe (where higher mental functions such as thinking, decision-making, and planning occur). A left lateralized desynchronization of the beta band, the wave pattern associated with motor behavior (like the impulse to run), also consistently appeared in the threatening condition.
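
Band-limited effects like these are typically quantified by estimating an EEG power spectrum and integrating it over the band of interest. Below is a minimal sketch using Welch’s method on a synthetic signal; the sampling rate, band edges, and signal are conventional assumptions, not the study’s recordings or analysis code.

```python
# Theta and beta band power from a (synthetic) EEG channel via Welch's method.
import numpy as np
from scipy.signal import welch

fs = 256                                    # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(3)
eeg = (np.sin(2 * np.pi * 6 * t)            # 6 Hz theta component
       + 0.5 * np.sin(2 * np.pi * 20 * t)   # 20 Hz beta component
       + 0.3 * rng.normal(size=t.size))     # background noise

freqs, psd = welch(eeg, fs=fs, nperseg=fs)  # power spectral density

def band_power(freqs, psd, lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])  # approximate integral

print("theta (4-8 Hz): ", band_power(freqs, psd, 4, 8))
print("beta (13-30 Hz):", band_power(freqs, psd, 13, 30))
```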

This study will serve as a foundation for future work that will explore normal versus abnormal fear associated with an object in other atypical populations including individuals with PTSD.

Filed under fear PTSD emotions EEG brainwaves amygdala motor cortex hippocampus neuroscience science

Brain Development in Schizophrenia Strays from the Normal Path

Schizophrenia is generally considered to be a disorder of brain development and it shares many risk factors, both genetic and environmental, with other neurodevelopmental disorders such as autism and intellectual disability.

The normal path for brain development is determined by the combined effects of a complex network of genes and a wide range of environmental factors.

However, longitudinal brain imaging studies in both healthy and patient populations are required in order to map the disturbances in brain structures as they emerge, i.e., the disturbed trajectories of brain development.

A new study by an international, collaborative group of researchers has measured neurodevelopment in schizophrenia, by studying brain development during childhood and adolescence in people with and without this disorder. With access to new statistical approaches and long-term follow-up with participants, in some cases over more than a decade, the researchers were able to describe brain development patterns associated with schizophrenia.

"Specifically, this paper shows that parts of the brain’s cortex develop differently in people with schizophrenia," said first author Dr. Aaron F. Alexander-Bloch, from the National Institute of Mental Health.

"The mapping of the path that the brain follows in deviating from normal development provides important clues to the underlying causes of the disorder," said Dr. John Krystal, Editor of Biological Psychiatry.

The findings were derived by investigating the trajectory of cortical thickness growth curves in 106 patients with childhood-onset schizophrenia and a comparison group of 102 healthy volunteers.

Each participant, ranging in age from 7 to 32 years, had repeated imaging scans over the course of several years. Then, using over 80,000 vertices across the cortex, the researchers modeled the effect of schizophrenia on the growth curve of cortical thickness.
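
Conceptually, this means fitting a smooth age trajectory of cortical thickness for each group at each vertex and testing where the curves diverge. The toy sketch below does this for a single simulated vertex with quadratic fits; the group effect, noise level, and model are invented stand-ins for the study’s much richer longitudinal analysis.

```python
# Group growth curves of cortical thickness at one (simulated) vertex.
import numpy as np

rng = np.random.default_rng(4)
age = rng.uniform(7, 32, size=208)            # 106 patients + 102 controls
group = np.r_[np.ones(106), np.zeros(102)]    # 1 = childhood-onset schizophrenia

# Simulated thickness: normal thinning with age, steeper in patients.
thickness = 4.0 - 0.03 * age - 0.015 * age * group + rng.normal(0, 0.1, 208)

# Quadratic growth curve per group (np.polyfit returns highest degree first).
curve_scz = np.polyfit(age[group == 1], thickness[group == 1], deg=2)
curve_ctl = np.polyfit(age[group == 0], thickness[group == 0], deg=2)

grid = np.linspace(7, 32, 6)
diff = np.polyval(curve_scz, grid) - np.polyval(curve_ctl, grid)
print(np.round(diff, 3))   # trajectories diverge as age increases
```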

This revealed differences that occur within a specific group of highly-connected brain regions that mature in synchrony during typical development, but follow altered trajectories of growth in schizophrenia.

"These findings show a relationship between the hypothesis that schizophrenia is a neurodevelopmental disorder and the longstanding hypothesis – first articulated by the German anatomist Karl Wernicke in the late 19th century – that it is a disease of altered connectivity between regions of the brain," added Alexander-Bloch.

This theoretical consistency is important, as it allows researchers to better focus future studies of brain connectivity in schizophrenia, by targeting the brain regions known to be affected.

Filed under schizophrenia brain development neuroimaging cortical thickness neuroscience science

(Image caption: Shown are fMRI scans across all subjects in the study. The yellow and red areas in Section A represent parts of the brain that are activated while subjects are forming “gist memories” of pictures viewed. Section B represents areas of increased activation, shown in yellow and red, as detailed memories are being formed. Credit: Image courtesy of Jagust Lab)

Researchers find neural compensation in people with Alzheimer’s-related protein

The human brain is capable of a neural workaround that compensates for the buildup of beta-amyloid, a destructive protein associated with Alzheimer’s disease, according to a new study led by UC Berkeley researchers.

The findings, published today (Sunday, Sept. 14) in the journal Nature Neuroscience, could help explain how some older adults with beta-amyloid deposits in their brain retain normal cognitive function while others develop dementia.

“This study provides evidence that there is plasticity or compensation ability in the aging brain that appears to be beneficial, even in the face of beta-amyloid accumulation,” said study principal investigator Dr. William Jagust, a professor with joint appointments at UC Berkeley’s Helen Wills Neuroscience Institute, the School of Public Health and Lawrence Berkeley National Laboratory.

Previous studies have shown a link between increased brain activity and beta-amyloid deposits, but it was unclear whether the activity was tied to better mental performance.

The study included 22 healthy young adults and 49 older adults who had no signs of mental decline. Brain scans showed that 16 of the older subjects had beta-amyloid deposits, while the remaining 33 older adults and all of the younger subjects did not.

The researchers used functional magnetic resonance imaging (fMRI) to track the brain activity of subjects in the process of memorizing pictures of various scenes. Afterwards, the researchers tested the subjects’ “gist memory” by asking them to confirm whether a written description of a scene – such as a boy doing a handstand – corresponded to a picture previously viewed. Subjects were then asked to confirm whether specific written details of a scene – such as the color of the boy’s shirt – were true.

“Generally, the groups performed equally well in the tasks, but it turned out that for people with beta-amyloid deposits in the brain, the more detailed and complex their memory, the more brain activity there was,” said Jagust. “It seems that their brain has found a way to compensate for the presence of the proteins associated with Alzheimer’s.”

What remains unclear, said Jagust, is why some people with beta-amyloid deposits are better at using different parts of their brain than others. Previous studies suggest that people who engage in mentally stimulating activities throughout their lives have lower levels of beta-amyloid.

“I think it’s very possible that people who spend a lifetime involved in cognitively stimulating activity have brains that are better able to adapt to potential damage,” said Jagust.

Filed under beta amyloid brain activity cognitive function dementia alzheimer's disease neuroscience science
