Neuroscience

Articles and news from the latest research reports.

Posts tagged brain activity

167 notes

Daydreaming simulated by computer model
Scientists have created a virtual model of the brain that daydreams like humans do.
Researchers created the computer model based on the dynamics of brain cells and the many connections those cells make with their neighbors and with cells in other brain regions. They hope the model will help them understand why certain portions of the brain work together when a person daydreams or is mentally idle. This, in turn, may one day help doctors better diagnose and treat brain injuries.
“We can give our model lesions like those we see in stroke or brain cancer, disabling groups of virtual cells to see how brain function is affected,” said senior author Maurizio Corbetta, MD, the Norman J. Stupp Professor of Neurology at Washington University School of Medicine in St. Louis. “We can also test ways to push the patterns of activity back to normal.”
The study is now available online in The Journal of Neuroscience. 
The model was developed and tested by scientists at Washington University School of Medicine in St. Louis, Universitat Pompeu Fabra in Barcelona, Spain, and several other European universities including ETH Zurich, Switzerland; University of Oxford, United Kingdom; Institute of Advanced Biomedical Technologies, Chieti, Italy; and University of Lausanne, Switzerland.
Scientists first recognized in the late 1990s and early 2000s that the brain stays busy even when it’s not engaged in mental tasks. Researchers have identified several “resting state” brain networks, which are groups of different brain regions whose activity levels rise and fall in sync when the brain is at rest. They have also linked disruptions in these networks caused by brain injury and disease to cognitive problems with memory, attention, movement and speech.
The new model was developed to help scientists learn how the brain’s anatomical structure contributes to the creation and maintenance of resting state networks. The researchers began with a process for simulating small groups of neurons, including factors that decrease or increase the likelihood that a group of cells will send a signal.
“In a way, we treated small regions of the brain like cognitive units: not as individual cells but as groups of cells,” said Gustavo Deco, PhD, professor and head of the Computational Neuroscience Group in Barcelona. “The activity of these cognitive units sends out excitatory signals to the other units through anatomical connections. This makes the connected units more or less likely to synchronize their signals.”
Based on data from brain scans, researchers assembled 66 cognitive units in each hemisphere, and interconnected them in anatomical patterns similar to the connections present in the brain.
Scientists set up the model so that the individual units went through the signaling process at random low frequencies that had previously been observed in brain cells in culture and in recordings of resting brain activity.
Next, researchers let the model run, slowly changing the coupling, or the strength of the connections between units. At a specific coupling value, the interconnections between units sending impulses soon began to create coordinated patterns of activity.
“Even though we started the cognitive units with random low activity levels, the connections allowed the units to synchronize,” Deco said. “The spatial pattern of synchronization that we eventually observed approximates very well—about 70 percent—to the patterns we see in scans of resting human brains.”
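The synchronization-through-coupling effect Deco describes can be illustrated with a toy simulation. The sketch below is a minimal Kuramoto-style phase-oscillator model, not the authors’ actual neural-mass model; the figure of 66 units per hemisphere comes from the article, but the frequencies, coupling values and dynamics are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_units=66, coupling=0.0, steps=2000, dt=0.01):
    """Toy Kuramoto-style model: each 'cognitive unit' is a phase
    oscillator with a random low natural frequency; units pull on
    each other in proportion to the coupling strength."""
    freqs = rng.normal(1.0, 0.1, n_units)        # random low frequencies
    phases = rng.uniform(0, 2 * np.pi, n_units)  # random starting phases
    for _ in range(steps):
        # each unit is nudged toward the phases of all the others
        mean_field = np.mean(np.sin(phases[None, :] - phases[:, None]), axis=1)
        phases += dt * (freqs + coupling * mean_field)
    # order parameter: 0 = incoherent activity, 1 = fully synchronized
    return abs(np.mean(np.exp(1j * phases)))

weak, strong = simulate(coupling=0.1), simulate(coupling=5.0)
```

Sweeping `coupling` upward reproduces the qualitative behavior described above: below a critical value the units drift independently, while above it they lock into a coordinated pattern despite their random starting states.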
Using the model to simulate 20 minutes of human brain activity took a cluster of powerful computers 26 hours. But researchers were able to simplify the mathematics to make it possible to run the model on a typical computer. 
“This simpler whole brain model allows us to test a number of different hypotheses on how the structural connections generate dynamics of brain function at rest and during tasks, and how brain damage affects brain dynamics and cognitive function,” Corbetta said.

Filed under daydreaming brain activity brain networks AI memory cognitive impairment neuroscience science

154 notes

A fundamental problem for brain mapping
Recent findings force scientists to rethink the rules of neuroimaging 
Is there a brain area for mind-wandering? For religious experience? For reorienting attention? A recent study casts serious doubt on the evidence for these ideas, and rewrites the rules for neuroimaging.
Brain mapping experiments attempt to identify the cognitive functions associated with discrete cortical regions. They generally rely on a method known as “cognitive subtraction.” However, recent research reveals that a basic assumption underlying this approach—that brain activation is due to the additional processes triggered by the experimental task—is wrong.
“It is such a basic assumption that few researchers have even thought to question it,” said Anthony Jack, assistant professor of cognitive science at Case Western Reserve University. “Yet study after study has produced evidence it is false.”
Brain mapping experiments all share a basic logic. In the simplest type of experiment, researchers compare brain activity while participants perform an experimental task and a control task. The experimental task might involve showing participants a noun, such as the word “cake,” and asking them to say aloud a verb that goes with that noun, for instance “eat.” The control task might involve asking participants to simply say the word they see aloud.
“The idea here is that the control task involves some of the same cognitive processes as the experimental task, in this case perceptual and articulatory processes,” Jack explained. “But there is at least one process that is different—the act of selecting a semantically appropriate word from a different lexical category.”
By subtracting activity recorded during the control task from the experimental task, researchers try to isolate distinct cognitive processes and map them onto specific brain areas.
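In numerical terms, cognitive subtraction amounts to a voxel-wise difference of activation maps. The sketch below uses made-up numbers and an arbitrary threshold purely to illustrate the logic:

```python
import numpy as np

# Hypothetical per-voxel activation values (e.g. fMRI betas) for one
# region of interest; all numbers are invented for illustration.
experimental = np.array([2.1, 1.8, 2.4, 0.9])   # verb-generation task
control      = np.array([1.9, 1.0, 1.1, 0.8])   # word-reading task

# Cognitive subtraction: activity surviving the subtraction is
# attributed to the extra process (here, semantic word selection).
difference = experimental - control
implicated = difference > 0.5   # arbitrary threshold, for illustration
```

The critique in this article targets exactly this step: a positive `difference` need not reflect an added process in the experimental task.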
Jack and former Case Western Reserve student Benjamin Kubit, now at the University of California Davis, challenge a key assumption of the subtraction method and several tenets of Ventral Attention Network theory, one of the longest-established theories in cognitive neuroscience and one that relies on cognitive subtraction. In a paper published today in Frontiers in Human Neuroscience, they highlight a further problem that casts doubt on papers from well-established laboratories published in top journals.
Jack’s previous research shows that two opposing networks in the brain prevent people from being empathetic and analytic at the same time. If participants are engaged in a non-social task, they suppress activity in a network known as the default mode network, or DMN. The moment that task is over, activity in the DMN bounces back up again. On the other hand, if participants are engaged in a social task, they suppress brain activity in a second network, known as the task positive network, or TPN. The moment that task is over, activity in the TPN bounces back up again.
Work by another group even shows activity in a network bounces higher the more it has been suppressed, rather like releasing a compressed spring.
“It’s clear these increases in activity are not due to additional task-related processes,” Jack said. “Instead of cognitive subtraction, what we are seeing here is cognitive addition—parts of the brain do more the less the task demands.”
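The confound Jack describes can be made concrete with a few invented numbers: if the control task suppresses a region more than the experimental task does, the subtraction shows a positive “activation” even though the region never rises above baseline.

```python
# Made-up activity levels for one region (arbitrary units), purely to
# illustrate the release-of-suppression confound.
baseline     = 1.0
control_task = 0.4   # region strongly suppressed by the control task
experimental = 0.8   # region mildly suppressed -- still below baseline

subtraction = experimental - control_task   # positive "activation"
truly_activated = experimental > baseline   # the region never exceeds rest
```

The subtraction is positive, yet attributing the difference to an added task process would be wrong: the region was simply suppressed less.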
Kubit and Jack caution that researchers must consider whether an increase in activity in a suppressed region is due to task-related processing, or the release of suppression, if they want to accurately interpret their data. In the paper, they lay out data from other studies, meta-analysis and resting connectivity that all suggest activation of a particular brain area, the right temporoparietal junction (rTPJ), in attention reorienting tasks can be most simply explained by the release of suppression.
Based on that, “We haven’t shown that Ventral Attention Network theory is false,” Jack said, “but we have raised a big question mark over the theory and the evidence that has been taken to support it.”
The working hypothesis for more than a decade has been that the basic function of the rTPJ is attention reorienting. But, upon considering the possibility of cognitive addition as well as cognitive subtraction, the evidence supporting this view looks slim, the researchers assert. “The evidence is compelling that there are two distinct areas near the rTPJ, regions which are not only involved in distinct functions but which also tend to suppress each other,” Jack said. “There is no easy way to square this with the Ventral Attention Network account of rTPJ.”
A number of broad challenges to brain imaging have been raised in the past by psychologists and philosophers, and in the recent book Brainwashed: The Seductive Appeal of Mindless Neuroscience, by Sally Satel and Scott Lilienfeld. One of the most popular objections has been to liken brain mapping to phrenology.
“There was some truth to that, particularly in the early days,” Jack said. Brain mapping can go astray when the psychological categories it assigns to regions don’t represent those regions’ basic functions.
For instance, the claim that there is a “God spot” in the brain doesn’t reflect a mature understanding of the science, he continued. Researchers recognize that individual brain regions have more general functions, and that specific cognitive processes, like religious experiences, are realized by interactions between distributed networks of regions.
“Just because a brain region is involved in a cognitive process, for example that the rTPJ is involved in out-of-body experiences, doesn’t mean that out-of-body experiences are the basic function of the rTPJ,” Jack explained. “You need to look at all the cognitive processes that engage a region to get a truer idea of its basic function.”
Kubit and Jack go beyond the existing critiques that apply to naïve brain mapping. The researchers point out that, even when an experimental task creates more activity in a brain region than a control task, it still isn’t safe to assume that the brain area is involved in the additional cognitive processes engaged by the experimental task. “Another possibility is that the control task was suppressing the region more than the experimental task,” Jack said.
For example, Malia Mason et al’s widely cited 2007 publication that appeared in the journal Science used the logic of cognitive subtraction to reach the conclusion that the function of a large area of cortex, known as the default mode network (DMN), is mind-wandering or spontaneous cognition.
“At this point, we can safely rule out that interpretation,” Jack said. “The DMN is activated above resting levels for social tasks that engage empathy. So, unless tasks that engage empathetic social cognition involve more mind-wandering than—well—being at rest and letting your mind wander, then that interpretation can’t possibly be right. The right way to interpret those findings is that tasks that engage analytic thinking positively suppress empathy. Unsurprisingly, when your mind wanders from those tasks, you get less suppression.”
The pair believes one reason researchers have felt safe with the assumptions underlying cognitive subtraction is that they have assumed the brain will not expend any more energy than is needed to perform the task at hand.
“Yet the brain clearly does expend more energy than is needed to guide ongoing behavior,” Jack said. “The influential neurologist Marcus Raichle has shown that task-related activity represents the tip of the iceberg, in terms of neural and metabolic activity. The brain is constantly active and restless, even when the person is entirely ‘at rest’ —that is, even when they aren’t given any task to do.”
Jack said their critique won’t hurt brain imaging as a discipline. “Quite the reverse, understanding the full implications of the suppressive relationship between brain networks will move the discipline forward.”
“One of the best known theories in psychology is dual-process theory,” he continued. “But the opposing-networks findings suggest a quite different picture from the account favored by psychologists.”
Dual-process theory is outlined in the recent book Thinking, Fast and Slow by the Nobel prize-winner Daniel Kahneman. Classic dual-process theory postulates a fight between deliberate reasoning and primitive automatic processes. But the fight that is most obvious in the brain is between two types of deliberate and evolutionarily advanced reasoning – one for empathetic, the other for analytic thought, the researchers say.
The two theories are compatible. “But, it looks like a number of phenomena will be better explained by the opposing networks research,” Jack said.
Jack warned that to conclude this critique of cognitive subtraction and Ventral Attention Network theory shows that brain imaging is fundamentally flawed would be like claiming that critiques of Darwin’s theory show evolution is false.
Brain mapping, Jack believes, was just the first phase of this science. “What we are talking about here is refining the science,” he said. “It should be no surprise that that journey involves some course corrections. The key point is that we are moving from brain mapping to identifying neural constraints on cognition that behavioral psychologists have missed.”
(Image: Saad Faruque, Flickr)

Filed under brain mapping neuroimaging brain activity cognitive subtraction neuroscience science

44 notes

Brain and eye combined monitoring breakthrough could lead to fewer road accidents

Latest advances in capturing data on brain activity and eye movement are being combined to open up a host of ‘mindreading’ possibilities for the future. These include the potential development of a system that can detect when drivers are in danger of falling asleep at the wheel.

The research has been undertaken at the University of Leicester with funding from the Engineering and Physical Sciences Research Council (EPSRC), and in collaboration with the University of Buenos Aires in Argentina.

The breakthrough involves bringing two recent developments in the world of technology together: high-speed eye tracking that records eye movements in unprecedented detail using cutting-edge infra-red cameras; and high-density electroencephalograph (EEG) technology that measures electrical brain activity with millisecond precision through electrodes placed on the scalp.

The research has overcome previous technological challenges which made it difficult to monitor eye movement and brain activity simultaneously. The team has done this by developing novel signal processing techniques.

This could be the first step towards a system that combines brain and eye monitoring to automatically alert drivers who are showing signs of drowsiness. The system would be built into the vehicle and connected unobtrusively to the driver, with the EEG looking out for brain signals that only occur in the early stages of sleepiness. The eye tracker would reinforce this by looking for erratic gaze patterns symptomatic of someone starting to feel drowsy, as distinct from the patterns characteristic of a driver who is alertly scanning for hazards. Fatigue has been estimated to account for around 20 per cent of traffic accidents on the UK’s motorways.
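A system along these lines might fuse the two signals with a simple conjunction rule. The sketch below is a hypothetical illustration, not the Leicester team’s method; the marker names (a theta/alpha power ratio for the EEG, a gaze-dispersion score for the eye tracker) and the thresholds are all assumptions:

```python
def drowsiness_alert(eeg_theta_alpha_ratio, gaze_dispersion,
                     eeg_threshold=1.2, gaze_threshold=0.8):
    """Hypothetical fusion rule: raise an alert only when an EEG
    drowsiness marker AND an erratic-gaze marker both cross their
    thresholds, so neither sensor alone triggers a false alarm.
    All names and threshold values are illustrative assumptions."""
    eeg_drowsy = eeg_theta_alpha_ratio > eeg_threshold
    gaze_erratic = gaze_dispersion > gaze_threshold
    return eeg_drowsy and gaze_erratic
```

Requiring both markers to agree is one plausible way to exploit the combined monitoring the article describes: each modality vetoes the other’s spurious detections.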

The breakthrough achieved by the University of Leicester could also ultimately be built on to deliver many other everyday applications in the years ahead. For example:

  • Computer games of the future could dispense with the need for the player to physically interact with any type of console, mouse or other hand-operated system. Instead, eye movement and brain activity data would be collected and processed to indicate what action the player wants to take. By distinguishing the tiny differences in various types of brain activity, the EEG would identify the precise action the player desires (e.g. run, jump or throw), while the eye movement data would show exactly where on the screen the player was looking when they had this thought. This information could be combined to enable the correct action to occur. An unobtrusive headset would be all that would be required to capture the necessary data.
  • People who have no arm functionality could move their wheelchairs simply through their eye movements. These movements could be tracked and the corresponding brain activity analysed to identify when these indicate a desire to move in a certain direction. This would then automatically activate a steering and propulsion mechanism that would drive the wheelchair to that place.
  • The breakthrough could also provide the basis for improved tests to diagnose dyslexia and other reading disorders. Current tests revolve around a rapid succession of single words flashed onto a computer screen, with the resulting brain activity monitored by EEG. The new technique could enable the person being tested to move their eyes and read longer passages of text in a natural way, making the tests much more realistic and revealing.
With the basic concept now demonstrated successfully, the team aim to continue their work and eventually develop software that, in real time, automatically monitors both eye movement and brain activity.

Dr Matias Ison, who has led the research, says: “Historically, eye-tracking and EEG have evolved as independent fields. We have managed to overcome the challenges that were standing in the way of integrating these technologies. This is already leading to a much better understanding of how the brain responds when the eyes are moving. Monitoring the alertness of drivers is just one of many potential applications for this work. Building on the foundation provided by our EPSRC-funded project, we hope to see the first of these starting to become feasible within the next three to five years.”

(Source: epsrc.ac.uk)

Filed under brain activity eye movement eye tracking EEG neuroscience science

137 notes

With Parents’ Help, Preschoolers Can Learn to Pay Attention

Pay attention! Whether it’s listening to a teacher giving instructions or completing a word problem, the ability to tune out distractions and focus on a task is key to academic success. Now, a new study suggests that a brief training program in attention for 3- to 5-year-olds and their families could help boost brain activity and narrow the academic achievement gap between low- and high-income students.
Children from families of low socioeconomic status generally score lower than more affluent kids on standardized tests of intelligence, language, spatial reasoning, and math, says Priti Shah, a cognitive neuroscientist at the University of Wisconsin who was not involved in the study. “That’s just a plain fact.” A more controversial question that scientists and politicians have batted around for decades, says Shah, is “What is the source of that difference?” Part of it may be genetic, but environmental factors, ranging from prenatal nutrition to exposure to toxic substances like lead, may also account for the early childhood differences in cognitive ability that appear by age 3 or 4. So far, however, “there aren’t that many randomized, controlled trials that show that the environment has an impact on a child’s abilities,” Shah says.
The new study does just that. It focuses on the ability to home in on a task and ignore distractions, which “leverages every single thing we do,” says cognitive neuroscientist Helen Neville at the University of Oregon, Eugene. For more than 30 years, Neville and her colleagues have been studying the neural bases of this ability, called selective attention.
A classic example of selective attention is the “cocktail party” problem, in which we must ignore other voices while listening to one person’s story. When an adult does that, “you get a little blip” in their brain activity, she says—a microvolt of electricity lasting a 10th of a second that can be picked up with EEG electrodes on the scalp. Children of higher socioeconomic status show a brain response similar to that of adults, whereas children from lower-income families generally show a much reduced response or none at all, Neville says.
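The “little blip” Neville describes is an event-related potential, conventionally recovered by averaging many EEG epochs time-locked to the event so that unrelated activity cancels out. A minimal sketch with synthetic numbers (the amplitudes, noise level, and trial counts below are invented, not taken from the study):

```python
import random

def average_epochs(epochs):
    """Average time-locked epochs sample by sample; activity that is not
    locked to the event averages toward zero, the evoked response remains."""
    n = len(epochs)
    length = len(epochs[0])
    return [sum(e[i] for e in epochs) / n for i in range(length)]

# Synthetic data: a 5-microvolt deflection 10 samples after each event,
# buried in background noise of comparable size on every single trial.
random.seed(0)
def one_epoch():
    epoch = [random.gauss(0, 1.0) for _ in range(40)]
    epoch[10] += 5.0  # the evoked response
    return epoch

erp = average_epochs([one_epoch() for _ in range(100)])
```

With enough trials the background noise shrinks roughly with the square root of the trial count, which is why a response far smaller than the ongoing EEG can still be measured reliably.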
Programs designed to improve cognitive skills such as selective attention are often costly and time-intensive, and don’t address how a child’s caretakers and home environment can reinforce those skills, Neville says. To determine whether a short, relatively inexpensive family-based training program could generate improvements, Neville and colleagues recruited 141 3- to 5-year-olds in Oregon who were in Head Start—a preschool program for children whose families live at or below the poverty line—and randomly divided them into three groups.
For 8 weeks, children in the first group spent about an hour every week playing games and doing activities that require focused attention. Some tasks were simple, like coloring inside the lines, while others were more complex. In one game, for example, children were asked to deliver a small dish of water to a frog, walking only along a narrow ribbon, says Eric Pakulak, a study co-author. Other children might play in the periphery with balloons to ramp up the challenge, he says. In addition, “We also talk about what it means to be paying attention, and how to notice that you’re distracted.”
While the students played, parents or caregivers took 2-hour-long weekly classes on parenting that included general strategies for reducing family stress, such as creating consistent home routines, as well as attention-boosting activities, similar to those used in class, that they could play with their children—one activity, for example, was to match words such as “happy” or “sad” to pictures of different facial expressions. In the second group, students performed the attention-boosting activities as well, but parents received only three 90-minute sessions of instruction and did not have an opportunity to learn the curriculum in depth; in the third group, neither the kids nor their parents did anything special.
After 8 weeks, the team applied a battery of standard assessments, such as IQ and spatial reasoning tests and behavioral reports from teachers and parents; they also measured changes in brain activity while students listened to two recorded stories simultaneously. Instructed to attend to only one of two competing stories—“The Blue Kangaroo” vs. “Harry the Dog,” for example—the children whose parents had received additional attention instruction showed a 50 percent increase in brain activity in response to the correct story compared to children in the other two groups, the authors report online today in the Proceedings of the National Academy of Sciences; their responses matched those seen in adults and children of higher socioeconomic status. In addition, the children on average showed a roughly 7-point IQ increase, and teachers and parents reported significant improvements in academic performance and behavior. No such differences were evident in the two control groups, Neville says, suggesting that parental involvement was key.
Many existing programs try to help young children of low socioeconomic status develop the skills needed to thrive in school, but “almost all happen without any scientifically designed pre-vs. post-behavioral or neural measures,” says Rajeev Raizada, a cognitive neuroscientist at the University of Rochester in New York. This study is one of the first to combine such tests with an intervention, he says. Such interventions “are of great interest scientifically, because they are about as close as you can get to experimental research on the effects of child poverty on the brain,” says Martha Farah, a cognitive neuroscientist at the University of Pennsylvania.
Raizada cautions that the parental training program was broad, making it hard to know which aspects were really crucial. “Another crucial question is how long-lasting will the kids’ gains be?” he adds. “A common feature of intervention programs is that they tend to produce some immediate gains, but those gains often tend to fade out over subsequent months.”
Before implementing programs based on the new study, Farah says, “we need to invest in replication, fine-tuning, and all the hard work of bringing a program to scale.” Still, given striking improvements seen in just 8 weekly sessions, “I think that we need to regard these results as wonderful news,” she says.

Filed under preschoolers attention brain activity socioeconomic status psychology neuroscience science

119 notes

Brain differences seen in depressed preschoolers

A key brain structure that regulates emotions works differently in preschoolers with depression compared with their healthy peers, according to new research at Washington University School of Medicine in St. Louis.

The differences, measured using functional magnetic resonance imaging (fMRI), provide the earliest evidence yet of changes in brain function in young children with depression. The researchers say the findings could lead to ways to identify and treat depressed children earlier in the course of the illness, potentially preventing problems later in life.

“The findings really hammer home that these kids are suffering from a very real disorder that requires treatment,” said lead author Michael S. Gaffrey, PhD. “We believe this study demonstrates that there are differences in the brains of these very young children and that they may mark the beginnings of a lifelong problem.”

The study is published in the July issue of the Journal of the American Academy of Child & Adolescent Psychiatry.

Depressed preschoolers had elevated activity in the brain’s amygdala, an almond-shaped set of neurons important in processing emotions. Earlier imaging studies identified similar changes in the amygdala region in adults, adolescents and older children with depression, but none had looked at preschoolers with depression.

For the new study, scientists from Washington University’s Early Emotional Development Program studied 54 children ages 4 to 6. Before the study began, 23 of those kids had been diagnosed with depression. The other 31 had not. None of the children in the study had taken antidepressant medication.

Although fMRI, which measures brain activity by monitoring blood flow, has been used for years, this is the first time that such scans have been attempted in children this young with depression. Movements as small as a few millimeters can ruin fMRI data, so Gaffrey and his colleagues had the children participate in mock scans first. After practicing, the children in this study moved less than a millimeter on average during their actual scans.
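The motion problem Gaffrey’s team worked around is usually quantified by estimating head position for every scan volume and flagging volumes that moved too far since the previous one. A simplified, translation-only sketch (the threshold and the motion trace are invented for illustration; real pipelines also track rotation):

```python
def flag_motion(positions, threshold_mm=1.0):
    """Flag volumes whose head displacement from the previous volume
    exceeds the threshold (a simplified, translation-only version of
    the framewise-displacement checks used in fMRI pipelines).

    positions -- per-volume (x, y, z) head position estimates in mm.
    Returns the indices of volumes to discard or correct.
    """
    flagged = []
    for i in range(1, len(positions)):
        dx, dy, dz = (positions[i][k] - positions[i - 1][k] for k in range(3))
        displacement = (dx ** 2 + dy ** 2 + dz ** 2) ** 0.5
        if displacement > threshold_mm:
            flagged.append(i)
    return flagged

# Invented motion trace: mostly still, with one sudden 2 mm head movement.
trace = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.1), (0.1, 2.0, 0.1), (0.2, 2.0, 0.1)]
bad_volumes = flag_motion(trace)
```

Practicing in a mock scanner, as the children here did, keeps the number of flagged volumes low enough that usable data survive.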

While they were in the fMRI scanner during the study, the children looked at pictures of people whose facial expressions conveyed particular emotions. There were faces with happy, sad, fearful and neutral expressions.

“The amygdala region showed elevated activity when the depressed children viewed pictures of people’s faces,” said Gaffrey, an assistant professor of psychiatry. “We saw the same elevated activity, regardless of the type of faces the children were shown. So it wasn’t that they reacted only to sad faces or to happy faces, but every face they saw aroused activity in the amygdala.”

Looking at pictures of faces often is used in studies of adults and older children with depression to measure activity in the amygdala. But the observations in the depressed preschoolers were somewhat different than those previously seen in adults, where typically the amygdala responds more to negative expressions of emotion, such as sad or fearful faces, than to faces expressing happiness or no emotion.

In the preschoolers with depression, all facial expressions were associated with greater amygdala activity when compared with their healthy peers.

Gaffrey said it’s possible depression affects the amygdala mainly by exaggerating what, in other children, is a normal amygdala response to both positive and negative facial expressions of emotion. But more research will be needed to prove that. He does believe, however, that the amygdala’s reaction to people’s faces can be seen in a larger context.

“Not only did we find elevated amygdala activity during face viewing in children with depression, but that greater activity in the amygdala also was associated with parents reporting more sadness and emotion regulation difficulties in their children,” Gaffrey said. “Taken together, that suggests we may be seeing an exaggeration of a normal developmental response in the brain and that, hopefully, with proper prevention or treatment, we may be able to get these kids back on track.”

(Source: news.wustl.edu)

Filed under depression amygdala fMRI brain activity preschoolers face processing neuroscience science

166 notes

Patience reaps rewards
Brain imaging shows how prolonged treatment of a behavioral disorder restores a normal response to rewards
Attention-deficit/hyperactivity disorder (ADHD) is characterized by abnormal behavioral traits such as inattention, impulsivity and hyperactivity. It is also associated with impaired processing of reward in the brain, meaning that patients need much greater rewards to become motivated. One of the common treatments for ADHD, methylphenidate (MPH), is known to improve reward processing in the short term, but the long-term effects have remained unclear.
Kei Mizuno from the RIKEN Center for Life Science Technologies, in collaboration with colleagues from several other Japanese research institutions, has now demonstrated that prolonged treatment with MPH brings about stable changes in brain activity that improve reward processing with a commensurate improvement in ADHD symptoms.
ADHD is thought to affect up to 5% of children worldwide, and about half of those will go on to experience symptoms of the disorder into adulthood. MPH treats the disorder by increasing the levels of the brain chemical dopamine, which is involved in reward processing.
To understand the effect of MPH on ADHD symptoms, and specifically on reward processing over the longer term, the researchers studied the reward response behavior of patients with ADHD and healthy controls—all children or adolescents—before and after treatment with osmotic release oral system (OROS) MPH. They used functional magnetic resonance imaging (fMRI) to measure brain activity during a task in which participants were rewarded with payment under two different scenarios: a high and a low monetary reward condition.
“In the high monetary reward condition, participants earned higher than the expected reward; whereas in the low monetary condition, participants earned an average reward that was consistently lower than expected,” says Mizuno.
The brain images showed that before treatment with OROS-MPH, ADHD patients had lower than normal sensitivity to reward, as demonstrated by their abnormally low brain activity in two parts of the brain associated with reward processing—the nucleus accumbens and the thalamus—during testing under the low monetary reward scenario.
However, after three months of treatment with OROS-MPH, there was no difference in the activity of these brain areas in ADHD patients compared with the healthy controls under any of the reward conditions. Their sensitivity to reward had returned to normal, and the patients’ other ADHD symptoms also showed improvement.
Mizuno says that this study goes further than previous work. “We knew that acute MPH treatment improves reward processing in ADHD,” he explains. “Now we’ve revealed that decreased reward sensitivity and ADHD symptoms are improved by treatment for three months.”

Filed under brain activity fMRI ADHD methylphenidate dopamine osmotic release oral system neuroscience science

134 notes

Past Brain Activation Revealed in Scans
Weizmann Institute scientists discover that spontaneously emerging brain activity patterns preserve traces of previous cognitive activity
What if experts could dig into the brain, like archaeologists, and uncover the history of past experiences? This ability might reveal what makes each of us a unique individual, and it could enable the objective diagnosis of a wide range of neuropsychological diseases. New research at the Weizmann Institute hints that such a scenario is within the realm of possibility: It shows that spontaneous waves of neuronal activity in the brain bear the imprints of earlier events for at least 24 hours after the experience has taken place.
The new research stems from earlier findings in the lab of Prof. Rafi Malach of the Institute’s Neurobiology Department and others that the brain never rests, even when its owner is resting. When a person is resting with closed eyes – that is, no visual stimulus is entering the brain – the normal bursts of nerve cell activity associated with incoming information are replaced by ultra-slow patterns of neuronal activity. Such spontaneous or “resting” waves travel in a highly organized and reproducible manner through the brain’s outer layer – the cortex – and the patterns they create are complex, yet periodic and symmetrical.
Like hieroglyphics, it seemed that these patterns might have some meaning, and research student Tal Harmelech, under the guidance of Malach and Dr. Son Preminger, set out to uncover their significance. Their idea was that the patterns of resting brain waves may constitute “archives” of earlier experiences. As we add new experiences, the activation of our brain’s networks leads to long-term changes in the links between brain cells, a capacity referred to as plasticity. As our experiences become embedded in these connections, they create “expectations” that come into play before we perform any type of mental task, enabling us to anticipate the result. The researchers hypothesized that information about earlier experiences would thus be incorporated into the links between networks of nerve cells in the cortex, and these would show up in the brain’s spontaneously emerging wave patterns.
In the experiment, the researchers had volunteers undertake a training exercise that would strongly activate a well-defined network of nerve cells in the frontal lobes. While undergoing scans of their brain activity in the Institute’s functional magnetic resonance imaging (fMRI) scanner, the subjects were asked to imagine a situation in which they had to make rapid decisions. The subjects received auditory feedback in real time, based on the information obtained directly from their frontal lobe, which indicated the level of neuronal activity in the trained network. This “neurofeedback” strategy proved highly successful in activating the frontal network – a part of the brain that is notoriously difficult to activate under controlled conditions.
To test whether the connections created in the brain during this exercise would leave their traces in the patterns formed by the resting brain waves, the researchers performed fMRI scans on the resting subjects before the exercise, immediately afterward, and 24 hours later. Their findings, which appeared in the Journal of Neuroscience, showed that the activation of the specific areas in the cortex did indeed remodel the resting brain wave patterns. Surprisingly, the new patterns not only remained the next day but were significantly strengthened. These observations fit in with the classic learning principles proposed by Donald Hebb in the mid-20th century, in which the co-activation of two linked nerve cells leads to long-term strengthening of their link, while activity that is not coordinated weakens this link. The fMRI images of the resting brain waves showed that brain areas that were activated together during the training sessions exhibited an increase in their functional link a day after the training, while those areas that were deactivated by the training showed a weakened functional connectivity.
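The Hebbian principle the findings fit, in which links strengthen with co-activation and weaken without it, can be written as a one-line weight update. The learning rate, decay rate, and starting values below are arbitrary illustrations, not parameters from the study:

```python
def hebbian_step(w, pre, post, rate=0.1, decay=0.05):
    """Strengthen the link when the pre- and postsynaptic units fire
    together; otherwise let the link slowly decay."""
    return w + rate * pre * post - decay * w

# Two links starting at the same strength: one between units that are
# repeatedly co-activated (as in the training task), one where activity
# is uncoordinated.
w_together, w_apart = 0.2, 0.2
for _ in range(50):
    w_together = hebbian_step(w_together, pre=1.0, post=1.0)
    w_apart = hebbian_step(w_apart, pre=1.0, post=0.0)
```

The co-activated link climbs toward a stable plateau (rate/decay = 2.0 here) while the uncoordinated link decays toward zero, mirroring the strengthened and weakened functional connections seen in the day-after scans.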
This research suggests a number of future possibilities for exploring the brain. For example, spontaneously emerging brain patterns could be used as a “mapping tool” for unearthing cognitive events from an individual’s recent past. Or, on a wider scale, each person’s unique spontaneously emerging activity patterns might eventually reveal a sort of personal profile – highlighting each individual’s abilities, shortcomings, biases, learning skills, etc. “Today, we are discovering more and more of the common principles of brain activity, but we have not been able to account for the differences between individuals,” says Malach. “In the future, spontaneous brain patterns could be the key to obtaining unbiased individual profiles.” Such profiles could be especially useful in diagnosing or studying the brain pathologies associated with a wide array of cognitive disabilities.

Filed under brain mapping brain activity cognitive function Hebbian learning neuroimaging plasticity neuroscience science

172 notes

Researchers Identify Emotions Based on Brain Activity
For the first time, scientists at Carnegie Mellon University have identified which emotion a person is experiencing based on brain activity.
The study, published in the June 19 issue of PLOS ONE, combines functional magnetic resonance imaging (fMRI) and machine learning to measure brain signals and accurately read emotions in individuals. Led by researchers in CMU’s Dietrich College of Humanities and Social Sciences, the findings illustrate how the brain categorizes feelings, giving researchers the first reliable process to analyze emotions. Until now, research on emotions has long been stymied by the lack of reliable methods to evaluate them, mostly because people are often reluctant to honestly report their feelings. Further complicating matters is that many emotional responses may not be consciously experienced.
Identifying emotions based on neural activity builds on previous discoveries by CMU’s Marcel Just and Tom M. Mitchell, which used similar techniques to create a computational model that identifies individuals’ thoughts of concrete objects, often dubbed “mind reading.”
“This research introduces a new method with potential to identify emotions without relying on people’s ability to self-report,” said Karim Kassam, assistant professor of social and decision sciences and lead author of the study. “It could be used to assess an individual’s emotional response to almost any kind of stimulus, for example, a flag, a brand name or a political candidate.”
One challenge for the research team was to find a way to repeatedly and reliably evoke different emotional states from the participants. Traditional approaches, such as showing subjects emotion-inducing film clips, would likely have been unsuccessful because the impact of film clips diminishes with repeated viewing. The researchers solved the problem by recruiting actors from CMU’s School of Drama.
“Our big breakthrough was my colleague Karim Kassam’s idea of testing actors, who are experienced at cycling through emotional states. We were fortunate, in that respect, that CMU has a superb drama school,” said George Loewenstein, the Herbert A. Simon University Professor of Economics and Psychology.
For the study, 10 actors were scanned at CMU’s Scientific Imaging & Brain Research Center while viewing the words of nine emotions: anger, disgust, envy, fear, happiness, lust, pride, sadness and shame. While inside the fMRI scanner, the actors were instructed to enter each of these emotional states multiple times, in random order.
Another challenge was to ensure that the technique was measuring emotions per se, and not the act of trying to induce an emotion in oneself. To meet this challenge, a second phase of the study presented participants with neutral and disgusting photos that they had not seen before. The computer model, constructed by statistically analyzing the fMRI activation patterns gathered for 18 emotional words, had learned the emotion patterns from self-induced emotions. It was nevertheless able to correctly identify the emotional content of the photos from the viewers’ brain activity.
To identify emotions within the brain, the researchers first used the participants’ neural activation patterns in early scans to identify the emotions experienced by the same participants in later scans. The computer model achieved a rank accuracy of 0.84. Rank accuracy refers to the percentile rank of the correct emotion in an ordered list of the computer model guesses; random guessing would result in a rank accuracy of 0.50.
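Rank accuracy as defined here can be computed directly from a model's ordered list of guesses. The helper below is a hypothetical sketch of that metric (the study's exact scoring code is not published in this article); note that averaging it over random orderings gives the 0.50 chance level mentioned above.

```python
def rank_accuracy(ranked_guesses, correct):
    """Percentile rank of the correct label in the model's ordered
    guess list: 1.0 if ranked first, 0.0 if ranked last."""
    n = len(ranked_guesses)
    position = ranked_guesses.index(correct)  # 0 = top guess
    return 1.0 - position / (n - 1)

# Nine candidate emotions, ordered from the model's most to least
# likely guess; the correct answer ("fear") sits in second place.
guesses = ["disgust", "fear", "anger", "sadness", "shame",
           "envy", "happiness", "lust", "pride"]
print(rank_accuracy(guesses, "fear"))     # 0.875
print(rank_accuracy(guesses, "disgust"))  # 1.0 (top guess)
```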
Next, the team took the machine learning analysis of the self-induced emotions to guess which emotion the subjects were experiencing when they were exposed to the disgusting photographs.  The computer model achieved a rank accuracy of 0.91. With nine emotions to choose from, the model listed disgust as the most likely emotion 60 percent of the time and as one of its top two guesses 80 percent of the time.
Finally, they applied machine learning analysis of neural activation patterns from all but one of the participants to predict the emotions experienced by the hold-out participant. This answers an important question: If we took a new individual, put them in the scanner and exposed them to an emotional stimulus, how accurately could we identify their emotional reaction? Here, the model achieved a rank accuracy of 0.71, once again well above the chance guessing level of 0.50.
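The cross-participant test above follows the familiar leave-one-subject-out pattern: train on every subject but one, predict for the held-out subject. Here is a minimal sketch of that loop using a simple nearest-centroid classifier on synthetic "activation" vectors; the actual study used a more sophisticated statistical model, and the emotion set, feature count, and noise levels below are illustrative assumptions.

```python
import random

random.seed(0)

EMOTIONS = ["anger", "disgust", "happiness"]
N_SUBJECTS, N_FEATURES = 5, 6

# Synthetic activation patterns: each emotion has a shared template,
# and each subject exhibits a noisy copy of it.
templates = {e: [random.gauss(0, 1) for _ in range(N_FEATURES)]
             for e in EMOTIONS}
data = {s: {e: [t + random.gauss(0, 0.3) for t in templates[e]]
            for e in EMOTIONS}
        for s in range(N_SUBJECTS)}

def centroid(vectors):
    return [sum(xs) / len(xs) for xs in zip(*vectors)]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Leave-one-subject-out: fit emotion centroids on all other subjects,
# then classify the held-out subject's patterns.
correct = total = 0
for held_out in range(N_SUBJECTS):
    train = [s for s in range(N_SUBJECTS) if s != held_out]
    centroids = {e: centroid([data[s][e] for s in train])
                 for e in EMOTIONS}
    for e in EMOTIONS:
        guess = min(EMOTIONS,
                    key=lambda c: dist(data[held_out][e], centroids[c]))
        correct += guess == e
        total += 1

print(correct / total)  # high accuracy on this low-noise synthetic data
```

The point of the design is that the held-out subject contributes nothing to training, so good accuracy implies emotions are encoded similarly across people.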
“Despite manifest differences between people’s psychology, different people tend to neurally encode emotions in remarkably similar ways,” noted Amanda Markey, a graduate student in the Department of Social and Decision Sciences.
A surprising finding from the research was that almost equivalent accuracy could be achieved even when the computer model used activation patterns from only one of several subsections of the brain.
“This suggests that emotion signatures aren’t limited to specific brain regions, such as the amygdala, but produce characteristic patterns throughout a number of brain regions,” said Vladimir Cherkassky, senior research programmer in the Psychology Department.
The research team also found that while on average the model ranked the correct emotion highest among its guesses, it was best at identifying happiness and least accurate in identifying envy. It rarely confused positive and negative emotions, suggesting that these have distinct neural signatures. And, it was least likely to misidentify lust as any other emotion, suggesting that lust produces a pattern of neural activity that is distinct from all other emotional experiences.
Just, the D.O. Hebb University Professor of Psychology, director of the university’s Center for Cognitive Brain Imaging and a leading neuroscientist, explained, “We found that three main organizing factors underpinned the emotion neural signatures, namely the positive or negative valence of the emotion, its intensity — mild or strong, and its sociality — involvement or non-involvement of another person. This is how emotions are organized in the brain.”
In the future, the researchers plan to apply this new identification method to a number of challenging problems in emotion research, including identifying emotions that individuals are actively attempting to suppress and multiple emotions experienced simultaneously, such as the combination of joy and envy one might experience upon hearing about a friend’s good fortune.

Filed under brain activity emotions machine learning fMRI neural activity neuroscience psychology science

117 notes

Weight Loss Improves Memory and Alters Brain Activity in Overweight Women

Memory improves in older, overweight women after they lose weight by dieting, and their brain activity actually changes in the regions of the brain that are important for memory tasks, a new study finds. The results were presented at The Endocrine Society’s 95th Annual Meeting in San Francisco.

“Our findings suggest that obesity-associated impairments in memory function are reversible, adding incentive for weight loss,” said lead author Andreas Pettersson, MD, a PhD student at Umea University, Umea, Sweden.

Previous research has shown that obese people have impaired episodic memory, the memory of events that happen throughout one’s life.

Pettersson and co-workers performed their study to determine whether weight loss would improve memory and whether improved memory correlated with changes in relevant brain activity. A special type of brain imaging called functional magnetic resonance imaging (functional MRI) allowed them to see brain activity while the subjects performed a memory test.

The researchers randomly assigned 20 overweight, postmenopausal women (average age, 61) to one of two healthy weight loss diets for six months. Nine women used the Paleolithic diet, also called the Caveman diet, which was composed of 30 percent protein; 30 percent carbohydrates, or “carbs”; and 40 percent unsaturated fats. The other 11 women followed the Nordic Nutrition Recommendations of a diet containing 15 percent protein, 55 percent carbs and 30 percent fats.

Before and after the diet, the investigators measured the women’s body mass index (BMI, a measure of weight and height) and body fat composition. They also tested the subjects’ episodic memory by instructing them to memorize unknown pairs of faces and names presented on a screen during functional MRI. The name for this process of creating new memory is “encoding.” Later, the women again saw the facial images along with three letters. Their memory retrieval task, during functional MRI, was to indicate the correct letter that corresponded to the first letter of the name linked to the face.

Because the two dietary groups did not differ in body measurements and functional MRI data, their data were combined and analyzed as one group. The group’s average BMI decreased from 32.1 before the diet to 29.2 (below the cutoff for obesity) after six months of dieting, and their average weight dropped from 188.9 pounds (85 kilograms) to 171.3 pounds (77.1 kilograms), the authors reported. This study was part of a larger, diet-focused study funded by the Swedish Research Council and the Swedish Heart-Lung Foundation.
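The reported figures are internally consistent. BMI is weight in kilograms divided by height in meters squared; the height below is back-calculated from the reported averages (it is not given in the article), so this is just an arithmetic check, not study data.

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

# Average height implied by the reported figures (85 kg at BMI 32.1):
height = (85 / 32.1) ** 0.5           # about 1.63 m

print(round(bmi(85.0, height), 1))    # 32.1 (before dieting)
print(round(bmi(77.1, height), 1))    # 29.1, below the obesity cutoff of 30
# (the article's 29.2 reflects rounding in the reported inputs)
```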

Memory performance improved after weight loss, and Pettersson said the brain-activity pattern during memory testing reflected this improvement. After weight loss, brain activity reportedly increased during memory encoding in the brain regions that are important for identification and matching of faces. In addition, brain activity decreased after weight loss in the regions that are associated with retrieval of episodic memories, which Pettersson said indicates more efficient retrieval.

“The altered brain activity after weight loss suggests that the brain becomes more active while storing new memories and therefore needs fewer brain resources to recollect stored information,” he said.

(Source: newswise.com)

Filed under brain activity memory weight loss obesity women fMRI neuroscience science

216 notes

Inside the Letterbox: How Literacy Transforms the Human Brain

Although I find the diversity of the world’s writing systems bewildering, there is also a striking regularity that remains hidden. Whenever we read—whether our language is Japanese, Hebrew, English, or Italian—each of us relies on very similar brain networks. In particular, a small region of the visual cortex becomes active with remarkable reproducibility in the brains of all readers. A brief localizer scan, during which images of brain activity are collected as a person responds to written words, faces, objects, and other visual stimuli, serves to identify this region. Written words never fail to activate a small region at the base of the left hemisphere, always at the same place, give or take a few millimeters.
Experts call this region the visual word form area, but in a recent book for the general public, I dubbed it the brain’s letterbox, because it concentrates much of our visual knowledge of letters and their configurations. Indeed, this site is amazingly specialized. The letterbox responds to written words more than it does to most other categories of visual stimuli, including pictures of faces, objects, houses, and even Arabic numerals. Its efficiency is so great that it even responds to words that we fail to recognize consciously—words made subliminal by flashing them for a fraction of a second. Yet it performs highly sophisticated operations that are indispensable to fluent reading. For instance, the letterbox is the first visual area that recognizes that “READ” and “read” depict the same word by representing strings of letters invariantly for changes in case, which is no small feat if you consider that uppercase and lowercase letters such as “A” and “a” bear very little similarity. Furthermore, if it is impaired or disconnected by brain surgery or a cerebral infarct (a type of stroke), the patient may develop a syndrome called pure alexia: he or she will be unable to read even a single word, even though faces, objects, digits, and Arabic numerals may still be recognized. Many of these patients can still speak and understand spoken language fluently, and they may even still write; only their visual capacity to process letter strings seems dramatically affected.
The brain of any educated adult contains a circuit specialized for reading. But how is this possible, given that reading is an extremely recent and highly variable cultural activity? The alphabet is only about 4,000 years old, and until recently, only a very small fraction of humanity could read. Thus, there was no time for Darwinian evolution to shape our genome and adapt our brain networks to the particularities of reading. How is it, then, that we all possess a specialized letterbox area?

Read more

Filed under letterbox visual stimuli brain activity brain circuitry psychology neuroscience science
