Posts tagged brain activity

Brain biology tied to social reorientation during entry to adolescence
A specific region of the brain is in play when children consider their identity and social status as they transition into adolescence — that often-turbulent time of reaching puberty and entering middle school, says a University of Oregon psychologist.
In a study of 27 neurologically typical children who underwent functional magnetic resonance imaging (fMRI) at ages 10 and 13, activity in the brain’s ventromedial prefrontal cortex increased dramatically when the subjects responded to questions about how they view themselves.
The findings, published in the April 24 issue of the Journal of Neuroscience, confirm previous findings that specific brain networks support self-evaluations in the growing brain, but, more importantly, provide evidence that basic biology may well drive some of these changes, says Jennifer H. Pfeifer, professor of psychology and director of the psychology department’s Developmental Social Neuroscience Lab.
"This is a longitudinal fMRI study, which is still relatively uncommon," Pfeifer said. "It suggests a link between neural responses during self-evaluative processing in the social domain, and pubertal development. This provides a rare piece of empirical evidence in humans, rather than animal models, that supports the common theory that adolescents are biologically driven to go through a social reorientation."
Participants were scanned for about seven minutes at each visit. They responded to a series of attributes tied to social or academic domains — social ones such as “I am popular” or “I wish I had more friends” and academic ones such as “I like to read just for fun” or “Writing is so boring.” Social and academic evaluations were made about both the self and a familiar fictional character, Harry Potter.
In previous research, Pfeifer had found that a more dorsal region of the medial prefrontal cortex was more responsive in 10-year-old children during self-evaluations, when they were compared to adults. The new study, she said, provides a more detailed picture of how the brain supports self-development by looking at change within individuals.
The fMRI analyses found it was primarily the social self-evaluations that triggered significant increases over time in blood-oxygen levels, which fMRI detects, in the ventral medial prefrontal cortex. Additionally, these increases were strongest in children who experienced the most pubertal development over the three-year study period, for both girls and boys. Increases during academic self-evaluations were at best marginal. Whole-brain analyses found no other areas of the brain had significant increases or decreases in activity related to pubertal development.
"Neural changes in the social domain were more robust," Pfeifer said. "Increased responses in this one region of the brain from age 10 to 13 were very evident in social self-evaluations, but not academic ones. This pattern is consistent with the enormous importance that most children entering adolescence place on their peer relationships and social status, compared to the relatively diminished value often associated with academics during this transition."
In youth with autism spectrum disorders, this specialized response in ventral medial prefrontal cortex is missing, she added, citing a paper she co-authored in the February 2013 issue of the Journal of Autism and Developmental Disorders and a complementary study led by Michael V. Lombardo, University of Cambridge, in the February 2010 issue of the journal Brain. The absence of this typical effect, Pfeifer said, might be related to the challenges these individuals often face in both self-understanding and social relations.
"Dr. Pfeifer’s research examining self-evaluations during adolescence adds significantly to the intricate puzzle of this turbulent age period," said Kimberly Andrews Espy, vice president for research and innovation and dean of the graduate school. "Researchers at the University of Oregon are piecing together how both biology and the environment dynamically and interactively support healthy social development."
A contact lens on the bathroom floor, an escaped hamster in the backyard, a car key in a bed of gravel: How are we able to focus so sharply to find that proverbial needle in a haystack? Scientists at the University of California, Berkeley, have discovered that when we embark on a targeted search, various visual and non-visual regions of the brain mobilize to track down a person, animal or thing.

That means that if we’re looking for a youngster lost in a crowd, the brain areas usually dedicated to recognizing other objects such as animals, or even the areas governing abstract thought, shift their focus and join the search party. Thus, the brain rapidly switches into a highly focused child-finder, and redirects resources it uses for other mental tasks.
“Our results show that our brains are much more dynamic than previously thought, rapidly reallocating resources based on behavioral demands, and optimizing our performance by increasing the precision with which we can perform relevant tasks,” said Tolga Cukur, a postdoctoral researcher in neuroscience at UC Berkeley and lead author of the study published today (Sunday April 21) in the journal Nature Neuroscience.
“As you plan your day at work, for example, more of the brain is devoted to processing time, tasks, goals and rewards, and as you search for your cat, more of the brain becomes involved in recognition of animals,” he added.
The findings help explain why we find it difficult to concentrate on more than one task at a time. The results also shed light on how people are able to shift their attention to challenging tasks, and may provide greater insight into neurobehavioral and attention deficit disorders such as ADHD.
These results were obtained in studies that used functional Magnetic Resonance Imaging (fMRI) to record the brain activity of study participants as they searched for people or vehicles in movie clips. In one experiment, participants held down a button whenever a person appeared in the movie. In another, they did the same with vehicles.
The brain scans simultaneously measured neural activity via blood flow in thousands of locations across the brain. Researchers used regularized linear regression analysis, which finds correlations in data, to build models showing how each of the roughly 50,000 locations near the cortex responded to each of the 935 categories of objects and actions seen in the movie clips. Next, they compared how much of the cortex was devoted to detecting humans or vehicles depending on whether or not each of those categories was the search target.
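The modeling step described above can be sketched in a few lines of code. The sketch below is a toy illustration only: the dimensions are invented (the study used roughly 50,000 cortical locations and 935 categories) and the data are simulated, but it shows the core idea of ridge (regularized linear) regression mapping category occurrences to voxel responses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; the actual study's numbers were far larger.
n_timepoints, n_categories, n_voxels = 400, 30, 50

# Design matrix: which object/action categories appear at each timepoint.
X = rng.integers(0, 2, size=(n_timepoints, n_categories)).astype(float)

# Simulated voxel responses: each voxel has an unknown category tuning.
true_weights = rng.normal(size=(n_categories, n_voxels))
Y = X @ true_weights + 0.5 * rng.normal(size=(n_timepoints, n_voxels))

# Ridge regression: W = (X'X + aI)^-1 X'Y. The regularization keeps
# the fit stable when categories co-occur in the movie clips.
alpha = 10.0
W = np.linalg.solve(X.T @ X + alpha * np.eye(n_categories), X.T @ Y)

# Each column of W is one voxel's estimated tuning across categories.
# Comparing such tuning maps between search conditions is how one
# could ask whether attention shifts category tuning across cortex.
print(W.shape)  # (n_categories, n_voxels)
```

Fitting one such model per attention condition, then comparing the tuning weights, mirrors the comparison the researchers describe between the person-search and vehicle-search conditions.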

They found that when participants searched for humans, relatively more of the cortex was devoted to humans, and when they searched for vehicles, more of the cortex was devoted to vehicles. For example, areas that were normally involved in recognizing specific visual categories such as plants or buildings switched to become attuned to humans or vehicles, vastly expanding the area of the brain engaged in the search.
“These changes occur across many brain regions, not only those devoted to vision. In fact, the largest changes are seen in the prefrontal cortex, which is usually thought to be involved in abstract thought, long-term planning, and other complex mental tasks,” Cukur said.
The findings build on an earlier UC Berkeley brain imaging study that showed how the brain organizes thousands of animate and inanimate objects into what researchers call a “continuous semantic space.” Those findings challenged previous assumptions that every visual category is represented in a separate region of the visual cortex. Instead, researchers found that categories are actually represented in highly organized, continuous maps.
The latest study goes further to show how the brain’s semantic space is warped during a visual search, depending on the search target. Researchers have posted their results in an interactive, online brain viewer. Other co-authors of the study are UC Berkeley neuroscientists Jack Gallant, Alexander Huth and Shinji Nishimoto. Funding for the research was provided by the National Eye Institute of the National Institutes of Health.
We present an efficient approach to discriminate between typical and atypical brains from macroscopic neural dynamics recorded as magnetoencephalograms (MEG). Our approach is based on the fact that spontaneous brain activity can be accurately described with stochastic dynamics, as a multivariate Ornstein-Uhlenbeck process (mOUP). By fitting the data to a mOUP we obtain: 1) the functional connectivity matrix, corresponding to the drift operator, and 2) the traces of background stochastic activity (noise) driving the brain. We applied this method to investigate functional connectivity and background noise in juvenile patients (n = 9) with Asperger’s syndrome, a form of autism spectrum disorder (ASD), and compared them to age-matched juvenile control subjects (n = 10). Our analysis reveals significant alterations in both functional brain connectivity and background noise in ASD patients. The dominant connectivity change in ASD relative to control shows enhanced functional excitation from occipital to frontal areas along a parasagittal axis. Background noise in ASD patients is spatially correlated over wide areas, as opposed to control, where areas driven by correlated noise form smaller patches. An analysis of the spatial complexity reveals that it is significantly lower in ASD subjects. Although the detailed physiological mechanisms underlying these alterations cannot be determined from macroscopic brain recordings, we speculate that enhanced occipital-frontal excitation may result from changes in white matter density in ASD, as suggested in previous studies. We also venture that long-range spatial correlations in the background noise may result from less specificity (or more promiscuity) of thalamo-cortical projections. All the calculations involved in our analysis are highly efficient and outperform other algorithms to discriminate typical and atypical brains with a comparable level of accuracy. 
Altogether our results demonstrate a promising potential of our approach as an efficient biomarker for altered brain dynamics associated with a cognitive phenotype.
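The central fitting step the abstract describes — recovering the drift operator (read as a functional connectivity matrix) and the residual background noise from a multivariate Ornstein-Uhlenbeck model — can be sketched on simulated data. This is an illustrative toy, not the authors' pipeline: the channel count, noise level, and the simple least-squares drift estimator are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy multivariate Ornstein-Uhlenbeck (mOU) process, dx = A x dt + dW,
# where the drift operator A plays the role of functional connectivity.
n_channels, n_steps, dt = 5, 20000, 0.01
A = -np.eye(n_channels) + 0.2 * rng.normal(size=(n_channels, n_channels))

# Simulate the process with the Euler-Maruyama scheme.
x = np.zeros((n_steps, n_channels))
for t in range(n_steps - 1):
    noise = np.sqrt(dt) * rng.normal(size=n_channels)
    x[t + 1] = x[t] + dt * (A @ x[t]) + noise

# Fit the drift by least squares: regress increments on the state.
dx = np.diff(x, axis=0) / dt
A_hat = np.linalg.lstsq(x[:-1], dx, rcond=None)[0].T

# The regression residuals approximate the "background noise" traces
# driving each channel, which the abstract analyzes for spatial
# correlations and complexity.
residuals = dx - x[:-1] @ A_hat.T
```

On real MEG data the same two outputs — the estimated drift matrix and the residual noise traces — would be the features compared between ASD and control groups.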
Babies develop conscious perception from five months of age
Infants develop the ability to consciously process their environment as early as five months of age, according to a study published in the journal Science.
The team of French and Danish researchers, led by neuroscientist Sid Kouider, discovered a signal in the nervous system of infants that reliably identifies the beginning of visual consciousness, or the ability to see something and recall that you have seen it.
The team set out believing infants had the capacity for conscious reflection, but they had to overcome the barrier that babies could not report their thoughts.
They used electroencephalography (EEG) to record electrical activity in the brains of 80 infants aged five, 12 and 15 months while they were shown pictures of faces and random patterns for a fraction of a second.
When adults are aware of a stimulus, their brains show a two-stage pattern of activity. When they see a moving object, the sensors in the vision centre of the brain activate with a spike of activity.
The signal then moves from the back of the brain to the prefrontal cortex, which deals with higher-level cognition. This is known as the late slow wave.
Conscious awareness begins after the second stage of neural activity reaches a specific threshold.
The new study found this two-stage pattern of brain activity was present in the three groups of infants, though it was weaker and more drawn out in the five-month-olds.
The researchers say neurological markers of visual consciousness may help paediatricians better manage infant pain and anaesthesia.
But they note the research does not provide direct proof of consciousness. “Indeed, it is a genuine philosophical problem whether such a proof can ever be obtained from purely neurophysiological data,” the paper said.
Professor Louise Newman, Director of the Centre for Developmental Psychiatry & Psychology at Monash University, said the study was novel in its ability to measure the way very young brains register stimuli.
But five months should not be seen as a fixed point at which infants start to process information, she said.
“Although this group has studied five months and up, my suspicion would be that if we had different techniques, young infants – from birth on – would show the capacity of registering these sorts of stimuli.
“Infants are born with quite sophisticated capacities to observe, respond to and interact with the environment, particularly the social environment,” she said.
“Very soon after birth, infants will maintain gaze with their parents or parent: they’ve got quite sophisticated visual tracking capacity from an early age.”
Professor Newman, who has undertaken behavioural studies in two- to four-month olds, said young infant brains were extremely sensitive to their mother’s emotional reaction.
“They learn that ‘if I do this, or if I smile or signal in this way, this is what usually happens’. If you manipulate that so they don’t get that response, they’re very sensitive to that and they show signs that it’s very aversive to them.”

Increased brain activity predicts future onset of substance use
Do people get caught in the cycle of overeating and drug addiction because their brain reward centers are over-active, causing them to experience greater cravings for food or drugs? In a unique prospective study, Oregon Research Institute (ORI) senior scientist Eric Stice, Ph.D., and colleagues tested this theory, called the reward surfeit model. The results indicated that elevated responsivity of reward regions in the brain increased the risk for future substance use, a relationship that had never before been tested prospectively in humans. Paradoxically, the results also provide evidence that even a limited history of substance use was related to less responsivity in the reward circuitry, as experiments with animals have suggested. The research appears in the May 1, 2013 issue of Biological Psychiatry.
In a novel study using functional Magnetic Resonance Imaging (fMRI) Stice’s team tested whether individual differences in reward region responsivity predicted overweight/obesity onset among initially healthy weight adolescents and substance use onset among initially abstinent adolescents. The neural response to food and monetary reward was measured in 162 adolescents. Body fat and substance use were assessed at the time of the fMRI and again one year later.
"The findings are important because this is the first test of whether atypical responsivity of reward circuitry increases risk for substance use," says Dr. Stice. "Although numerous researchers have suggested that reduced responsivity is a vulnerability factor for substance use, this theory was based entirely on cross-sectional studies comparing substance abusing individuals to healthy controls; no studies have tested this thesis with prospective data."
Investigators examined the extent to which reward circuitry (e.g., the striatum) was activated in response to receipt and anticipated receipt of money. Monetary reward is a general reinforcer and has been used frequently to assess reward sensitivity. The team also used another paradigm to assess brain activation in response to the individual’s consumption and anticipated consumption of chocolate milkshake. Results showed that greater activation in the striatum during monetary reward receipt at baseline predicted future substance use onset over a 1-year follow-up.
Noteworthy was that adolescents who had already begun using substances showed less striatal response to monetary reward. This finding provides the first evidence that even a relatively short period of moderate substance use might reduce reward region responsivity to a general reinforcer.
"The implications are that the more individuals use psychoactive substances, the less responsive they will be to rewarding experiences, meaning that they may derive less reinforcement from other pursuits, such as interpersonal relationships, hobbies, and school work. This may contribute to the escalating spiral of drug use that characterizes substance use disorders," commented Stice.
Although the investigators had expected parallel neural predictors of future onset of overweight during exposure to receipt and anticipated receipt of a palatable food, no significant effects emerged. It is possible that these effects are weaker and that a longer follow-up period will be necessary to better differentiate who will gain weight and who will remain at a healthy weight.
Detecting Autism From Brain Activity
Neuroscientists from Case Western Reserve University School of Medicine and the University of Toronto have developed an efficient and reliable method of analyzing brain activity to detect autism in children. Their findings appear today in the online journal PLOS ONE.
The researchers recorded and analyzed dynamic patterns of brain activity with magnetoencephalography (MEG) to determine the brain’s functional connectivity – that is, its communication from one region to another. MEG measures magnetic fields generated by electrical currents in neurons of the brain.
Roberto Fernández Galán, PhD, an assistant professor of neurosciences at Case Western Reserve and an electrophysiologist seasoned in theoretical physics, led the research team that detected autism spectrum disorder (ASD) with 94 percent accuracy. The new analytic method offers an efficient, quantitative way of confirming a clinical diagnosis of autism.
“We asked the question, ‘Can you distinguish an autistic brain from a non-autistic brain simply by looking at the patterns of neural activity?’ and indeed, you can,” Galán said. “This discovery opens the door to quantitative tools that complement the existing diagnostic tools for autism based on behavioral tests.”
In a study of 19 children—nine with ASD—141 sensors tracked the activity of each child’s cortex. The sensors recorded how different regions interacted with each other while the children were at rest, and the researchers compared these interactions in the control group to those in the ASD group. They found significantly stronger connections between rear and frontal areas of the brain in the ASD group: information flowed asymmetrically from posterior to frontal regions, but not in the reverse direction.
The new insight into the directionality of the connections may help identify anatomical abnormalities in ASD brains. Most current measures of functional connectivity do not indicate the interactions’ directionality.
“It is not just who is connected to whom, but rather who is driving whom,” Galán said.
Their approach also allows them to measure background noise, the spontaneous input driving the brain’s activity at rest. A spatial map of these inputs showed more complexity and structure in the control group than in the ASD group, which had less variety and intricacy. This feature discriminated between the two groups even better than functional connectivity alone, again reaching 94 percent accuracy.
Case Western Reserve’s Office of Technology Transfer has filed a provisional patent application for the analysis’ algorithm, which investigates the brain’s activity at rest. Galán and colleagues hope to collaborate with others in the autism field with emphasis on translational and clinical research.
Brain scans are increasingly able to reveal whether or not you believe you remember some person or event in your life. In a new study presented at a cognitive neuroscience meeting today, researchers used fMRI brain scans to detect whether a person recognized scenes from their own lives, as captured in some 45,000 images by digital cameras. The study is seeking to test the capabilities and limits of brain-based technology for detecting memories, a technique being considered for use in legal settings.
“The advancement and falling costs of fMRI, EEG, and other techniques will one day make it more practical for this type of evidence to show up in court,” says Francis Shen of the University of Minnesota Law School, who is chairing a session on neuroscience and the law at a meeting of the Cognitive Neuroscience Society (CNS) in San Francisco this week. “But technological advancement on its own doesn’t necessarily lead to use in the law.” Still, as the technology has advanced and the legal system has sought more empirical evidence, neuroscience and the law are intersecting more often than in previous decades.
In U.S. courts, neuroscientific evidence has been used largely in cases involving brain injury litigation or questions of impaired ability. In some cases outside the United States, however, courts have used brain-based evidence to check whether a person has memories of legally relevant events, such as a crime. New companies also are claiming to use brain scans to detect lies – although judges have not yet admitted this evidence in U.S. courts. These developments have rallied some in the neuroscience community to take a critical look at the promise and perils of such technology in addressing legal questions – working in partnership with legal scholars through efforts such as the MacArthur Foundation Research Network on Law and Neuroscience.
Recognizing your own memories
What inspired Anthony Wagner, a cognitive neuroscientist at Stanford University, to test fMRI uses for memory detection was a case in June 2008 in Mumbai, India, in which a judge cited EEG evidence as indicating that a murder suspect held knowledge about the crime that only the killer could possess. “It appeared that the brain data held considerable sway,” says Wagner, who points out that the methods used in that case have not been subject to extensive peer review.
Since then, Wagner and colleagues have conducted a number of experiments to test whether brain scans can be used to discriminate between stimuli that people perceive as old or new, as well as more objectively, whether or not they have previously encountered a particular person, place, or thing. To date, Wagner and colleagues have had success in the lab using fMRI-based analyses to determine whether someone recognizes a person or perceives them as unfamiliar, but not in determining whether in fact they have actually seen them before.
In a new study presented today, his team sought to take the experiments out of the lab and into the real world by outfitting participants with digital cameras around their necks that automatically took photos of the participants’ everyday experiences. Over a multi-week period, the cameras yielded 45,000 photos per participant.
Wagner’s team then took brief photo sequences of individual events from the participants’ lives and showed them to the participants in the fMRI scanner, along with photo sequences from other subjects as the control stimuli. The researchers analyzed their brain patterns to determine whether or not the participants were recognizing the sequences as their own. “We did quite well with most subjects, with a mean accuracy of 91% in discriminating between event sequences that the participant recognized as old and those that the participant perceived as unfamiliar,” Wagner says. “These findings indicate that distributed patterns of brain activity, as measured with fMRI, carry considerable information about an individual’s subjective memory experience – that is, whether or not they are remembering the event.”
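The kind of pattern-based decoding described here can be illustrated with a toy multivoxel pattern analysis. Everything below is invented for illustration — the trial counts, voxel counts, signal strength, and the simple nearest-centroid classifier — and conveys only the general logic of training a decoder on labeled brain patterns and testing it on held-out trials.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated trials: each row is one trial's multivoxel activity pattern,
# labeled "recognized" (1) or "unfamiliar" (0).
n_trials, n_voxels = 200, 100
labels = rng.integers(0, 2, size=n_trials)
signal = rng.normal(size=n_voxels)  # pattern added on recognized trials
patterns = rng.normal(size=(n_trials, n_voxels)) + np.outer(labels, signal)

# Hold out the last 50 trials for testing; fit on the rest.
train, test = np.arange(0, 150), np.arange(150, 200)
c0 = patterns[train][labels[train] == 0].mean(axis=0)  # unfamiliar centroid
c1 = patterns[train][labels[train] == 1].mean(axis=0)  # recognized centroid

# Classify each held-out trial by its nearer class centroid.
d0 = np.linalg.norm(patterns[test] - c0, axis=1)
d1 = np.linalg.norm(patterns[test] - c1, axis=1)
predicted = (d1 < d0).astype(int)

accuracy = (predicted == labels[test]).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

The essential point the sketch preserves is that the decoder learns the participant's subjective labels (recognized vs. unfamiliar), which is why — as Wagner notes — it tracks the memory experience rather than the objective old/new status of the stimulus.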
In another new study, Wagner and colleagues tested whether people can “beat the technology” by using countermeasures to alter their brain patterns. Back in the lab, the researchers showed participants individual faces and later asked them whether the faces were old or new. “Halfway through the memory test, we stopped and told them ‘What we are actually trying to do is read out from your brain patterns whether or not you are recognizing the face or perceiving it as novel, and we’ve been successful with other subjects in doing this in the past. Now we want you to try to beat the system by altering your neural responses.’” The researchers instructed the participants to think about a familiar person or experience when presented with a new face, and to focus on a novel feature of the face when presented a previously encountered face.
“In the first half of the test, during which participants were just making memory decisions, we were well above chance in decoding from brain patterns whether they recognized the face or perceived it as novel. However, in the second half of the test, we were unable to classify either whether they recognized the face or whether the face was objectively old or new,” Wagner says. Within a forensic setting, Wagner says, it is conceivable that a suspect could use such countermeasures to try to mask the brain patterns associated with memory.
Wagner says that his work to date suggests that the technology may have some utility in reading out brain patterns in cooperative individuals but that the uses are much more uncertain with uncooperative individuals. However, Wagner stresses that the method currently does not distinguish well between whether a person’s memory reflects true or false recognition. He says that it is premature to consider such evidence in the courts because many additional factors await future testing, including the effects of stress, practice, and time between the experience and the memory test.
Overgeneralizing the adolescent brain
A general challenge to the use of neuroscientific evidence in legal settings, Wagner says, is that most studies are at the group rather than the individual level. “The law cares about a particular individual in a particular situation right in front of them,” he says, and the science often cannot speak to that specificity.
Shen cites the challenge of making individualized inferences from group-based data as one of the major issues facing the use of neuroscience evidence in court. “This issue has come up in the context of juvenile justice, where the adolescent brain development data confirms behavioral data that on average 17-year-olds are more impulsive than adults, but does not tell us whether a particular 17-year-old, namely the one on trial, was less able to control his/her actions on the day and in the manner in question,” he says.
Indeed, B.J. Casey of the Weill Medical College of Cornell University says that too often we overgeneralize the lack of self control among adolescents. Although adolescents do show poor self control as a group, some situations and individuals are more prone to this breakdown than others.
“It is not that teens can’t make decisions; they can, and they can do so efficiently,” Casey says. “It is when they must make decisions in the heat of the moment – in the presence of potential or perceived threats, among peers – that the court should consider diminished responsibility of teens while still holding them accountable for their behavior.” Research suggests that this diminished ability is due to the immature development of the circuitry that processes negative or positive cues in the environment in the subcortical limbic regions and then regulates responses to those cues in the prefrontal cortex.
The body of research to date is at the group-level, however, and is not yet able to comment on the neurobiological maturity of an individual adolescent. To help provide more guidance on this issue in legal settings, Casey and colleagues are working alongside legal scholars on a developmental imaging study, funded by the MacArthur Foundation, that is examining behaviors relevant to juvenile criminal behavior, including impulsivity and peer influence.
Making real-world connections
The same type of work – connecting brain imaging to particular behaviors in the real world – is ongoing in a number of other areas, including fMRI-based lie detection and linking negligence to specific mental states. “It’s a big leap to go from a laboratory setting, in which impulse control may be measured by one’s ability to not press a button in response to a stimulus, to the real world, where the question is whether someone had the requisite self-control not to tie up an innocent person and throw them off a bridge,” Shen says. “I don’t see neuroscience solving these big problems anytime soon, and so the question for law becomes: What do we do with this uncertainty? I think this is where we’re at right now, and where we’ll be for some time.”
“With a few notable exceptions such as death penalty cases, cases where a juvenile is facing a very stiff sentence, and litigating brain injury claims, ‘law and neuroscience’ is not familiar to most lawyers,” Shen says. “But this might change – and soon.” The ongoing work is vital, he says, for laying a foundation for a future that’s yet to come, and he hopes that more neuroscientists will increasingly collaborate with legal scholars.

Restoring paretic hand function via an artificial neural connection bridging spinal cord injury
Functional loss of limb control in individuals with spinal cord injury or stroke can be caused by interruption of the neural pathways between brain and spinal cord, even though the neural circuits located above and below the lesion remain functional. An artificial neural connection that bridges the lost pathway and links the brain to spinal circuits has the potential to ameliorate this functional loss. Yukio Nishimura, Associate Professor at the National Institute for Physiological Sciences, Japan, together with Eberhard Fetz, Professor, and Steve Perlmuter, Research Associate Professor, at the University of Washington, United States, investigated the effects of introducing a novel artificial neural connection that bridged a spinal cord lesion in a paretic monkey. This connection allowed the monkey to electrically stimulate the spinal cord through volitionally controlled brain activity and thereby regain volitional control of the paretic hand. The study demonstrates that artificial neural connections can compensate for interrupted descending pathways and promote volitional control of upper limb movement after damage to neural pathways from spinal cord injury or stroke. The study will be published online in Frontiers in Neural Circuits on April 11.
"The important point is that individuals who are paralyzed want to be able to move their own bodies by their own will. This study was different from what other research groups have done up to now; we didn’t use any prosthetic limbs like robotic arms to replace the original arm. What’s new is that we have been able to use this artificial neuronal connection bypassing the lesion site to restore volitional control of the subject’s own paretic arm. I think that for lesions of the corticospinal pathway this might even have a better chance of becoming a real prosthetic treatment rather than the sort of robotic devices that have been developed recently", Associate professor Nishimura said.
Revealing the scientific secrets of why people can’t stop after eating one potato chip
The scientific secrets underpinning that awful reality about potato chips — eat one and you’re apt to scarf ’em all down — began coming out of the bag today in research presented at the 245th National Meeting & Exposition of the American Chemical Society, the world’s largest scientific society. The meeting, which news media have termed “The World Series of Science,” features almost 12,000 presentations on new discoveries and other topics. It continues here through today.
Tobias Hoch, Ph.D., who conducted the study, said the results shed light on the causes of a condition called “hedonic hyperphagia” that plagues hundreds of millions of people around the world.
“That’s the scientific term for ‘eating to excess for pleasure, rather than hunger,’” Hoch said. “It’s recreational over-eating that may occur in almost everyone at some time in life. And the chronic form is a key factor in the epidemic of overweight and obesity that here in the United States threatens health problems for two out of every three people.”
The team at FAU Erlangen-Nuremberg, in Erlangen, Germany, probed the condition with an ingenious study in which scientists allowed one group of laboratory rats to feast on potato chips. Another group got bland old rat chow. Scientists then used high-tech magnetic resonance imaging (MRI) devices to peer into the rats’ brains, seeking differences in activity between the rats-on-chips and the rats-on-chow.
With recent studies showing that two-thirds of Americans are obese or overweight, this kind of recreational over-eating continues to be a major problem, health care officials say.
The team suspected that one reason people are drawn to these foods, even on a full stomach, is their high ratio of fats and carbohydrates, which sends a pleasing message to the brain. Yet when rats in the study were fed the same mixture of fat and carbohydrates found in the chips, their brains reacted much less strongly than they did to the chips themselves.
“The effect of potato chips on brain activity, as well as feeding behavior, can only partially be explained by their fat and carbohydrate content,” Hoch explained. “There must be something else in the chips that makes them so desirable,” he said.
In the study, rats were offered one of three test foods in addition to their standard chow pellets: powdered standard animal chow, a mixture of fat and carbohydrates, or potato chips. The animals ate similar amounts of all three, but they pursued the potato chips most actively, which can be explained only partly by the snack’s high energy content, he said. They were also most active in general after eating the snack food.
Although the fat and carbohydrate mixture was also energy dense, the rats pursued the chips most actively and the standard chow least actively. This was further evidence, Hoch said, that some ingredient in the chips was sparking more interest in the rats than the fats and carbohydrates alone.
Hoch explained that the team mapped the rats’ brains using manganese-enhanced magnetic resonance imaging (MEMRI) to monitor brain activity. The reward and addiction centers of the brain recorded the most activity, but brain areas governing food intake, sleep, activity and motion also responded significantly differently to the potato chips.
“By contrast, differences in brain activity between the standard chow group and the fat-carbohydrate group were minor, and they only partly matched the differences between the standard chow and potato chip groups,” he added.
Since chips and other snack foods act on the reward center in the brain, one explanation for why some people do not like snacks is that “possibly, the extent to which the brain reward system is activated in different individuals can vary depending on individual taste preferences,” according to Hoch. “In some cases maybe the reward signal from the food is not strong enough to overrule the individual taste.” And some people may simply have more willpower than others in choosing not to eat large quantities of snacks, he suggested.
If scientists can pinpoint the molecular triggers in snacks that stimulate the reward center in the brain, it may be possible to develop drugs or nutrients to add to foods that will help block this attraction to snacks and sweets, he said. The next project for the team, he added, is to identify these triggers. He added that MRI studies with humans are on the research agenda for the group.
On the other hand, Hoch said there is no evidence at this time that ingredients could be added to healthful, albeit rather unpopular, foods like Brussels sprouts to activate the reward center in the brain in the same positive way.
Sound stimulation during sleep can enhance memory
Slow oscillations in brain activity, which occur during so-called slow-wave sleep, are critical for retaining memories. Researchers reporting online April 11 in the Cell Press journal Neuron have found that playing sounds synchronized to the rhythm of the slow brain oscillations of people who are sleeping enhances these oscillations and boosts their memory. This demonstrates an easy and noninvasive way to influence human brain activity to improve sleep and enhance memory.
"The beauty lies in the simplicity of applying auditory stimulation at low intensities — an approach that is both practical and ethical, compared for example with electrical stimulation — and it therefore represents a straightforward tool for clinical settings to enhance sleep rhythms," says coauthor Dr. Jan Born, of the University of Tübingen, in Germany.
Dr. Born and his colleagues tested 11 individuals on different nights, during which the volunteers received either sound stimulation or sham stimulation. When the sounds were delivered in sync with the brain’s slow oscillation rhythm, the volunteers were better able to remember word associations they had learned the evening before. Stimulation out of phase with the slow oscillation rhythm was ineffective.
"Importantly, the sound stimulation is effective only when the sounds occur in synchrony with the ongoing slow oscillation rhythm during deep sleep. We presented the acoustic stimuli whenever a slow oscillation ‘up state’ was upcoming, and in this way we were able to strengthen the slow oscillation, making it higher in amplitude and longer-lasting," explains Dr. Born.
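The timing logic Dr. Born describes — anticipating the next up state and placing a sound there — can be sketched in a few lines. This is a purely illustrative toy, assuming a ~0.75 Hz slow oscillation, an invented detection threshold, and a fixed half-cycle delay; the actual study used real-time EEG phase detection, not this simplification.

```python
# Illustrative sketch of phase-locked ("in sync") auditory stimulation:
# detect the descent into a slow-oscillation down state, then schedule a
# click half a cycle later, when the up state is expected. The threshold
# and fixed delay are assumptions for demonstration only.
import math

SLOW_OSC_HZ = 0.75   # approximate slow-oscillation frequency
FS = 100             # samples per second of the simulated signal

def stimulus_times(eeg, threshold=-0.8):
    """Return sample indices at which to play a click: whenever the
    signal crosses below `threshold` (heading into a down state),
    schedule one click half a cycle later, near the upcoming up state."""
    half_cycle = int(FS / SLOW_OSC_HZ / 2)
    clicks, i = [], 1
    while i < len(eeg):
        if eeg[i] < threshold <= eeg[i - 1]:   # downward threshold crossing
            clicks.append(i + half_cycle)      # aim for the next up state
            i += half_cycle                    # at most one click per cycle
        i += 1
    return clicks

# Synthetic slow oscillation (inverted sine, so it dips first):
# the scheduled clicks land close to the signal's positive peaks.
eeg = [-math.sin(2 * math.pi * SLOW_OSC_HZ * t / FS) for t in range(400)]
clicks = stimulus_times(eeg)
```

Out-of-phase ("sham-timed") stimulation corresponds to dropping the half-cycle delay, which, per the study, leaves the slow oscillations — and memory — unimproved.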
The researchers suspect that this approach might also be used more generally to improve sleep. “Moreover, it might be even used to enhance other brain rhythms with obvious functional significance—like rhythms that occur during wakefulness and are involved in the regulation of attention,” says Dr. Born.