Posts tagged brain activity

Judgment and decision-making: brain activity indicates there is more than meets the eye
Published today in PLOS ONE, the study is the first in the world to show that it is possible to predict abstract judgments from brain waves, even though people were not conscious of making such judgments. The study also increases our understanding of impulsive behaviours and how to regulate them.
It found that researchers could predict from participants’ brain activity how exciting they found a particular image to be, and whether a particular image made them think more about the future or the present. This is true even though the brain activity was recorded before participants knew they were going to be asked to make these judgments.
Lead authors Dr Stefan Bode from the Melbourne School of Psychological Sciences and Dr Carsten Murawski from the University of Melbourne Department of Finance said these findings illustrated there was more information encoded in brain activity than previously assumed.
“We have found that brain activity when looking at images can encode judgments such as time reference, even when the viewer is not aware of making such judgments. Moreover, our results suggest that certain images can prompt a person to think about the present or the future,” they said.
The authors said the results contributed to our understanding of impulsive behaviours, especially where those behaviours were caused by ‘prompts’ in the world around us.
“For instance, consider someone trying to quit gambling who sees a gambling advertisement on TV. Our results suggest that even if this person is trying to ignore the ad, their brain may be unconsciously processing it and making it more likely that they will relapse,” they said.
The researchers used electroencephalography (EEG) to measure the electrical activity of people’s brains while they looked at different pictures. The pictures displayed images of food, social scenes or status symbols such as cars and money.
After the EEG, researchers showed participants the same pictures again and asked questions about each image, such as how exciting they thought the image was or how strongly the image made them think of either the present or the future.
A statistical ‘decoding’ technique was then used to predict the judgments participants made about each of the pictures from the EEG brain activity that was recorded.
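The release does not specify which decoding algorithm the researchers used, but the logic of predicting trial-by-trial judgments from recorded brain activity can be sketched with synthetic data. The following minimal Python (NumPy) illustration is hypothetical: it assumes trial-wise EEG feature vectors and binary judgment labels, and uses a simple nearest-centroid classifier on held-out trials. None of the numbers come from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for EEG features: 100 trials x 32 channels.
# Trials labelled 1 (e.g. "future-oriented") carry a small mean shift,
# mimicking a judgment-related signal embedded in noise.
n_trials, n_channels = 100, 32
labels = rng.integers(0, 2, n_trials)
signal = 0.8 * labels[:, None]          # class-dependent offset
eeg = signal + rng.normal(size=(n_trials, n_channels))

# Split into training and held-out test trials.
train, test = np.arange(0, 70), np.arange(70, n_trials)

# Nearest-centroid "decoder": classify each held-out trial by which
# class-mean activity pattern it is closer to (Euclidean distance).
centroids = np.stack([eeg[train][labels[train] == c].mean(axis=0)
                      for c in (0, 1)])
dists = np.linalg.norm(eeg[test][:, None, :] - centroids[None], axis=2)
predictions = dists.argmin(axis=1)

accuracy = (predictions == labels[test]).mean()
print(f"decoding accuracy: {accuracy:.2f}")  # well above the 0.5 chance level
```

Above-chance accuracy on held-out trials is what licenses the claim that the judgments were "encoded" in the brain activity before the judgments were ever requested.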
Co-author Daniel Bennett said just as certain prompts might cause impulsive behaviour, images could be used to prompt people to be more patient by regulating impulse control.
“Our results suggest that prompting people with images related to the future might cause processing outside awareness that could make it easier to think about the future. In theory, this could make people less impulsive and more likely to make healthy long-term decisions. These are hypotheses we will try to test in the future,” he said. The research was done in collaboration with the University of Cologne, Germany.
Sleep twitches light up the brain
A University of Iowa study has found twitches made during sleep activate the brains of mammals differently than movements made while awake.
Researchers say the findings show twitches during rapid eye movement (REM) sleep comprise a different class of movement and provide further evidence that sleep twitches activate circuits throughout the developing brain. In this way, twitches teach newborns about their limbs and what they can do with them.
“Every time we move while awake, there is a mechanism in our brain that allows us to understand that it is we who made the movement,” says Alexandre Tiriac, a fifth-year graduate student in psychology at the UI and first author of the study, which appeared this month in the journal Current Biology. “But twitches seem to be different in that the brain is unaware that they are self-generated. And this difference between sleep and wake movements may be critical for how twitches, which are most frequent in early infancy, contribute to brain development.”
Mark Blumberg, a psychology professor at the UI and senior author of the study, says this latest discovery is further evidence that sleep twitches—whether in dogs, cats or humans—are connected to brain development, not dreams.
“Because twitches are so different from wake movements,” he says, “these data put another nail in the coffin of the ‘chasing rabbits’ interpretation of twitches.”
For this study, Blumberg, Tiriac and fellow graduate student Carlos Del Rio-Bermudez studied the brain activity of unanesthetized rats between 8 and 10 days of age. They measured the brain activity while the animals were awake and moving and again while the rats were in REM sleep and twitching.
What they discovered was puzzling, at first.
“We noticed there was a lot of brain activity during sleep movements but not when these animals were awake and moving,” Tiriac says.
The researchers theorized that sensations coming back from twitching limbs during REM sleep were being processed differently in the brain than awake movements because they lacked what is known as “corollary discharge.”
First introduced by researchers in 1950, corollary discharge is a split-second message sent to the brain that allows animals—including rats, crickets, humans and more—to recognize and filter out sensations generated from their own actions. This filtering of sensations is what allows animals to distinguish between sensations arising from their own movements and those from stimuli in the outside world.
So, when the UI researchers noticed an increase in brain activity while the newborn rats were twitching during REM sleep but not when the animals were awake and moving, they conducted several follow-up experiments to determine whether sleep twitching is a unique self-generated movement that is processed as if it lacks corollary discharge.
The experiments consistently supported the idea that sensations arising from twitches are not filtered. Without the filtering provided by corollary discharge, the sensations generated by twitching limbs are free to activate the brain and teach the newborn brain about the structure and function of the limbs.
“If twitches were like wake movements, the signals arising from twitching limbs would be filtered out,” Blumberg says. “That they are not filtered out suggests again that twitches are special—perhaps special because they are needed to activate developing brain circuits.”
The UI researchers were initially surprised to find the filtering system functioning so early in development.
“But what surprised us even more,” Blumberg says, “was that corollary discharge appears to be suspended during sleep in association with twitching, a possibility that – to our knowledge – has never before been entertained.”
A chemical in the brain plays a vital role in controlling the involuntary movements and vocal tics associated with Tourette Syndrome (TS), a new study has shown.

The research by psychologists at The University of Nottingham, published in the latest edition of the journal Current Biology, could offer a potential new target for the development of more effective treatments to suppress these unwanted symptoms.
The study, led by PhD student Amelia Draper under the supervision of Professor Stephen Jackson, found that higher levels of a neurochemical called GABA in a part of the brain known as the supplementary motor area (SMA) help to dampen down hyperactivity in the cortical areas that produce movement.
By reducing this hyperactivity, only the strongest signals would get through and produce a movement.
Greater control
Amelia said: “This result is significant because new brain stimulation techniques can be used to increase or decrease GABA in targeted areas of the cortex. It may be possible that such techniques to adjust the levels of GABA in the SMA could help young people with TS gain greater control over their tics.”
Tourette Syndrome is a developmental disorder associated with these involuntary and repetitive vocal and movement tics. Although the exact cause of TS is unknown, research has shown that people with TS have alterations in their brain ‘circuitry’ that are involved in producing and controlling motor functions.
Both the primary motor cortex (M1) and the supplementary motor area (SMA) are thought to be hyperactive in the brains of those with TS, causing the tics which can be both embarrassing and disruptive, especially for children who often find it difficult to concentrate at school.
Many people with TS can partially control their tics, but this often takes enormous mental energy, can leave them exhausted by the end of the day, and can make their tics more frequent and excessive when they finally ‘relax’. Most people diagnosed with TS in childhood gradually gain control over their tics and have only mild symptoms by early adulthood, but by then their education and social friendships may already have been disrupted.
Greater detail
The scientists used a technique called magnetic resonance spectroscopy (MRS) in a 7 Tesla Magnetic Resonance Imaging (MRI) scanner to measure the concentration of certain chemicals in the brain known as neurotransmitters which offer an indication of brain activity.
The chemicals were measured in the M1, the SMA and an area involved in visual processing (V1) which was used as a control (comparison) site. They tested a group of young people with TS and a matched group of typical young people with no known disorders.
They discovered that the people with TS had higher concentrations of GABA, which inhibits neuronal activity, in the SMA.
They used other neuroscience techniques to explore the result in greater detail. Using functional MRI, they found that having more GABA in the SMA meant that the people with Tourette Syndrome had less activity in the SMA when asked to perform a simple motor task, in this case tapping their finger.
Using another technique called transcranial magnetic stimulation (TMS), in which a magnetic field is passed over the brain to stimulate neuron activity, they found that those with the most GABA dampened down brain activity in the M1 when preparing to make a movement. In contrast, the typically developing group increased their M1 activity during movement preparation.
Paradoxical finding
Finally, they considered how GABA was related to brain structure, specifically the corpus callosum, the bundle of white matter fibres that connects the two hemispheres of the brain. They discovered that those with the highest levels of GABA also had the most connecting fibres. They concluded that more connecting fibres produce more excitatory signals, creating the need for even more GABA to calm the excess hyperactivity.
The results could lead the way to more targeted approaches to controlling tics. New brain stimulation techniques such as transcranial direct-current stimulation (tDCS), a form of neurostimulation that uses a constant, low-level electrical current delivered directly to the brain via electrodes, have already been shown to be successful in increasing or decreasing GABA in targeted areas of the cortex.
Professor Stephen Jackson added: “This finding is paradoxical because prior to our finding, most scientists working on this topic would have thought that GABA levels in TS would be reduced and not increased as we show. This is because a distinction should be made between brain changes that are causes of the disorder (e.g., reduced GABA cells in some key brain areas) and secondary consequences of the disorder (e.g., increased release of GABA in key brain areas) that act to reduce the effects of the disorder.”
New tDCS devices, similar to commercially available TENS machines, could potentially be produced for young people with TS to ‘train’ their brains and help them gain control over their tics. Such devices could be relatively cheap and could be used at home while performing other tasks such as watching television.
(Source: nottingham.ac.uk)
Inattention, hyperactivity and impulsive behaviour in children with ADHD can result in social problems, and these children tend to be excluded from peer activities. They have also been found to have impaired recognition of emotional expression in others’ faces. The research group of Professor Ryusuke Kakigi of the National Institute for Physiological Sciences, National Institutes of Natural Sciences, in collaboration with Professor Masami K. Yamaguchi and Assistant Professor Hiroko Ichikawa of Chuo University, was the first to identify the characteristics of facial expression recognition in children with ADHD by measuring hemodynamic responses in the brain, and showed that the neural basis for the recognition of facial expression may differ from that of typically developing children. The findings are discussed in Neuropsychologia (available online on Aug. 23, 2014).

The research group showed images of a happy or an angry expression to 13 children with ADHD and 13 typically developing children and identified the brain areas activated at the time. They used non-invasive near-infrared spectroscopy to measure brain activity: near-infrared light, which passes easily through body tissue, was projected through the skull, and the absorbed or scattered light was measured. The measured light intensity depends on the concentration of “oxyhemoglobin”, which delivers oxygen to actively working nerve cells. Typically developing children showed a significant hemodynamic response to both the happy and the angry expression in the right hemisphere of the brain. Children with ADHD, on the other hand, showed a significant hemodynamic response only to the happy expression; brain activity specific to the angry expression was not observed. This difference in the neural basis for the recognition of facial expression might be responsible for impairments in social recognition and in establishing peer relationships.
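The group-difference logic described above can be sketched with synthetic data. The Python (NumPy) example below is purely hypothetical: it simulates oxyhemoglobin time courses in which “happy” trials carry a hemodynamic response and “angry” trials carry none, mirroring the ADHD-group result; the shapes, units and trial counts are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical oxyhemoglobin time courses (arbitrary units) over a
# 10-sample window after stimulus onset, for two conditions.
hrf = np.array([0, .1, .3, .6, .8, .7, .5, .3, .15, .05])  # canonical-like response
happy = hrf + rng.normal(scale=0.05, size=(20, 10))        # 20 "happy" trials: response present
angry = rng.normal(scale=0.05, size=(20, 10))              # 20 "angry" trials: no response

# Condition-specific response: peak of the trial-averaged time course.
happy_peak = happy.mean(axis=0).max()
angry_peak = angry.mean(axis=0).max()
print(f"happy peak: {happy_peak:.2f}, angry peak: {angry_peak:.2f}")
```

A clear peak for one condition and none for the other is the pattern the NIRS study reports for the children with ADHD.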
(Source: eurekalert.org)
A new, easy-to-use EEG electrode set for measuring the electrical activity of the brain was developed in a recent study completed at the University of Eastern Finland. The solutions developed in the PhD study of Pasi Lepola, MSc, make it possible to attach the electrode set to the patient quickly, producing reliable results without any special treatment of the skin. As EEG measurements in emergency care are often performed in challenging conditions, the design of the electrode set pays particular attention to reducing electromagnetic interference from external sources.
EEG measurements can be used to detect abnormalities in the electrical activity of the brain that require immediate treatment. These abnormalities are often indications of severe brain damage, cerebral infarction, cerebral haemorrhage, poisoning, or unspecified disturbed levels of consciousness. One of the most serious brain function abnormalities is a prolonged epileptic seizure, status epilepticus, which is impossible to diagnose without an EEG measurement. In many cases, a rapidly performed EEG measurement and the prompt start of proper treatment significantly reduce the need for aftercare and rehabilitation. This, in turn, drastically improves the cost-effectiveness of the treatment chain.
Although the benefits of EEG measurements are indisputable, they remain underused in acute and emergency care. A significant reason is that the electrode sets available on the market are difficult to attach to the patient, and their use requires special skills and constant training. The new type of electrode set is expected to make EEG measurements feasible at as early a stage as possible.

The EEG electrode set was produced using screen printing technology, in which silver ink was used to print the conductors and measurement electrodes on a flexible polyester film. The set consists of 16 hydrogel-coated electrodes which, unlike in the traditional method, are placed on the hair-free areas of the patient’s head, making it easy to attach. The new electrode set significantly speeds up the measurement process because there is no need to scrape the patient’s skin or to use any separate gels. Because the set is flexible yet holds its shape, the electrodes settle automatically into their correct positions. Furthermore, there is no need to move the patient’s head when putting on the electrode set, which is especially important in patients possibly suffering from a neck or skull injury. Because the disposable electrode set is easy and fast to use, it is particularly well suited to emergency care, ambulances and even field conditions. Thanks to the materials used, the electrode set does not interfere with any magnetic resonance or computed tomography imaging the patient may undergo.
The performance of the electrode set was tested by using various electrical tests, on several volunteers, and in real patient cases. The results were compared to those obtained by traditional EEG methods.

The PhD study also focused on using screen printing technology to protect electrodes against electromagnetic interference. A silver or graphite shielding layer printed on the outer edge of the electrode set was found to significantly reduce external interference in the EEG signal. This shielding layer can be easily and cost-efficiently added to any measurement electrodes produced with similar methods, and is beneficial when measuring weak signals in conditions that contain external interference.
(Source: uef.fi)
Brain activity can be used to tell whether someone recognizes details they encountered in normal, daily life, which may have implications for criminal investigations and use in courtrooms, new research shows.

The findings, published in Psychological Science, a journal of the Association for Psychological Science, suggest that a particular brain wave, known as P300, could serve as a marker that identifies places, objects, or other details that a person has seen and recognizes from everyday life.
Research using EEG recordings of brain activity has shown that the P300 brain wave tends to be large when a person recognizes a meaningful item among a list of nonmeaningful items. Using P300, researchers can give a subject a test called the Concealed Information Test (CIT) to try to determine whether they recognize information that is related to a crime or other event.
Most studies investigating P300 and recognition have been conducted in lab settings that are far removed from the kinds of information a real witness or suspect might be exposed to. This new study marks an important advance, says lead researcher John B. Meixner of Northwestern University, because it draws on details from activities in participants’ normal, daily lives.
“Much like a real crime, our participants made their own decisions and were exposed to all of the distracting information in the world,” he explains.
“Perhaps the most surprising finding was the extent to which we could detect very trivial details from a subject’s day, such as the color of the umbrella that the participant had used,” says Meixner. “This precision is exciting for the future because it indicates that relatively peripheral crime details, such as physical features of the crime scene, might be usable in a real-world CIT — though we still need to do much more work to learn about this.”
To achieve a more realistic CIT, Meixner and co-author J. Peter Rosenfeld outfitted 24 college student participants with small cameras that recorded both video and sound — the students wore the cameras clipped to their clothes for 4 hours as they went about their day.
For half of the students, the researchers used the recordings to identify details specific to each person’s day, which became “probe” items for that person. The researchers also came up with corresponding, “irrelevant” items that the student had not encountered — if the probe item was a specific grocery store, for example, the irrelevant items might include other grocery stores.
For the other half of the students, the “probe” items related to details or items they had not encountered, but which were instead drawn from the recordings of other participants. The researchers wanted to simulate a real investigation, in which a suspect with knowledge of a crime would be shown the same crime-related details as a suspect who may have no crime-related knowledge.
The next day, all of the students returned to the lab and were shown a series of words that described different details or items (i.e., the probe and irrelevant items), while their brain activity was recorded via EEG.
The results showed that the P300 was larger for probe items than for irrelevant items, but only for the students who had actually seen or encountered the probe.
Further analyses revealed that P300 responses effectively distinguished probe items from irrelevant items on the level of each individual participant, suggesting that it is a robust and reliable marker of recognition.
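The article does not give the exact statistics used, but the core comparison, whether a participant’s P300 amplitude is reliably larger for probe items than for irrelevant items, can be sketched with a permutation test on synthetic single-participant data. All amplitudes, trial counts and the electrode name below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical single-participant data: P300 amplitudes (microvolts) for
# probe and irrelevant items. A knowledgeable participant shows a larger
# P300 to probes, simulated here as an upward mean shift.
probe = rng.normal(loc=8.0, scale=3.0, size=30)
irrelevant = rng.normal(loc=4.0, scale=3.0, size=150)

# Permutation test: is the observed probe-minus-irrelevant difference
# larger than differences obtained after shuffling the item labels?
observed = probe.mean() - irrelevant.mean()
pooled = np.concatenate([probe, irrelevant])
null = []
for _ in range(2000):
    rng.shuffle(pooled)
    null.append(pooled[:30].mean() - pooled[30:].mean())
p_value = (np.sum(np.array(null) >= observed) + 1) / 2001
print(f"difference: {observed:.2f} uV, p = {p_value:.4f}")
```

A small p-value for a single participant is the kind of individual-level evidence the study describes; for a participant without knowledge of the probes, the difference would hover around zero.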
These findings have implications for memory research, but they may also have real-world application in the domain of criminal law given that some countries, like Japan and Israel, use the CIT in criminal investigations.
“One reason that the CIT has not been used in the US is that the test may not meet the criteria to be admissible in a courtroom,” says Meixner. “Our work may help move the P300-based CIT one step closer to admissibility by demonstrating the test’s validity and reliability in a more realistic context.”
Meixner, Rosenfeld, and colleagues plan on investigating additional factors that may impact detection, including whether images from the recordings may be even more effective at eliciting recognition than descriptive words – preliminary data suggest this may be the case.

No sedative necessary: Scientists discover new “sleep node” in the brain
A sleep-promoting circuit located deep in the primitive brainstem has revealed how we fall into deep sleep. Discovered by researchers at Harvard School of Medicine and the University at Buffalo School of Medicine and Biomedical Sciences, this is only the second “sleep node” identified in the mammalian brain whose activity appears to be both necessary and sufficient to produce deep sleep.
Published online in August in Nature Neuroscience, the study demonstrates that fully half of all of the brain’s sleep-promoting activity originates from the parafacial zone (PZ) in the brainstem. The brainstem is a primordial part of the brain that regulates basic functions necessary for survival, such as breathing, blood pressure, heart rate and body temperature.
“The close association of a sleep center with other regions that are critical for life highlights the evolutionary importance of sleep in the brain,” says Caroline E. Bass, assistant professor of Pharmacology and Toxicology in the UB School of Medicine and Biomedical Sciences and a co-author on the paper.
The researchers found that a specific type of neuron in the PZ that makes the neurotransmitter gamma-aminobutyric acid (GABA) is responsible for deep sleep. They used a set of innovative tools to precisely control these neurons remotely, in essence giving them the ability to turn the neurons on and off at will.
“These new molecular approaches allow unprecedented control over brain function at the cellular level,” says Christelle Ancelet, postdoctoral fellow at Harvard School of Medicine. “Before these tools were developed, we often used ‘electrical stimulation’ to activate a region, but the problem is that doing so stimulates everything the electrode touches and even surrounding areas it didn’t. It was a sledgehammer approach, when what we needed was a scalpel.”
“To get the precision required for these experiments, we introduced a virus into the PZ that expressed a ‘designer’ receptor on GABA neurons only but didn’t otherwise alter brain function,” explains Patrick Fuller, assistant professor at Harvard and senior author on the paper. “When we turned on the GABA neurons in the PZ, the animals quickly fell into a deep sleep without the use of sedatives or sleep aids.”
How these neurons interact with other sleep- and wake-promoting brain regions still needs to be studied, the researchers say, but eventually these findings may translate into new medications for treating sleep disorders, including insomnia, and into better and safer anesthetics.
“We are at a truly transformative point in neuroscience,” says Bass, “where the use of designer genes gives us unprecedented ability to control the brain. We can now answer fundamental questions of brain function, which have traditionally been beyond our reach, including the ‘why’ of sleep, one of the more enduring mysteries in the neurosciences.”
Neuroscientists decode conscious experiences with Hitchcock film
Western researchers have extended their game-changing brain scanning techniques by showing that a short Alfred Hitchcock movie can be used to detect consciousness in vegetative state patients. The study included a Canadian participant who had been entirely unresponsive for 16 years, but is now known to be aware and able to follow the plot of movies.
Lorina Naci, a postdoctoral fellow from Western’s Brain and Mind Institute, and her Western colleagues, Rhodri Cusack, Mimma Anello and Adrian Owen, reported their findings today in The Proceedings of the National Academy of Sciences of the USA (PNAS), in a study titled, A common neural code for similar conscious experiences in different individuals.
While inside the 3T Magnetic Resonance Imaging (MRI) Scanner at Western’s Centre for Functional and Metabolic Mapping, participants watched a highly engaging short film by Alfred Hitchcock. Movie viewing elicited a common pattern of synchronized brain activity. The long-time unresponsive participant’s brain response during the same movie strongly resembled that of the healthy participants, suggesting not only that he was consciously aware, but also that he understood the movie.
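The underlying comparison, how closely one person’s movie-driven brain activity tracks a healthy group’s shared response, can be sketched with synthetic time series. The Python (NumPy) example below is a hypothetical illustration of the synchrony logic, not the study’s actual fMRI pipeline; the group size and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Each viewer's regional fMRI time course is modelled as a shared
# movie-driven signal plus individual noise.
n_timepoints = 300
movie_signal = rng.normal(size=n_timepoints)
healthy = movie_signal + rng.normal(scale=1.0, size=(12, n_timepoints))
patient = movie_signal + rng.normal(scale=1.0, size=n_timepoints)

# Correlate the patient's time course with the healthy group average;
# a high correlation suggests the same movie-driven processing.
group_mean = healthy.mean(axis=0)
r = np.corrcoef(patient, group_mean)[0, 1]
print(f"patient-to-group correlation: r = {r:.2f}")
```

If the patient’s activity were unrelated to the film, the correlation would sit near zero; a strong positive correlation is what suggested the unresponsive participant was following the plot.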
“For the first time, we show that a patient with unknown levels of consciousness can monitor and analyze information from their environment, in the same way as healthy individuals,” said Naci, lead researcher on the new study. “We already know that up to one in five of these patients are misdiagnosed as being unconscious and this new technique may reveal that that number is even higher.”
Owen, the Canada Excellence Research Chair in Cognitive Neuroscience and Imaging, explained, “This approach can detect not only whether a patient is conscious, but also what that patient might be thinking. Thus, it has important practical and ethical implications for the patient’s standard of care and quality of life.”
The researchers hope that this novel method will enable better understanding of behaviorally unresponsive patients, who may be misdiagnosed as lacking consciousness.
(Image caption: These figures show lagged maturation of connections in ADHD between the default mode network, involved in internally-directed thought (i.e., daydreaming) and shown on the left of each figure, and two brain networks involved in externally-focused attention, shown on the right of each figure. The width of each arc represents the number of lagged connections between two regions within each network. Connections that normally increase with age and that are hypoconnected in ADHD are shown in blue; connections that normally decrease with age and that are hyperconnected in ADHD are shown in red.)
Slow to mature, quick to distract: ADHD brain study finds slower development of key connections
A peek inside the brains of more than 750 children and teens reveals a key difference in brain architecture between those with attention deficit hyperactivity disorder and those without.
Kids and teens with ADHD, a new study finds, lag behind others of the same age in how quickly their brains form connections within, and between, key brain networks.
The result: less-mature connections between a brain network that controls internally-directed thought (such as daydreaming) and networks that allow a person to focus on externally-directed tasks. That lag in connection development may help explain why people with ADHD get easily distracted or struggle to stay focused.
What’s more, the new findings, and the methods used to make them, may one day allow doctors to use brain scans to diagnose ADHD — and track how well someone responds to treatment. This kind of neuroimaging “biomarker” doesn’t yet exist for ADHD, or any psychiatric condition for that matter.
The new findings come from a team in the University of Michigan Medical School’s Department of Psychiatry. They used highly advanced computing techniques to analyze a large pool of detailed brain scans that were publicly shared for scientists to study. Their results are published in the Proceedings of the National Academy of Sciences.
Lead author Chandra Sripada, M.D., Ph.D., and colleagues looked at the brain scans of 275 kids and teens with ADHD, and 481 others without it, using “connectomic” methods that can map interconnectivity between networks in the brain.
The scans, made using functional magnetic resonance imaging (fMRI) scanners, show brain activity during a resting state. This allows researchers to see how a number of different brain networks, each specialized for certain types of functions, were “talking” within and amongst themselves.
The researchers found lags in the development of connections within the internally-focused network, called the default mode network or DMN, and in the development of connections between the DMN and two networks that process externally-focused tasks, often called task-positive networks, or TPNs. They could even see that the lags in connection development with the two task-related networks — the frontoparietal and ventral attention networks — were located primarily in two specific areas of the brain.
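The basic connectomic quantity here, the strength of functional connections within and between networks, is typically estimated by correlating node time courses. The Python (NumPy) sketch below is a hypothetical miniature with two four-node networks and invented signals; the real study mapped connectivity across many more nodes and subjects.

```python
import numpy as np

rng = np.random.default_rng(4)

# Resting-state time courses for nodes of two toy networks. Nodes within
# the "DMN" share one latent signal; "TPN" nodes share another, so
# within-network coupling should exceed between-network coupling.
t = 200
dmn_signal, tpn_signal = rng.normal(size=(2, t))
dmn = dmn_signal + rng.normal(scale=1.0, size=(4, t))   # 4 DMN nodes
tpn = tpn_signal + rng.normal(scale=1.0, size=(4, t))   # 4 TPN nodes

# Functional connectivity = correlation between node time courses.
nodes = np.vstack([dmn, tpn])
fc = np.corrcoef(nodes)

# Average within-DMN connectivity (upper triangle, excluding diagonal)
# versus average DMN-to-TPN connectivity.
within_dmn = fc[:4, :4][np.triu_indices(4, k=1)].mean()
between = fc[:4, 4:].mean()
print(f"within-DMN: {within_dmn:.2f}, DMN-TPN: {between:.2f}")
```

Maturational lag in this framework would show up as within- and between-network connectivity values that fail to shift toward the age-typical pattern.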
The new findings mesh well with what other researchers have found by examining the physical structure of the brains of people with and without ADHD in other ways.
Such research has already shown alterations in regions within DMN and TPNs. So, the new findings build on that understanding and add to it.
The findings are also relevant to thinking about the longitudinal course of ADHD from childhood to adulthood. For instance, some children and teens “grow out” of the disorder, while for others the disorder persists throughout adulthood. Future studies of brain network maturation in ADHD could shed light into the neural basis for this difference.
“We and others are interested in understanding the neural mechanisms of ADHD in hopes that we can contribute to better diagnosis and treatment,” says Sripada, an assistant professor and psychiatrist who holds a joint appointment in the U-M Philosophy department and is a member of the U-M Center for Computational Medicine and Bioinformatics. “But without the database of fMRI images, and the spirit of collaboration that allowed them to be compiled and shared, we would never have reached this point.”
Sripada explains that in the last decade, functional medical imaging has revealed that the human brain is functionally organized into large-scale connectivity networks. These networks, and the connections between them, mature throughout early childhood all the way to young adulthood. “It is particularly noteworthy that the networks we found to have lagging maturation in ADHD are linked to the very behaviors that are the symptoms of ADHD,” he says.
Studying the vast array of connections in the brain, a field called connectomics, requires scientists to be able to parse through not just the one-to-one communications between two specific brain regions, but the patterns of communication among thousands of nodes within the brain. This requires major computing power and access to massive amounts of data – which makes the open sharing of fMRI images so important.
“The results of this study set the stage for the next phase of this research, which is to examine individual components of the networks that have the maturational lag,” he says. “This study provides a coarse-grained understanding, and now we want to examine this phenomenon in a more fine-grained way that might lead us to a true biological marker, or neuromarker, for ADHD.”
Sripada also notes that connectomics could be used to examine other disorders with roots in brain connectivity – including autism, which some evidence suggests stems from over-maturation of some brain networks, and schizophrenia, which may arise from abnormal connections. Pooling more fMRI data from people with these conditions, as well as depression, anxiety, bipolar disorder and others, could boost connectomics studies in those fields.
(Image caption: Shown are fMRI scans across all subjects in the study. The yellow and red areas in Section A represent parts of the brain that are activated while subjects are forming “gist memories” of pictures viewed. Section B represents areas of increased activation, shown in yellow and red, as detailed memories are being formed. Credit: Image courtesy of Jagust Lab)
Researchers find neural compensation in people with Alzheimer’s-related protein
The human brain is capable of a neural workaround that compensates for the buildup of beta-amyloid, a destructive protein associated with Alzheimer’s disease, according to a new study led by UC Berkeley researchers.
The findings, published today (Sunday, Sept. 14) in the journal Nature Neuroscience, could help explain how some older adults with beta-amyloid deposits in their brain retain normal cognitive function while others develop dementia.
“This study provides evidence that there is plasticity or compensation ability in the aging brain that appears to be beneficial, even in the face of beta-amyloid accumulation,” said study principal investigator Dr. William Jagust, a professor with joint appointments at UC Berkeley’s Helen Wills Neuroscience Institute, the School of Public Health and Lawrence Berkeley National Laboratory.
Previous studies have shown a link between increased brain activity and beta-amyloid deposits, but it was unclear whether the activity was tied to better mental performance.
The study included 22 healthy young adults and 49 older adults who had no signs of mental decline. Brain scans showed that 16 of the older subjects had beta-amyloid deposits, while the remaining 33 older adults and all 22 young adults did not.
The researchers used functional magnetic resonance imaging (fMRI) to track the brain activity of subjects in the process of memorizing pictures of various scenes. Afterwards, the researchers tested the subjects’ “gist memory” by asking them to confirm whether a written description of a scene – such as a boy doing a handstand – corresponded to a picture previously viewed. Subjects were then asked to confirm whether specific written details of a scene – such as the color of the boy’s shirt – were true.
“Generally, the groups performed equally well in the tasks, but it turned out that for people with beta-amyloid deposits in the brain, the more detailed and complex their memory, the more brain activity there was,” said Jagust. “It seems that their brain has found a way to compensate for the presence of the proteins associated with Alzheimer’s.”
What remains unclear, said Jagust, is why some people with beta-amyloid deposits are better at using different parts of their brain than others. Previous studies suggest that people who engage in mentally stimulating activities throughout their lives have lower levels of beta-amyloid.
“I think it’s very possible that people who spend a lifetime involved in cognitively stimulating activity have brains that are better able to adapt to potential damage,” said Jagust.