Posts tagged plasticity

Just 30 minutes of exercise has benefits for the brain
University of Adelaide neuroscientists have discovered that just one session of aerobic exercise is enough to spark positive changes in the brain that could lead to improved memory and coordination of motor skills.
A study conducted by researchers in the University’s Robinson Research Institute has found changes in the brain that were likely to make it more “plastic” after only 30 minutes of vigorous exercise.
The study involved a small group of healthy people aged in their late 20s to early 30s who rode exercise bikes. They were monitored for changes in the brain immediately after the exercise and again 15 minutes later.
"We saw positive changes in the brain straight away, and these improvements were sustained 15 minutes after the exercise had ended," says research leader Associate Professor Michael Ridding.
"Plasticity in the brain is important for learning, memory and motor skill coordination. The more ‘plastic’ the brain becomes, the more it’s able to reorganise itself, modifying the number and strength of connections between nerve cells and different brain areas."
Associate Professor Ridding says past research has shown that regular physical activity can have positive effects on brain function and plasticity, but it was unknown whether a stand-alone session of exercise would also have similar positive effects.
"We now have evidence suggesting that it does," he says. "This exercise-related change in the brain may, in part, explain why physical activity has a positive effect on memory and higher-level functions."
Associate Professor Ridding says there is now mounting evidence that engaging in aerobic exercise positively influences brain function in many ways - at cellular and molecular levels, as well as in the brain’s architecture.
"Although this was a small sample group, it helps us to better understand the overall picture of how exercise influences the brain," he says.
"We know that plasticity is also important for recovery from brain damage, so this opens up potential therapeutic avenues for patients.
"Further research will be required to see what the possible long-term benefits could be for patients as well as healthy people."

Mathematical model shows how the brain remains stable during learning
Complex biochemical signals that coordinate fast and slow changes in neuronal networks keep the brain in balance during learning, according to an international team of scientists from the RIKEN Brain Science Institute in Japan, UC San Francisco (UCSF), and Columbia University in New York.
The work, reported on October 22 in the journal Neuron, culminates a six-year quest by a collaborative team from the three institutions to solve a decades-old question and opens the door to a more general understanding of how the brain learns and consolidates new experiences on dramatically different timescales.
Neuronal networks form a learning machine that allows the brain to extract and store new information from its surroundings via the senses. Researchers have long puzzled over how the brain achieves sensitivity and stability to unexpected new experiences during learning—two seemingly contradictory requirements.
A new model devised by this team of mathematicians and brain scientists shows how the brain’s network can learn new information while maintaining stability.
To address the problem, the team turned to a classic experimental system. After birth, the visual area of the brain’s cortex undergoes rapid modification that matches the response properties of its neurons to visual experience through the left and right eyes, a phenomenon termed “ocular dominance plasticity,” or ODP. The discovery of this dramatic plasticity was recognized by the 1981 Nobel Prize in Physiology or Medicine awarded to David H. Hubel and Torsten N. Wiesel.
ODP learning contains a paradox that puzzled researchers—it relies on fast-acting changes in activity called “Hebbian plasticity” in which neural connections strengthen or weaken almost instantly depending on their frequency of use. However, acting alone, this process could lead to unstable activity levels.
In 2008, the UCSF team of Megumi Kaneko and Michael P. Stryker found that a second, slower process, termed “homeostatic plasticity,” also controls ODP by tuning up the activity of the whole neural network, much as the overall brightness of a TV screen can be adjusted without changing its images.
By modeling Hebbian and homeostatic plasticity together, mathematicians Taro Toyoizumi and Ken Miller of Columbia saw a possible resolution to the paradox of brain stability during learning. Dr. Toyoizumi, who is now at the RIKEN Brain Science Institute in Japan, explains, “We were running simulations of ODP using a conventional model. When we failed to reconcile Kaneko and Stryker’s data to the model, we had to develop a new theoretical solution.”
"It seemed important to explore the interactions between these two different types of plasticity to understand the computations performed by neurons in the visual area," Dr. Stryker adds. Testing the new mathematical model in an animal during experimental ODP was necessary, so the teams decided to collaborate.
The theory and experimental findings showed that fast Hebbian and slow homeostatic plasticity work together during learning, but only after each has independently assured stability on its own timescale. “The essential idea is that the fast and slow processes control separate biochemical factors,” said Dr. Miller.
"Our model solves the ODP paradox and may explain in general terms how learning occurs in other areas of the brain," said Dr. Toyoizumi. "Building on our general mathematical model for learning could reveal insights into new principles of brain capacities and diseases."
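The qualitative picture above, fast Hebbian growth that destabilizes activity plus a homeostatic scaling term that restores a set point, can be illustrated with a toy simulation. Everything here (the linear rate neuron, the learning rates, the target rate, the `simulate` helper) is an illustrative assumption, not the model published in Neuron:

```python
import numpy as np

def simulate(homeostatic, n_inputs=10, steps=1000, dt=0.01, target=1.0):
    """Toy rate neuron: Hebbian learning alone vs. Hebbian plus homeostatic scaling."""
    rng = np.random.default_rng(0)
    w = np.full(n_inputs, 0.5)           # synaptic weights
    for _ in range(steps):
        x = rng.random(n_inputs)         # presynaptic input rates
        y = w @ x                        # postsynaptic rate (linear neuron)
        w += dt * x * y                  # Hebbian term: co-activity strengthens synapses
        if homeostatic:
            # multiplicative homeostatic scaling nudges overall activity toward a
            # set point, like adjusting a TV's brightness without changing the image
            w *= 1.0 + 10 * dt * (target - y)
        w = np.clip(w, 0.0, None)        # weights stay non-negative
    return float(w @ np.full(n_inputs, 0.5))  # response to a fixed probe input

print(simulate(homeostatic=False))  # Hebbian alone: the response explodes
print(simulate(homeostatic=True))   # with scaling: the response stays near the target
```

The separation into two terms mirrors the paper’s qualitative conclusion that the fast and slow processes act through separate factors, each contributing to stability on its own timescale.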
Have you ever eaten something totally new and it made you sick? Don’t give up; if you try the same food in a different place, your brain will be more “forgiving” of the new attempt. In a new study conducted by the Sagol Department of Neurobiology at the University of Haifa, researchers found for the first time that there is a link between the areas of the brain responsible for taste memory in a negative context and those areas in the brain responsible for processing the memory of the time and location of the sensory experience. When we experience a new taste without a negative context, this link doesn’t exist.

The area of the brain responsible for storing memories of new tastes is the taste cortex, found in a relatively insulated area of the human brain known as the insular cortex. The area responsible for formulating a memory of the place and time of the experience (the episode) is the hippocampus. Until now, researchers assumed that there was no direct connection between these areas – i.e., the processing of information about a taste is not related to the time or the place one experiences the taste. The accepted thinking was that a negative experience – for example, being exposed to a bad taste – would be negative in the same way anywhere, and the brain would create a memory of the taste itself, divorced from the time or place.
But in this new study, conducted by doctoral student Adaikkan Chinnakkaruppan in the laboratory of Prof. Kobi Rosenblum of the Sagol Department of Neurobiology at the University of Haifa, in cooperation with the RIKEN Institute, the leading brain research institute in Tokyo, the researchers demonstrate for the first time that there is a functional link between the two brain regions.
In the study the researchers sought to examine the relationship between the taste cortex (which is responsible for taste memory), and three different areas in the hippocampus: CA1, which is responsible for encoding the concept of space (where we are located); DG, the area responsible for encoding the time relationship between events; and CA3, responsible for filling in missing information. To do this the researchers took ordinary mice and mice that were genetically engineered by their Japanese colleagues such that these three areas of the brain functioned normally but were lacking plasticity, which did not allow new memories reliant on them to be created.
“In brain research, the manipulation we do must be very delicate and precise, otherwise the changes can make the entire experiment irrelevant to proving or refuting the research hypothesis,” said Prof. Rosenblum.
The mice were exposed to two new tastes, one that caused stomach pains (to mimic exposure to toxic food) and another that did not. Comparing the two groups showed that when the new taste was not associated with toxic food, there was no difference between the normal mice and those whose hippocampal areas lacked plasticity. But when the taste caused a negative feeling, there was clear involvement of the CA1 area, which is responsible for encoding space.
“The significance of this is that the moment we go back to the same place at which we experienced the taste associated with a bad feeling, subconsciously the negative memory will be much stronger than if we taste the same taste in a totally different place,” explained Prof. Rosenblum. Similarly, the DG area, which is responsible for encoding the time between events, became more involved the longer the interval between the new taste and the stomach discomfort. “This means that even during a simple associative taste experience, the brain operates the hippocampus to produce an integrated experience that includes general information about the time between events and their location,” he said.
The findings, which were recently published in the Journal of Neuroscience, expose the complexity and richness of the simple sensory experiences that are engraved in our brains, in most cases without our awareness. Moreover, the study may help explain behavioral outcomes and the difficulty in forming memories when certain areas of the brain become dysfunctional following an illness or accident. The better we understand how simple sensory experiences are encoded in the brain, and how the feeling, time and place of an experience are linked, the better we will understand the complex process of creating memories and storing them in our brains.
(Source: newmedia-eng.haifa.ac.il)
New Molecular Target is Key to Enhanced Brain Plasticity
As Alzheimer’s disease progresses, it kills brain cells mainly in the hippocampus and cortex, leading to impairments in “neuroplasticity,” the mechanism that affects learning, memory, and thinking. Targeting these areas of the brain, scientists hope to stop or slow the decline in brain plasticity, providing a novel way to treat Alzheimer’s. Groundbreaking new research has discovered a new way to preserve the flexibility and resilience of the brain.
The study, led by Tel Aviv University’s Prof. Illana Gozes and published in Molecular Psychiatry, reveals a nerve cell protective molecular target that is essential for brain plasticity. According to Prof. Gozes, “This discovery offers the world a new target for drug design and an understanding of mechanisms of cognitive enhancement.”
Prof. Gozes is the incumbent of the Lily and Avraham Gildor Chair for the Investigation of Growth Factors and director of the Adams Super Center for Brain Studies at the Sackler Faculty of Medicine and a member of TAU’s Sagol School of Neuroscience. Also contributing to the study were Dr. Saar Oz, Oxana Kapitansky, Yanina Ivashco-Pachima, Anna Malishkevich, Dr. Joel Hirsch, Dr. Rina Rosin-Arbersfeld, and their students, all from TAU. TAU staff scientists Dr. Eliezer Gildai and Dr. Leonid Mittelman provided the state-of-the-art molecular cloning and cellular protein imaging necessary for the study.
Building on past breakthroughs
The new finding is based on Prof. Gozes’ discovery of NAP, a snippet of activity-dependent neuroprotective protein (ADNP), a protein essential for brain formation. As a result of this discovery, a drug candidate that showed efficacy in patients with mild cognitive impairment, a precursor to Alzheimer’s disease, is being developed. NAP protects the brain by stabilizing microtubules — tiny cellular cylinders that provide “railways and scaffolding systems” to move biological material within cells and form a cellular skeleton. Microtubules are of particular importance to nerve cells, which have long processes and would otherwise collapse. In neurodegenerative diseases like Alzheimer’s, the microtubule network falls apart, hindering cellular communication and cognitive function.
"Clinical studies have shown that Davunetide (NAP) protects memory in patients suffering from mild cognitive impairment preceding Alzheimer’s disease," said Prof. Gozes. "While the mechanism was understood in broad terms, the precise molecular target remained a mystery for years. Now, in light of our new research, we know why and we know how to proceed."
Stabilizing microtubules
The breakthrough was the discovery of the mechanism promoting microtubule growth at the tips of the tubes (the “rails”). The researchers found that the structure of NAP allows it to bind to the tip of the growing microtubule, the emerging “railway,” through specific microtubule end-binding proteins, which adhere to microtubules a bit like locomotives, providing for growth and forward movement even while the other end of the microtubule may be disintegrating. These growing tips enlist regulatory proteins that are essential for providing plasticity at the nerve cell connection points, the synapses.
"We have now revealed that ADNP through its NAP motif binds the microtubule end binding proteins and enhances nerve cell plasticity, providing for brain resilience. We then discovered that NAP further enhances ADNP microtubule binding," said Prof. Gozes.
Researchers hope their discovery will help move Davunetide (NAP) and related compounds into further clinical trials, increasing the potential of future clinical use. Prof. Gozes is continuing to investigate microtubule end-binding proteins to better understand their protective properties in the brain.
Visualising plastic changes to the brain
Painless therapy
Transcranial magnetic stimulation (TMS) is a painless, non-invasive stimulation method in which an electromagnetic coil held above the head generates a strong magnetic field. The method is deployed to activate or inhibit specific brain regions. Even though the number of its medical applications is constantly increasing, the precise neuronal mechanisms of TMS are not yet well understood. That is because imaging methods used in humans, such as fMRI (functional magnetic resonance imaging), do not possess the temporal resolution necessary for recording neural activity on the millisecond scale. Faster measurement methods, such as EEG or MEG, on the other hand, are affected by the induced magnetic field, with the result that strong interference obscures important information about the immediate TMS-induced changes in brain activity.
Observing effect on neurons in real time
High-resolution images of TMS effects have now been generated for the first time by RUB researchers in animal experiments. The work group headed by PD Dr Dirk Jancke, Institut für Neuroinformatik, utilises voltage-sensitive dyes which, anchored in cell membranes, emit fluorescent light signals when neurons are activated or inhibited. By using light, the researchers avoided the measurement artefacts that magnetic fields produce. “We can now demonstrate in real time how a single TMS pulse suppresses brain activity across a considerable region, most likely through mass activation of inhibitory brain cells,” says Dr Jancke. At higher TMS frequencies, each additional pulse generates an incremental increase in brain activity. “This results in a higher cortical activation state, which opens up a time window for plastic changes,” explains Dr Vladislav Kozyrev, the first author of the study.
Chances for patients
The increased neuronal excitability may be utilised to effect specific reorganisation of cell connections by means of targeted learning processes. For example, visual training after TMS improves the ability to identify image contours; moreover, a combination of the two methods enhances contrast perception in patients with amblyopia, a disorder of sight acquired during childhood development. Specific models have been developed for many neurological diseases of the brain, such as epilepsy, depression and stroke. “Deployed in animal testing, our technology has delivered high spatiotemporal resolution imaging data of cortical activity changes,” says Dirk Jancke. “We hope that these data will enable us to optimise, in a targeted manner, the TMS parameters and learning processes that will in future be adapted for the medical treatment of humans.”
A long childhood feeds the hungry human brain
A five-year-old’s brain is an energy monster. It uses twice as much glucose (the energy that fuels the brain) as that of a full-grown adult, a new study led by Northwestern University anthropologists has found.
The study helps to solve the long-standing mystery of why human children grow so slowly compared with our closest animal relatives.
It shows that energy funneled to the brain dominates the human body’s metabolism early in life and is likely the reason why humans grow at a pace more typical of a reptile than a mammal during childhood.
Results of the study will be published the week of Aug. 25 in the journal Proceedings of the National Academy of Sciences.
"Our findings suggest that our bodies can’t afford to grow faster during the toddler and childhood years because a huge quantity of resources is required to fuel the developing human brain," said Christopher Kuzawa, first author of the study and a professor of anthropology at Northwestern’s Weinberg College of Arts and Sciences. "As humans we have so much to learn, and that learning requires a complex and energy-hungry brain."
Kuzawa also is a faculty fellow at the Institute for Policy Research at Northwestern.
The study is the first to pool existing PET and MRI brain scan data — which measure glucose uptake and brain volume, respectively — to show that the ages when the brain gobbles the most resources are also the ages when body growth is slowest. At 4 years of age, when this “brain drain” is at its peak and body growth slows to its minimum, the brain burns through resources at a rate equivalent to 66 percent of what the entire body uses at rest.
The findings support a long-standing hypothesis in anthropology that children grow so slowly, and are dependent for so long, because the human body needs to shunt a huge fraction of its resources to the brain during childhood, leaving little to be devoted to body growth. It also helps explain some common observations that many parents may have.
"After a certain age it becomes difficult to guess a toddler or young child’s age by their size," Kuzawa said. "Instead you have to listen to their speech and watch their behavior. Our study suggests that this is no accident. Body growth grinds nearly to a halt at the ages when brain development is happening at a lightning pace, because the brain is sapping up the available resources."
It was previously believed that the brain’s resource burden on the body was largest at birth, when the size of the brain relative to the body is greatest. Instead, the researchers found that the brain’s glucose use peaks around age 5, and that at age 4 it already accounts for more than 40 percent of the body’s total energy expenditure.
"The mid-childhood peak in brain costs has to do with the fact that synapses, connections in the brain, max out at this age, when we learn so many of the things we need to know to be successful humans," Kuzawa said.
"At its peak in childhood, the brain burns through two-thirds of the calories the entire body uses at rest, much more than other primate species," said William Leonard, co-author of the study. "To compensate for these heavy energy demands of our big brains, children grow more slowly and are less physically active during this age range. Our findings strongly suggest that humans evolved to grow slowly during this time in order to free up fuel for our expensive, busy childhood brains."
The area of the brain involved in multitasking and ways to train it have been identified by a research team at the IUGM Institut universitaire de gériatrie de Montréal and the University of Montreal. The research includes a model to better predict the effectiveness of this training. Cooking while having a conversation, watching a movie while browsing the Web, or driving while listening to a radio show – multitasking is an essential skill in our daily lives. Unfortunately, it decreases with age, which makes it harder for seniors to keep up, causes them stress, and decreases their confidence. Many commercial software applications promise to improve this ability through exercises. But are these exercises truly effective, and how do they work on the brain? The team addresses these issues in two papers published in AGE and PLOS ONE.

Targeted Action for a Specific Result
The findings are important because they may help scientists develop better targeted cognitive stimulation programs or improve existing training programs. Specialists sometimes question the usefulness of exercises that may be ineffective simply because they are poorly structured. “To improve your cardiovascular fitness, most people know you need to run laps on the track and not work on your flexibility. But the way targeted training correlates to cognition has been a mystery for a long time. Our work shows that there is also an association between the type of cognitive training performed and the resulting effect. This is true for healthy seniors who want to improve their attention or memory and is particularly important for patients who suffer from damage in specific areas of the brain. We therefore need to better understand the ways to activate certain areas of the brain and target this action to get specific results,” explained Sylvie Belleville, who led the research.
Researchers are now better able to map these effects on the functioning of very specific areas of the brain. Will we eventually be able to adapt the structure of our brains through highly targeted training? “We have a long road ahead to get to that point, and we don’t know for sure if that would indeed be a desirable outcome. However, our research findings can be used right away to improve the daily lives of aging adults as well as people who suffer from brain damage,” Dr. Belleville said.
The Right Combination of Plasticity and Attentional Control
In one of the studies, 48 seniors were randomly allocated to training that either worked on plasticity and attentional control or involved only simple practice. The researchers used functional magnetic resonance imaging to evaluate the impact of this training on various types of attentional tasks and on brain function. The team showed that training on plasticity and attentional control helped the participants develop their ability to multitask. However, it was not performing two tasks simultaneously that improved this skill. Instead, the exercises required participants to modulate the amount of attention given to each task: they were first asked to devote 80% of their attention to task A and 20% to task B, and then to change the ratio to 50:50 or 20:80. This was the only type of training that increased functioning in the mid-prefrontal region, the area known to be responsible for multitasking abilities and whose activation decreases with age. The researchers used these data to create a model that predicts the effects of cognitive training on the brain based on the subjects’ characteristics.
(Source: eurekalert.org)

How the brain stabilizes its connections in order to learn better
Throughout our lives, our brains adapt to what we learn and memorise. The brain is indeed made up of complex networks of neurons and synapses that are constantly re-configured. However, in order for learning to leave a trace, connections must be stabilized. A team at the University of Geneva (UNIGE) discovered a new cellular mechanism involved in the long-term stabilization of neuron connections, in which non-neuronal cells, called astrocytes, play a role unidentified until now. These results, published in Current Biology, will lead to a better understanding of neurodegenerative and neurodevelopmental diseases.
Excitatory synapses in the central nervous system – points of contact between neurons that allow them to transmit signals – are highly dynamic structures, continuously forming and dissolving. They are surrounded by non-neuronal cells, or glial cells, which include the distinctively star-shaped astrocytes. These cells form complex structures around synapses and play a previously little-understood role in the transmission of cerebral information.
Plasticity and Stability
By increasing neuronal activity through whisker stimulation in adult mice, the scientists observed, in both the somatosensory cortex and the hippocampus, that this increased neuronal activity provokes an increase in astrocyte movement around synapses. The synapses, surrounded by astrocytes, re-organise their architecture, which protects them and increases their longevity. The team of researchers led by Dominique Muller, Professor in the Department of Fundamental Neuroscience of the Faculty of Medicine at UNIGE, developed new techniques that allowed them to specifically “control” the different synaptic structures, and to show that the phenomenon took place exclusively in the connections between neurons involved in learning. “In summary, the more the astrocytes surround the synapses, the longer the synapses last, thus allowing learning to leave a mark on memory,” explained Yann Bernardinelli, the lead author of the study.
This study identifies a new, two-way interaction between neurons and astrocytes, in which the learning process regulates the structural plasticity of astrocytes, which in turn determine the fate of the synapses. The mechanism indicates that astrocytes play an important role in the processes of learning and memory, processes that are disrupted in various neurodegenerative and neurodevelopmental diseases, including Alzheimer’s disease, autism, and Fragile X syndrome.
This discovery highlights the previously underestimated importance of non-neuronal cells, which participate in a crucial way in the cerebral mechanisms that allow us to learn and to retain memories of what we have learned.

Running, Combined with Visual Experience, Restores Brain Function
In a new study by UC San Francisco scientists, running, when accompanied by visual stimuli, restored brain function to normal levels in mice that had been deprived of visual experience in early life.
In addition to suggesting a novel therapeutic strategy for humans with blindness in one eye caused by a congenital cataract, droopy eyelid, or misaligned eye, the new research—the latest in a series of UCSF studies exploring effects of locomotion on brain function—suggests that the adult brain may be far more capable of rewiring and repairing itself than previously thought.
In 2010, Michael P. Stryker, PhD, the W.F. Ganong Professor of Physiology, and postdoctoral fellow Cris Niell, PhD, now at the University of Oregon, made the surprising discovery that neurons in the visual area of the mouse brain fired much more robustly whenever the mice walked or ran.
Earlier this year, postdoctoral fellow Yu Fu, PhD, Stryker and a number of colleagues built on these findings, identifying and describing the neural circuit responsible for this locomotion-induced “high-gain state” in the visual cortex of the mouse brain.
Neither of these studies made clear, however, whether this circuit might have broader functional or clinical significance.
It has been known since the 1960s that visual areas of the brain do not develop normally if deprived of visual input during a “critical period” of brain development early in life. For example, in humans, if amblyopia (“lazy eye”) or other major eye problems are not surgically corrected in infancy, vision will never be normal in the affected eye—if such individuals lose sight in their “good” eye in later life, they are blind.
In the new research, published June 26, 2014 in the online journal eLife, Stryker and UCSF postdoctoral fellow Megumi Kaneko, MD, PhD, closed one eyelid of mouse pups at about 20 days after birth, and that eye was kept closed until the mice reached about five months of age.
As expected, the mice in which one eye had been closed during the critical developmental period showed sharply reduced neural activity in the part of the brain responsible for vision in that eye.
As in the previous UCSF experiments in this area, some mice were allowed to run freely on Styrofoam balls suspended on a cushion of air while recordings were made from their brains.
Little improvement was seen in the mice that had been deprived of visual input either when they were simply allowed to run or when they received visual training with the deprived eye not accompanied by walking or running.
But when the mice were exposed to the visual stimuli while they were running or walking, the results were dramatic: within a week the brain responses to those stimuli from the deprived eye were nearly identical to those from the normal eye, indicating that the circuits in the visual area of the brain representing the deprived eye had undergone a rapid reorganization, known in neuroscience as “plasticity.”
Interestingly, this recovery was stimulus-specific: if the brain activity of the mice was tested using a stimulus other than the one they had seen while running, little or no recovery of function was apparent.
“We have no idea yet whether running puts the human cortex into a high-gain state that enhances plasticity, as it does the visual cortex of the mouse,” Stryker said, “but we are designing experiments to find out.”
Early life stress can leave lasting impacts on the brain
For children, stress can go a long way. A little bit provides a platform for learning, adapting and coping. But a lot of it — chronic, toxic stress like poverty, neglect and physical abuse — can have lasting negative impacts.
A team of University of Wisconsin-Madison researchers recently showed these kinds of stressors, experienced in early life, might be changing the parts of developing children’s brains responsible for learning, memory and the processing of stress and emotion. These changes may be tied to negative impacts on behavior, health, employment and even the choice of romantic partners later in life.
The study, published in the journal Biological Psychiatry, could be important for public policy leaders, economists and epidemiologists, among others, says study lead author and recent UW Ph.D. graduate Jamie Hanson.
"We haven’t really understood why things that happen when you’re 2, 3, 4 years old stay with you and have a lasting impact," says Seth Pollak, co-leader of the study and UW-Madison professor of psychology.
Yet, early life stress has been tied before to depression, anxiety, heart disease, cancer, and a lack of educational and employment success, says Pollak, who is also director of the UW Waisman Center’s Child Emotion Research Laboratory.
"Given how costly these early stressful experiences are for society … unless we understand what part of the brain is affected, we won’t be able to tailor something to do about it," he says.
For the study, the team recruited 128 children around age 12 who had experienced physical abuse or neglect early in life, or who came from low-socioeconomic-status households.
Researchers conducted extensive interviews with the children and their caregivers, documenting behavioral problems and cumulative life stress. They also took images of the children’s brains, focusing on the hippocampus and amygdala, which are involved in emotion and stress processing. The children were compared to similar children from middle-class households who had not been maltreated.
Hanson and the team outlined by hand each child’s hippocampus and amygdala and calculated their volumes. Both structures are very small, especially in children (the word amygdala is Greek for almond, reflecting its size and shape in adults), and Hanson and Pollak say the automated software measurements from other studies may be prone to error.
Indeed, their hand measurements found that children who experienced any of the three types of early life stress had smaller amygdalas than children who had not. Children from low socioeconomic status households and children who had been physically abused also had smaller hippocampal volumes. Putting the same images through automated software showed no effects.
Behavioral problems and increased cumulative life stress were also linked to smaller hippocampus and amygdala volumes.
Why early life stress may lead to smaller brain structures is unknown, says Hanson, now a postdoctoral researcher at Duke University’s Laboratory for NeuroGenetics, but a smaller hippocampus is a demonstrated risk factor for negative outcomes. The amygdala is much less understood and future work will focus on the significance of these volume changes.
"For me, it’s an important reminder that as a society we need to attend to the types of experiences children are having," Pollak says. "We are shaping the people these individuals will become."
But the findings, Hanson and Pollak say, are just markers of neurobiological change: a display of the robustness of the human brain and the flexibility of human biology. They aren’t a crystal ball for seeing the future.
"Just because it’s in the brain doesn’t mean it’s destiny," says Hanson.