Neuroscience

Articles and news from the latest research reports.

Posts tagged plasticity

124 notes

Brain activity drives dynamic changes in neural fiber insulation
The brain is a wonderfully flexible and adaptive learning tool. For decades, researchers have known that this flexibility, called plasticity, comes from selective strengthening of well-used synapses — the connections between nerve cells.
Now, researchers at the Stanford University School of Medicine have demonstrated that brain plasticity also comes from another mechanism: activity-dependent changes in the cells that insulate neural fibers and make them more efficient. These cells form a specialized type of insulation called myelin.
“Myelin plasticity is a fascinating concept that may help to explain how the brain adapts in response to experience or training,” said Michelle Monje, MD, PhD, assistant professor of neurology and neurological sciences.
The researchers’ findings are described in a paper published online April 10 in Science Express.
“The findings illustrate a form of neural plasticity based in myelin, and future work on the molecular mechanisms responsible may ultimately shed light on a broad range of neurological and psychiatric diseases,” said Monje, senior author of the paper. The lead authors of the study are Stanford postdoctoral scholar Erin Gibson, PhD, and graduate student David Purger.
Sending neural impulses quickly down a long nerve fiber requires insulation with myelin, which is formed by a cell called an oligodendrocyte that wraps itself around a neuron's axon. Even small changes in the structure of this insulating sheath, such as changes in its thickness, can dramatically affect the speed of neural-impulse conduction. Demyelinating disorders, such as multiple sclerosis, attack these cells and degrade nerve transmission, especially over long distances.
Myelin-insulated nerve fibers make up the “white matter” of the brain, the vast tracts that connect one information-processing area of the brain to another. “If you think of the brain’s infrastructure as a city, the white matter is like the roads, highways and freeways that connect one place to another,” Monje said.
In the study, Monje and her colleagues showed that nerve activity prompts oligodendrocyte precursor cell proliferation and differentiation into myelin-forming oligodendrocytes. Neuronal activity also causes an increase in the thickness of the myelin sheaths within the active neural circuit, making signal transmission along the neural fiber more efficient. It’s much like a system for improving traffic flow along roadways that are heavily used, Monje said. And as with a transportation system, improving the routes that are most productive makes the whole system more efficient.
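The intuition that a thicker sheath speeds conduction can be made concrete with a toy calculation. This is only an illustrative rule of thumb (conduction velocity in myelinated fibers scales roughly linearly with outer fiber diameter), not the model used in the study, and all numbers are assumptions:

```python
# Toy model (illustrative only): conduction velocity in myelinated axons
# scales roughly linearly with outer fiber diameter (~6 m/s per micrometer
# is a commonly cited approximation). Thickening the myelin sheath around a
# fixed axon increases the outer diameter, and thus the predicted velocity.

VELOCITY_PER_UM = 6.0  # m/s per micrometer of outer fiber diameter (approximate)

def conduction_velocity(axon_diameter_um: float, myelin_thickness_um: float) -> float:
    """Predicted velocity (m/s) for a myelinated fiber under the linear rule."""
    outer_diameter = axon_diameter_um + 2.0 * myelin_thickness_um
    return VELOCITY_PER_UM * outer_diameter

# A hypothetical 1-micrometer axon before and after activity-dependent thickening:
before = conduction_velocity(1.0, 0.25)  # thinner sheath
after = conduction_velocity(1.0, 0.40)   # thicker sheath
print(f"before: {before:.1f} m/s, after: {after:.1f} m/s")
```

Under this simplification, even a fraction of a micrometer of added myelin yields a measurable speed-up, which is why modest activity-dependent changes in sheath thickness can matter for circuit timing.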
In recent years, researchers have seen clues that nerve cell activity could promote the growth of myelin insulation. There have been studies that showed a correlation between experience and myelin dynamics, and studies of isolated cells in a dish suggesting a relationship between neuronal activity and myelination. But there has been no way to show that neuronal activity directly causes myelin changes in an intact brain. “You can’t really implant an electrode in the brain to answer this question because the resulting injury changes the behavior of the cells,” Monje said.
The solution was a relatively new and radical technique called optogenetics. Scientists insert genes for a light-sensitive ion channel into a specific group of neurons. Those neurons can be made to fire when exposed to particular wavelengths of light. In the study, Monje and her colleagues used mice with light-sensitive ion channels in an area of their brains that controls movement. The scientists could then turn on and off certain movement behaviors in the mice by turning on and off the light. Because the light diffuses from a source placed at the surface of the brain down to the neurons being studied, there was no need to insert a probe directly next to the neurons, which would have created an injury.
By directly stimulating the neurons with light, the researchers were able to show it was the activation of the neurons that prompted the myelin-forming cells to respond.
Further research could reveal exactly how activity promotes oligodendrocyte-precursor-cell proliferation and maturation, as well as dynamic changes in myelin. Such a molecular understanding could help researchers develop therapeutic strategies that promote myelin repair in diseases in which myelin is degraded, such as multiple sclerosis, the leukodystrophies and spinal cord injury.
“Conversely, when growth of these cells is dysregulated, how does that contribute to disease?” Monje said. One particular area of interest for her is a childhood brain cancer called diffuse intrinsic pontine glioma. The cancer, which usually strikes children between 5 and 9 years old and is inevitably fatal, occurs when the brain myelination that normally takes place as kids become more physically coordinated goes awry, and the brain cells grow out of control.

Filed under brain activity plasticity myelin neural fibers oligodendrocytes optogenetics nerve cells neuroscience science

273 notes

New respect for primary visual cortex



In the context of learning and memory, the primary visual cortex is the Rodney Dangerfield of cortical areas: It gets no respect. Also known as “V1,” this brain region is the very first place where information from the retina arrives in the cerebral cortex.
Many existing models of visual processing have dismissed V1 as a static filter, capable only of detecting objects’ edges and passively conveying this information to higher-order visual areas that do the hard work of learning, recognition, prediction, and cognition. But a new MIT study brings fresh respect for the lowly visual cortex: Building on growing evidence that V1 can do more than detect edges, neuroscientist Mark Bear and his postdoc Jeffrey Gavornik have shown that V1 is the site of a complex type of learning involving spatial-temporal sequences.
“We rely on spatial-temporal sequence learning for everything we do,” says Bear, the Picower Professor of Neuroscience at MIT, a Howard Hughes Medical Institute investigator, and the senior author of the study, which appeared in the March 23 online edition of Nature Neuroscience. “It is how we predict what is coming next so that we can modify our behavior accordingly.”
Sequence learning — or a lack thereof — explains why driving on an unfamiliar road at night, with sparse visual information, is such a white-knuckle experience compared with driving more familiar routes that offer visual cues to predict the road ahead. It is also what allows baseball batters to hit balls traveling too fast to actually see: They do so using visual cues from the pitcher’s throw to predict the arc, trajectory, and timing based on past experience.
The value of V1
In the past decade, researchers have begun to chip away at the view of V1 as an immutable, passive brain region. Studies have shown, for example, that V1 can change in response to experience, a hallmark of plasticity. “Every new discovery allowed us to ask a new question that would have seemed outlandish before,” Bear says.
For the new study, the outlandish question was whether V1 could learn to recognize sequences. To find out, Gavornik designed experiments using gratings of black and white stripes in different orientations — the type of stimuli known to cause responses in V1 neurons. For a training sequence, he showed mice gratings in four different orientations — a combination labeled “ABCD” — in the same order 200 times a day for four days. Control mice saw randomly ordered sequences.
On the fifth day, Gavornik presented the training sequences and random sequences, and measured the V1 neural responses. Among mice that had seen the learned sequence, ABCD, that sequence elicited a more powerful response than unfamiliar sequences — indicating that V1 had changed in response to experience.
Bear then altered the timing of the sequences and found that V1 also detected very precise temporal alterations. That makes sense, he notes: In real life, sequencing and timing are always coupled, so the brain must have a mechanism to respond to this pairing.
Implications for human disease
The most “mind-blowing” results of the study, Bear says, came from experiments testing the neural response when the second visual stimulus, “B,” was replaced with a gray screen following the first stimulus, “A.”
“The primary visual cortex responded as if B were there,” Bear says. “The recordings did not report on what the animal was seeing, but on what the animal was expecting to see.”
“V1 had formed a memory that B follows A, and it used that memory to predict what would happen next, after A,” Gavornik adds. “It is as if the mouse were [acting] based on previously learned visual cues.”
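The logic of both results — a potentiated response to the trained order and a learned expectation that B follows A — can be sketched in a deliberately simplified Hebbian model. The learning rule, numbers, and response measure below are illustrative assumptions, not the study's actual analysis:

```python
# Toy Hebbian sketch (not the study's model): units for the four grating
# orientations learn transition weights when one stimulus follows another.
from collections import defaultdict

def train(weights, sequence, repeats, rate=0.01):
    """Strengthen the weight for each observed transition (saturating at 1)."""
    for _ in range(repeats):
        for prev, nxt in zip(sequence, sequence[1:]):
            weights[(prev, nxt)] += rate * (1.0 - weights[(prev, nxt)])

def response(weights, sequence, baseline=1.0):
    """Summed "population response": baseline per transition plus learned facilitation."""
    return sum(baseline + weights[(p, n)] for p, n in zip(sequence, sequence[1:]))

W = defaultdict(float)
train(W, "ABCD", repeats=200 * 4)  # 200 presentations a day for four days

# The trained order now evokes a larger response than a reordered sequence...
print(response(W, "ABCD") > response(W, "ACBD"))  # True
# ...and after seeing "A" alone, the strongest learned expectation is "B".
print(max("BCD", key=lambda nxt: W[("A", nxt)]))  # B
```

The point of the sketch is that a purely local learning rule, applied to repeated pairings, is enough to produce both sequence-selective facilitation and a prediction of the omitted stimulus.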
But did the experience-dependent plasticity evident in V1 actually arise there, or did it reflect feedback from a higher brain region that underwent a change? To find out, Gavornik injected a blocker of receptors for the neurotransmitter acetylcholine, which is also known to be important for memory formation in the brain. He found that this treatment prevented learning in the targeted V1 region.
“A disruption in acetylcholine signaling is one of the first things to go wrong in Alzheimer’s disease, and among the few approved treatments for this disease are drugs that promote the action of acetylcholine,” Bear says. “Our study raises the possibility of using visual sequence learning as a sensitive assay for earlier diagnosis of Alzheimer’s, when therapeutic interventions have a better chance of slowing the disease.”
Spatial-temporal sequence learning is also impaired in schizophrenia and dyslexia, but the origins of this impairment remain a mystery. “When we discover what is going on at a neural and molecular level, maybe we can understand better what happens in human disorders and look for new therapeutic approaches,” Gavornik says.
On a broader scale, the involvement of V1 in higher-level cognitive functions might have intrigued the renowned Spanish neuroscientist (and future Nobel laureate) Santiago Ramón y Cajal, who in 1899 speculated that despite significant heterogeneity, different regions of cortex still follow general principles. “Our study supports Cajal’s theory,” Bear says, “because we show that basic cortical computations may be fundamentally similar in higher and lower regions, even if they are used to serve different functions.”

Filed under primary visual cortex sequence learning learning V1 plasticity neurons neuroscience science

268 notes

Researchers Identify Brain Differences Linked to Insomnia

Johns Hopkins researchers report that people with chronic insomnia show more plasticity and activity than good sleepers in the part of the brain that controls movement.

"Insomnia is not a nighttime disorder," says study leader Rachel E. Salas, M.D., an assistant professor of neurology at the Johns Hopkins University School of Medicine. "It’s a 24-hour brain condition, like a light switch that is always on. Our research adds information about differences in the brain associated with it."

Salas and her team, reporting in the March issue of the journal Sleep, found that the motor cortex in those with chronic insomnia was more adaptable to change - more plastic - than in a group of good sleepers. They also found more “excitability” among neurons in the same region of the brain among those with chronic insomnia, adding evidence to the notion that insomniacs are in a constant state of heightened information processing that may interfere with sleep.

Researchers say they hope their study opens the door to better diagnosis and treatment of insomnia, the most common and often intractable sleep disorder, which affects an estimated 15 percent of the United States population.

To conduct the study, Salas and her colleagues from the Department of Psychiatry and Behavioral Sciences and the Department of Physical Medicine and Rehabilitation used transcranial magnetic stimulation (TMS), which painlessly and noninvasively delivers electromagnetic currents to precise locations in the brain and can temporarily and safely disrupt the function of the targeted area. TMS is approved by the U.S. Food and Drug Administration to treat some patients with depression by stimulating nerve cells in the region of the brain involved in mood control.

The study included 28 adult participants - 18 who suffered from insomnia for a year or more and 10 considered good sleepers with no reports of trouble sleeping. Each participant was outfitted with electrodes on their dominant thumb as well as an accelerometer to measure the speed and direction of the thumb.

The researchers then gave each subject 65 electrical pulses using TMS, stimulating areas of the motor cortex and watching for involuntary thumb movements linked to the stimulation. Subsequently, the researchers trained each participant for 30 minutes, teaching them to move their thumb in the opposite direction of the original involuntary movement. They then introduced the electrical pulses once again.

The idea was to measure the extent to which participants’ brains could learn to move their thumbs involuntarily in the newly trained direction. The more the evoked movements shifted toward the new direction, the more plastic the participant’s motor cortex was judged to be.
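One way to picture how such a measure might be quantified is a small sketch. The index and every number below are hypothetical illustrations, not the study's data or analysis:

```python
# Illustrative sketch (hypothetical numbers): quantify motor-cortex
# plasticity as the shift of TMS-evoked thumb-movement directions toward
# the trained direction after 30 minutes of practice.
import math

def plasticity_index(directions_deg, trained_deg):
    """Mean cosine similarity between evoked movements and the trained direction.
    1.0 = every evoked movement in the trained direction; -1.0 = all opposite."""
    return sum(math.cos(math.radians(d - trained_deg)) for d in directions_deg) / len(directions_deg)

trained = 180.0  # trained direction, opposite the original involuntary one (0 degrees)
pre = [5, -10, 12, 3, -7]                      # evoked directions before training (deg)
post_good_sleeper = [150, 40, 20, 170, 90]     # partial shift after training
post_insomnia = [175, 160, 185, 170, 178]      # larger shift toward trained direction

print(plasticity_index(pre, trained)
      < plasticity_index(post_good_sleeper, trained)
      < plasticity_index(post_insomnia, trained))  # True: insomnia group shows more plasticity
```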

Because lack of sleep at night has been linked to decreased memory and concentration during the day, Salas and her colleagues suspected that the brains of good sleepers could be more easily retrained. The results, however, were the opposite. The researchers found much more plasticity in the brains of those with chronic insomnia.

Salas says the origins of increased plasticity in insomniacs are unclear, and it is not known whether the increase is the cause of insomnia. It is also unknown whether this increased plasticity is beneficial, the source of the problem or part of a compensatory mechanism to address the consequences of sleep deprivation associated with chronic insomnia. Patients with chronic phantom pain after limb amputation and with dystonia, a neurological movement disorder in which sustained muscle contractions cause twisting and repetitive movements, also have increased brain plasticity in the motor cortex, but to detrimental effect.

Salas says it is possible that the dysregulation of arousal described in chronic insomnia - increased metabolism, increased cortisol levels, constant worrying - might be linked to increased plasticity in some way. Diagnosing insomnia is based solely on what the patient reports to the provider; there is no objective test. Neither is there a single treatment that works for all people with insomnia. Treatment can be hit or miss for many patients, Salas says.

She says this study shows that TMS may be able to play a role in diagnosing insomnia, and more importantly, she says, potentially prove to be a treatment for insomnia, perhaps through reducing excitability.

(Source: hopkinsmedicine.org)

Filed under insomnia plasticity motor cortex sleep transcranial magnetic stimulation neuroscience science

105 notes

Visual System Can Retain Plasticity, Even After Extended Early Blindness

Deprivation of vision during critical periods of childhood development has long been thought to result in irreversible vision loss. Now, researchers from the Schepens Eye Research Institute/Massachusetts Eye and Ear, Harvard Medical School (HMS) and Massachusetts Institute of Technology (MIT) have challenged that theory by studying a unique population of pediatric patients who were blind during these critical periods before removal of bilateral cataracts. The researchers found improvement after sight onset in contrast sensitivity tests, which measure basic visual function and have well-understood neural underpinnings. Their results show that the human visual system can retain plasticity beyond critical periods, even after early and extended blindness. Their findings were recently published in the Proceedings of the National Academy of Sciences (PNAS) Early Edition.

Filed under visual system vision loss plasticity critical period neuroscience science

147 notes

Brain research provides insight into language learning

Anyone who has tried to learn a second language knows how difficult it is to absorb new words and use them to accurately express ideas in a completely new cultural format. Now, research into some of the fundamental ways the brain accepts information and tags it could lead to new, more effective ways for people to learn a second language.

Tests have shown that the human brain uses the same neural system to see an action and to understand an action described in language. Researchers at Arizona State University have been testing the boundaries of this hypothesis, which focuses on the operation of the mirror neuron system (MNS). The ASU group has found that the MNS can be modified by language use, and that the modification can slightly change visual perception.

The work focuses on how the brain receives and classifies information that a person sees (an action, like one person giving another a pencil), and tests how the brain receives the information from a description of an action (simulation), like “Cameron gives Annagrace a pencil.”

“We tested the idea that the mirror neuron system, which is part of the motor system, is used in the simulation process,” said Arthur Glenberg, an ASU professor of psychology. “The MNS is active both when a person takes an action (e.g., giving a pencil), and when that action is observed (witnessing the pencil being given). Supposedly, the MNS allows us to infer the intentions of other people so that when Jane sees Cameron act, her MNS resonates, and then Jane understands why she would give Annagrace the pencil and infers that that is the reason why Cameron gives Annagrace the pencil.”

Glenberg, Noah Zarr, formerly an ASU psychology major and now a graduate student at Indiana University, and Ryan Ferguson, a graduate student in ASU’s Cognitive Science training area in the Department of Psychology, recently published their findings in the paper “Language comprehension warps the mirror neuron system,” in Frontiers in Human Neuroscience. This research began with Zarr’s honors thesis.

“The MNS has been associated with many social behaviors, such as action, understanding and empathy, as well as language understanding,” Glenberg explained. “Previous work has demonstrated that adapting the MNS can affect language comprehension. But no one had yet shown that the process of language comprehension can itself change the MNS.

“The question becomes, when Jane reads, ‘Cameron gives Annagrace the pencil,’ is she using her MNS just like when she sees Cameron give the pencil?” Glenberg asks. “To test this idea, we used the fact that the MNS is used in both action and perception of action, and the idea that repeated use of a neural system leads to adaptation of that system.   

“So, in the tests, participants read a bunch of transfer sentences,” Glenberg explained. “We then show them a bunch of videos of transfer. We have shown that after reading the sentences, people are impaired (a little bit) in perceiving the transfer in the videos, which means the reading modifies the same MNS used in action understanding.”
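The adaptation logic behind that result can be expressed as a toy calculation. The decay function and all numbers are assumptions for illustration, not the paper's model or data:

```python
# Toy adaptation sketch (illustrative only): repeated use of a shared
# "transfer" representation during reading adapts it, so a subsequent
# transfer video evokes a slightly weaker response than in unadapted controls.

def adapted_gain(n_exposures, decay=0.01, floor=0.6):
    """Response gain after n prior activations; decays geometrically toward a floor."""
    return floor + (1.0 - floor) * (1.0 - decay) ** n_exposures

video_response_control = adapted_gain(0)    # no prior reading: full gain of 1.0
video_response_readers = adapted_gain(40)   # after 40 transfer sentences: reduced gain
print(video_response_readers < video_response_control)  # True: slightly impaired perception
```

The key assumption mirrored here is that reading and watching share one neural resource; if they did not, prior reading would leave the video response untouched.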

While the work explores the boundaries of a theory on comprehension, there are applications in which it could be employed, Glenberg said. 

“If language comprehension is a simulation process that uses neural systems of action, then perhaps we can better teach kids how to understand what they read by getting them to literally simulate the actions,” he explained.

Glenberg added that part of his ongoing research into the MNS, the system that allows us to decipher what we see and understand the intent of language, is to test the idea of simulation and how it can help Latino English language learners read better in English.

(Source: asunews.asu.edu)

Filed under mirror neuron system language acquisition language learning plasticity neuroscience science

339 notes

Tinnitus discovery opens door to possible new treatment avenues
For tens of millions of Americans, there’s no such thing as the sound of silence. Instead, even in a quiet room, they hear a constant ringing, buzzing, hissing, humming or other noise in their ears that isn’t real. Called tinnitus, it can be debilitating and life-altering.
Now, University of Michigan Medical School researchers report new scientific findings that help explain what is going on inside these unquiet brains.
The discovery reveals an important new target for treating the condition. Already, the U-M team has a patent pending and device in development based on the approach.
The critical findings are published online in the prestigious Journal of Neuroscience. Though the work was done in animals, it provides a science-based, novel approach to treating tinnitus in humans.
Susan Shore, Ph.D., the senior author of the paper, explains that her team has confirmed that a process called stimulus-timing dependent multisensory plasticity is altered in animals with tinnitus – and that this plasticity is “exquisitely sensitive” to the timing of signals coming in to a key area of the brain.
That area, called the dorsal cochlear nucleus, is the first station for signals arriving in the brain from the ear via the auditory nerve. But it’s also a center where “multitasking” neurons integrate other sensory signals, such as touch, together with the hearing information.
Shore, who leads a lab in U-M’s Kresge Hearing Research Institute, is a Professor of Otolaryngology and Molecular and Integrative Physiology at the U-M Medical School, and also Professor of Biomedical Engineering, which spans the Medical School and College of Engineering.
She explains that in tinnitus, some of the input to the brain from the ear’s cochlea is reduced, while signals from the somatosensory nerves of the face and neck, related to touch, are excessively amplified.
“It’s as if the signals are compensating for the lost auditory input, but they overcompensate and end up making everything noisy,” says Shore.
The new findings illuminate the relationship between tinnitus, hearing loss and sensory input, and help explain why many tinnitus sufferers can change the volume and pitch of the sound by clenching their jaw or moving their head and neck.
But it’s not just the combination of loud noise and overactive somatosensory signals that is involved in tinnitus, the researchers report.
It’s the precise timing of these signals in relation to one another that prompts the changes in the nervous system’s plasticity mechanisms, which may lead to the symptoms known to tinnitus sufferers.
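This kind of timing sensitivity is often modeled with a spike-timing-dependent plasticity (STDP) window, in which the sign and size of a synaptic change depend on the interval between pre- and postsynaptic spikes. A minimal, generic sketch — the parameters are illustrative, not values from this study:

```python
import math

def stdp_delta_w(dt_ms, a_plus=0.1, a_minus=0.12, tau_ms=20.0):
    """Synaptic weight change as a function of spike-timing difference.

    dt_ms > 0: presynaptic spike precedes postsynaptic (potentiation);
    dt_ms < 0: postsynaptic precedes presynaptic (depression).
    Amplitudes and time constant are hypothetical, for illustration only.
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    if dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_ms)
    return 0.0
```

Near-coincident spikes produce the largest changes, and reversing their order flips the sign — which is why the relative timing of auditory and somatosensory signals arriving at the dorsal cochlear nucleus matters so much.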
Shore and her colleagues, including former U-M biomedical engineering graduate student and first author Seth Koehler, Ph.D., hope their findings will eventually help many of the 50 million people in the United States and millions more worldwide who have the condition, according to the American Tinnitus Association. They hope to bring science-based approaches to the treatment of a condition for which there is no cure – and for which many unproven would-be therapies exist.
Tinnitus especially affects baby boomers: as they reach an age at which hearing tends to diminish, they increasingly experience the condition. It most commonly occurs with hearing loss, but can also follow head and neck trauma, such as after an auto accident, or dental work.
Loud noises and blast forces experienced by members of the military in war zones also can trigger the condition. Tinnitus is a top cause of disability among members and veterans of the armed forces.
Researchers still don’t understand what protective factors might keep some people from developing tinnitus while others exposed to the same conditions do.
In this study, only half of the animals exposed to excessive noise developed tinnitus. This is similarly the case with humans — not everyone with hearing damage ends up with tinnitus. An important finding in the new paper is that animals that did not develop tinnitus showed fewer changes in their multisensory plasticity than those with evidence of tinnitus. In other words, their neurons were not hyperactive.
Shore is now working with other students and postdoctoral fellows to develop a device that uses the new knowledge about the importance of signal timing to alleviate tinnitus. The device will combine sound with electrical stimulation of the face and neck to return neural activity in the auditory pathway to normal.
“If we get the timing right, we believe we can decrease the firing rates of neurons at the tinnitus frequency, and target those with hyperactivity,” says Shore. She and her colleagues are also working to develop pharmacological manipulations that could enhance stimulus-timing dependent plasticity by changing specific molecular targets.
But, she notes, any treatment will likely have to be customized to each patient, and delivered on a regular basis. And some patients may be more likely to derive benefit than others.

Filed under tinnitus hearing hearing loss plasticity dorsal cochlear nucleus neurons neuroscience science

272 notes

Balancing old and new skills
To learn new motor skills, the brain must be plastic: able to rapidly change the strengths of connections between neurons, forming new patterns that accomplish a particular task. However, if the brain were too plastic, previously learned skills would be lost too easily.
A new computational model developed by MIT neuroscientists explains how the brain maintains the balance between plasticity and stability, and how it can learn very similar tasks without interference between them.
The key, the researchers say, is that neurons are constantly changing their connections with other neurons. However, not all of the changes are functionally relevant — they simply allow the brain to explore many possible ways to execute a certain skill, such as a new tennis stroke.
“Your brain is always trying to find the configurations that balance everything so you can do two tasks, or three tasks, or however many you’re learning,” says Robert Ajemian, a research scientist in MIT’s McGovern Institute for Brain Research and lead author of a paper describing the findings in the Proceedings of the National Academy of Sciences the week of Dec. 9. “There are many ways to solve a task, and you’re exploring all the different ways.”
As the brain explores different solutions, neurons can become specialized for specific tasks, according to this theory.
Noisy circuits
As the brain learns a new motor skill, neurons form circuits that can produce the desired output — a command that will activate the body’s muscles to perform a task such as swinging a tennis racket. Perfection is usually not achieved on the first try, so feedback from each effort helps the brain to find better solutions.
This works well for learning one skill, but complications arise when the brain is trying to learn many different skills at once.  Because the same distributed network controls related motor tasks, new modifications to existing patterns can interfere with previously learned skills.
“This is particularly tricky when you’re learning very similar things,” such as two different tennis strokes, says Institute Professor Emilio Bizzi, the paper’s senior author and a member of the McGovern Institute.
In a serial network such as a computer chip, this would be no problem — instructions for each task would be stored in a different location on the chip. However, the brain is not organized like a computer chip. Instead, it is massively parallel and highly connected — each neuron connects to, on average, about 10,000 other neurons.
That connectivity offers an advantage, however, because it allows the brain to test out many possible solutions for achieving combinations of tasks. The constant changes in these connections, which the researchers call hyperplasticity, are balanced by another inherent trait of neurons — they have a very low signal-to-noise ratio, meaning that they receive about as much useless information as useful input from their neighbors.
Most models of neural activity don’t include noise, but the MIT team says noise is a critical element of the brain’s learning ability. “Most people don’t want to deal with noise because it’s a nuisance,” Ajemian says. “We set out to try to determine if noise can be used in a beneficial way, and we found that it allows the brain to explore many solutions, but it can only be utilized if the network is hyperplastic.”
This model helps to explain how the brain can learn new things without unlearning previously acquired skills, says Ferdinando Mussa-Ivaldi, a professor of physiology at Northwestern University.
“What the paper shows is that, counterintuitively, if you have neural networks and they have a high level of random noise, that actually helps instead of hindering the stability problem,” says Mussa-Ivaldi, who was not part of the research team.
Without noise, the brain’s hyperplasticity would overwrite existing memories too easily. Conversely, low plasticity would not allow any new skills to be learned, because the tiny changes in connectivity would be drowned out by all of the inherent noise.
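One way to see why repetition wins out over noise is a toy calculation — not the authors’ model: a consistent learning signal accumulates linearly with the number of updates, while zero-mean noise grows only with the square root of the step count.

```python
import random

def drift(signal, noise_sd, steps, lr=0.1, seed=0):
    """Accumulate noisy weight updates on one connection: each step adds a
    small, consistent learning signal plus zero-mean Gaussian noise (the
    'useless' input). All parameter values are hypothetical."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        w += lr * signal + rng.gauss(0.0, noise_sd)
    return w

# With the same seed, the practiced and unpracticed runs see identical
# noise, so their difference is exactly the accumulated learning signal.
practiced = drift(signal=1.0, noise_sd=0.5, steps=1000)
idle = drift(signal=0.0, noise_sd=0.5, steps=1000)
```

After 1,000 updates the signal contributes 0.1 × 1,000 = 100 to the weight, while the noise wanders only on the order of 0.5 × √1000 ≈ 16 — repeated practice stands out even in a hyperplastic, noisy network.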
The model is supported by anatomical evidence showing that neurons exhibit a great deal of plasticity even when learning is not taking place, as measured by the growth and formation of connections of dendrites — the tiny extensions that neurons use to communicate with each other.
Like riding a bike
The constantly changing connections explain why skills can be forgotten unless they are practiced often, especially if they overlap with other routinely performed tasks.
“That’s why an expert tennis player has to warm up for an hour before a match,” Ajemian says. The warm-up is not for the muscles; instead, the players need to recalibrate the neural networks, stored in the brain’s motor cortex, that control different tennis strokes.
However, a skill such as riding a bicycle, which does not closely resemble other common skills, is retained more easily. “Once you’ve learned something, if it doesn’t overlap or intersect with other skills, you will forget it but so slowly that it’s essentially permanent,” Ajemian says.
The researchers are now investigating whether this type of model could also explain how the brain forms memories of events, as well as motor skills.

Filed under plasticity memory learning neurons neural circuits neuroscience science

103 notes

MR Spectroscopy Shows Differences in Brains of Preterm Infants

Premature birth appears to trigger developmental processes in the white matter of the brain that could put children at higher risk of problems later in life, according to a study being presented next week at the annual meeting of the Radiological Society of North America (RSNA).

Preterm infants—generally those born 23 to 36 weeks after conception, as opposed to the normal 37- to 42-week gestation—face an increased risk of behavioral problems, ranging from impulsiveness and distractibility to more serious conditions like autism and attention deficit hyperactivity disorder (ADHD).

"In the United States, we have approximately 500,000 preterm births a year," said Stefan Blüml, Ph.D., director of the New Imaging Technology Lab at Children’s Hospital Los Angeles and associate professor of research radiology at the University of Southern California in Los Angeles. "About 60,000 of these babies are at high risk for significant long-term problems, which means that this is a significant problem with enormous costs."

Dr. Blüml and colleagues have been studying preterm infants to learn more about how premature birth might cause changes in brain structure that may be associated with clinical problems observed later in life. Much of the focus has been on the brain’s white matter, which transmits signals and enables communication between different parts of the brain. While some white matter damage is readily apparent on structural magnetic resonance imaging (MRI), Dr. Blüml’s group has been using magnetic resonance spectroscopy (MRS) to look at differences on a microscopic level.

In this study, the researchers compared the concentrations of certain chemicals associated with mature white matter and gray matter in 51 full-term and 30 preterm infants. The study group had normal structural MRI findings, but MRS results showed significant differences in the biochemical maturation of white matter between the term and preterm infants, suggesting a disruption in the timing and synchronization of white and gray matter maturation. Gray matter is the part of the brain that processes and sends out signals.

"The road map of brain development is disturbed in these premature kids," Dr. Blüml said. "White matter development had an early start and was ‘out of sync’ with gray matter development."

This false start in white matter development is triggered by events after birth, according to Dr. Blüml.

"This timeline of events might be disturbed in premature kids because there are significant physiological switches at birth, as well as stimulatory events, that happen irrespective of gestational maturity of the newborn," he said. "The most apparent change is the amount of oxygen that is carried by the blood."

Dr. Blüml said that the amount of oxygen delivered to the fetus’s developing brain in utero is quite low, and our brains have evolved to optimize development in that low oxygen environment. However, when infants are born, they are quickly exposed to a much more oxygen-rich environment.

"This change may be something premature brains are not ready for," he said.

While this change may cause irregularities in white matter development, Dr. Blüml noted that the newborn brain has a remarkable capacity to adapt or even “re-wire” itself—a concept known as plasticity. Plasticity not only allows the brain to master new skills over the course of development, like learning to walk and read, but could also make the brains of preterm infants and young children more responsive to therapeutic interventions, particularly if any abnormalities are identified early.

"Our research points to the need to better understand the impact of prematurity on the timing of critical maturational processes and to develop therapies aimed at regulating brain development," Dr. Blüml said.

(Source: www2.rsna.org)

Filed under preterm infants brain development white matter plasticity gray matter oxygen neuroscience science

191 notes

Study connects dots between genes and human behavior

Establishing links between genes, the brain and human behavior is a central issue in cognitive neuroscience research, but studying how genes influence cognitive abilities and behavior as the brain develops from childhood to adulthood has proven difficult.

Now, an international team of scientists has made inroads to understanding how genes influence brain structure and cognitive abilities and how neural circuits produce language.

The team studied individuals with a rare disorder known as Williams syndrome. By measuring neural activity in the brain associated with the distinct language skills and facial recognition abilities that are typical of the syndrome, they showed that Williams is due not to a single gene but to distinct subsets of genes, hinting that the syndrome is more complex than originally thought.

"Solutions to understanding the connections between genes, neural circuits and behavior are now emerging from a unique union of genetics and neuroscience," says Julie Korenberg, a University of Utah professor and an adjunct professor at the Salk Institute, who led the genetics aspects on the new study.

The study was led by Debra Mills, a professor of cognitive neuroscience at Bangor University in Wales. Ursula Bellugi, a professor at the Salk Institute for Biological Studies in La Jolla, was also integrally involved in the research.

Korenberg was convinced that with Mills’ approach of directly measuring the brain’s electrical firing they could solve the puzzle of precisely which genes were responsible for building the brain wiring underlying the different reaction to human faces in Williams syndrome.

"We also discovered," says Mills, "that in those with Williams syndrome, the brain processes language and faces abnormally from early childhood through middle age. This was a surprise because previous studies had suggested that part of the Williams brain functions normally in adulthood, with little understanding about how it developed."

The results of the study were published November 12, 2013 in Developmental Neuropsychology.

Williams syndrome is caused by the deletion of one of the two usual copies of approximately 25 genes from chromosome 7, resulting in mental impairment. Nearly everyone with the condition is missing these same genes, although a few rare individuals retain one or more genes that most people with Williams have lost. Korenberg pioneered the study of these individuals with partial gene deletions as a way of gathering clues to the specific functions of those genes and gene networks. The syndrome affects approximately 1 in 10,000 people around the world, including an estimated 20,000 to 30,000 individuals in the United States.

Although individuals with Williams experience developmental delays and learning disabilities, they are exceptionally sociable and possess remarkable verbal abilities and facial recognition skills in relation to their lower IQ. Bellugi has long observed that sociability also seems to drive language and has spent much of her career studying those with Williams syndrome.

"Williams offers us a window into how the brain works at many different levels," says Bellugi. "We have the tools to measure the different cognitive abilities associated with the syndrome, and thanks to Julie and Debbie we are now able to combine this with studies of the underlying genetic and neurological aspects."

Suspecting that specific genes might lie at the origins of brain plasticity, functional changes in the brain that occur with new knowledge or experiences, and that these genes might be linked to the unusual proficiencies of those with Williams, the team enrolled individuals of various ages in their study. They drew from children, adolescents and adults who all had the full genetic deletion for Williams syndrome and compared them with their non-affected peers. Their study is additionally significant for being one of the first to examine the brain structure and its functioning in children with Williams. And, as Korenberg predicted, a critical piece of the puzzle came from including in their study two adults with partial genetic deletions for Williams.

Using highly sensitive sensors to measure brain activity, the researchers, led by Mills, presented their study participants with both visual and auditory stimuli in the form of unfamiliar faces and spoken sentences. They charted the small changes in voltage generated by the areas of the brain responding to these stimuli, a process known as event-related potentials (ERPs). Mills was the first to publish studies on Williams syndrome using ERPs, developed the ERP markers for this study, and oversaw its design and analysis.

Mills identified ERP markers of brain plasticity in Williams syndrome in children and adults of varying ages and developmental stages. These findings are important because the brains of people with Williams are structured differently than those of people without the syndrome. In the Williams brain, the dorsal areas (along the back and top), which help control vision and spatial understanding, are undersized. The ventral areas (at the front and the bottom), which influence language, facial recognition, emotion and social drive, are relatively normal in size.

It was previously believed that in individuals with Williams, the ventral portion of the brain operated normally. What the team discovered, however, was that this area of the brain also processed information differently than those without the syndrome, and did so throughout development, from childhood to the adult years. This suggests that the brain was compensating in order to analyze information; in other words, it was exhibiting plasticity. Of additional importance, the distinct ERP markers identified by Mills are so characteristic of the different brain organization in Williams that this information alone is approximately 90 percent accurate when analyzing brain activity to identify someone with Williams syndrome.

Other key findings of the study resulted from comparing the ERPs of participants with the full Williams deletion with those of participants with partial deletions. While psychological tests focused on facial recognition show no difference between these groups, the scientists found differences in these recognition abilities on the ERP measurements, which look directly at neural activity. Thus, the scientists were able to see how very slight genetic differences affected brain activity, which will allow them to identify the roles of subsets of Williams genes in brain development and in adult facial recognition abilities.

By studying these one-in-a-million individuals with tools capable of directly measuring brain activity, the scientists now have an unprecedented opportunity to study the genetic underpinnings of mental disorders. The results of this study not only advance science’s understanding of the links between genes, the brain and behavior, but may also lead to new insight into disorders such as autism, Down syndrome and schizophrenia.

"By greatly narrowing the specific genes involved in social disorders, our findings will help uncover targets for treatment and provide measures by which these and other treatments are successful in alleviating the desperation of autism, anxiety and other disorders," says Korenberg.

(Source: salk.edu)

Filed under williams syndrome neural activity brain activity plasticity genes brain development neuroscience science

782 notes

How video gaming can be beneficial for the brain
Video gaming causes increases in the brain regions responsible for spatial orientation, memory formation and strategic planning as well as fine motor skills. This has been shown in a new study conducted at the Max Planck Institute for Human Development and Charité University Medicine St. Hedwig-Krankenhaus. The positive effects of video gaming may also prove relevant in therapeutic interventions targeting psychiatric disorders.
In order to investigate how video games affect the brain, scientists in Berlin asked adults to play the video game “Super Mario 64” for 30 minutes a day over a period of two months. A control group did not play video games. Brain volume was quantified using magnetic resonance imaging (MRI). In comparison to the control group, the video gaming group showed increases in grey matter, in which the cell bodies of the nerve cells of the brain are situated. These plasticity effects were observed in the right hippocampus, right prefrontal cortex and the cerebellum. These brain regions are involved in functions such as spatial navigation, memory formation, strategic planning and fine motor skills of the hands. Most interestingly, these changes were more pronounced the more desire the participants reported to play the video game.
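A between-group volume comparison of this kind is commonly evaluated with a two-sample statistic such as Welch’s t. A minimal, stdlib-only sketch with made-up volume numbers — illustrative, not the study’s data:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    va, vb = variance(a), variance(b)          # sample variances (n - 1 denominator)
    se = (va / len(a) + vb / len(b)) ** 0.5    # standard error of the mean difference
    return (mean(a) - mean(b)) / se

# Hypothetical regional grey-matter volumes (arbitrary units), NOT study data:
gamers = [1.05, 1.10, 1.08, 1.12]
controls = [1.00, 0.98, 1.02, 1.01]
t = welch_t(gamers, controls)  # a positive t favors the gaming group
```

In practice, voxel-based morphometry studies run comparisons like this at many brain locations and must correct for multiple testing before reporting a regional grey-matter increase.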

“While previous studies have shown differences in brain structure of video gamers, the present study can demonstrate the direct causal link between video gaming and a volumetric brain increase. This proves that specific brain regions can be trained by means of video games,” says study leader Simone Kühn, senior scientist at the Center for Lifespan Psychology at the Max Planck Institute for Human Development. Kühn and her colleagues therefore suggest that video games could be therapeutically useful for patients with mental disorders in which brain regions are altered or reduced in size, such as schizophrenia, post-traumatic stress disorder or neurodegenerative diseases such as Alzheimer’s dementia.
“Many patients will accept video games more readily than other medical interventions,” adds psychiatrist Jürgen Gallinat, co-author of the study at Charité University Medicine St. Hedwig-Krankenhaus. Further studies investigating the effects of video gaming in patients with mental health issues are planned, and a study on video gaming in the treatment of post-traumatic stress disorder is currently ongoing.

Filed under video gaming plasticity gray matter memory formation brain structure neuroscience science