Posts tagged science

Scientists pinpoint brain’s area for numeral recognition
Scientists at the Stanford University School of Medicine have determined the precise anatomical coordinates of a brain “hot spot,” measuring only about one-fifth of an inch across, that is preferentially activated when people view the ordinary numerals we learn early on in elementary school, like “6” or “38.”
Activity in this spot relative to neighboring sites drops off substantially when people are presented with numbers that are spelled out (“one” instead of “1”), homophones (“won” instead of “1”) or “false fonts,” in which a numeral or letter has been altered.
“This is the first-ever study to show the existence of a cluster of nerve cells in the human brain that specializes in processing numerals,” said Josef Parvizi, MD, PhD, associate professor of neurology and neurological sciences and director of Stanford’s Human Intracranial Cognitive Electrophysiology Program. “In this small nerve-cell population, we saw a much bigger response to numerals than to very similar-looking, similar-sounding and similar-meaning symbols.
“It’s a dramatic demonstration of our brain circuitry’s capacity to change in response to education,” he added. “No one is born with the innate ability to recognize numerals.”
The finding pries open the door to further discoveries delineating the flow of math-focused information processing in the brain. It also could have direct clinical ramifications for patients with dyslexia for numbers and with dyscalculia: the inability to process numerical information.
The cluster Parvizi’s group identified consists of perhaps 1 to 2 million nerve cells in the inferior temporal gyrus, a superficial region of the brain’s outer cortex. The inferior temporal gyrus is already generally known to be involved in the processing of visual information.
The new study, published April 17 in the Journal of Neuroscience, builds on an earlier one in which volunteers had been challenged with math questions. “We had accumulated lots of data from that study about what parts of the brain become active when a person is focusing on arithmetic problems, but we were mostly looking elsewhere and hadn’t paid much attention to this area within the inferior temporal gyrus,” said Parvizi, who is senior author of the study.
Not, that is, until fourth-year medical student Jennifer Shum, who also is doing research in Parvizi’s lab, noticed that, among some subjects in the first study, a spot in the inferior temporal gyrus seemed to be substantially activated by math exercises. Charged with verifying that this observation was consistent from one patient to the next, Shum, the study’s lead author, reported that this was indeed the case. So, Parvizi’s team designed a new study to look into it further.
The new study relied on epileptic volunteers who, as a first step toward possible surgery to relieve unremitting seizures that weren’t responding to therapeutic drugs, had a small section of their skulls removed and electrodes applied directly to the brain’s surface. The procedure, which doesn’t destroy any brain tissue or disrupt the brain’s function, had been undertaken so that the patients could be monitored for several days to help attending neurologists find the exact location of their seizures’ origination points. While these patients are bedridden in the hospital for as much as a week of such monitoring, they are fully conscious, in no pain and, frankly, a bit bored.
Over time, Parvizi identified seven epilepsy patients with electrode coverage in or near the inferior temporal gyrus and got these patients’ consent to undergo about an hour’s worth of tests in which they would be shown images presented for very short intervals on a laptop computer screen, while activity in their brain regions covered by electrodes was recorded. Each electrode picked up activity from an area corresponding to about a half-million nerve cells (a drop in the bucket in comparison to the brain’s roughly 100 billion nerve cells).
To make sure that any numeral-responsive brain areas identified were really responding to numerals — and not just generic lines, angles and curves — these tests were carefully calibrated to distinguish brain responses to visual presentations of the classic numerals taught in Western schools, such as 3 or 50, as opposed to squiggly lines, letters of the alphabet, number-denoting words such as “three” or “fifty,” and symbols that in fact were also numerals but — because they were drawn from the Thai, Tibetan and Devanagari scripts — were extremely unlikely to be recognized as such by this particular group of volunteers.
In the first test, subjects were shown series of single numerals and letters — along with false fonts, in which the component parts of numerals or letters had been scrambled but defining curves and angles were retained, and the foreign-number symbols just described. A second test, controlling for meaning and sound, included numerals and their spelled-out versions (for instance, “1” and “one,” or “3” and “three”) and other words with the same sound or a similar one (“won” and “tree,” respectively).
All of our brains are shaped slightly differently. But in almost the identical spot within each study subject’s brain, the investigators observed a significantly larger response to numerals than to similar-shaped stimuli, such as letters or scrambled letters and numerals, or to words that either meant the same as the numerals or sounded like them.
Interestingly, said Parvizi, that numeral-processing nerve-cell cluster is parked within a larger group of neurons that is activated by visual symbols that have lines with angles and curves. “These neuronal populations showed a preference for numerals compared with words that denote or sound like those numerals,” he said. “But in many cases, these sites actually responded strongly to scrambled letters or scrambled numerals. Still, within this larger pool of generic neurons, the ‘visual numeral area’ preferred real numerals to the false fonts and to same-meaning or similar-sounding words.”
It seems, Parvizi said, that “evolution has designed this brain region to detect visual stimuli such as lines intersecting at various angles — the kind of intersections a monkey has to make sense of quickly when swinging from branch to branch in a dense jungle.” The adaptation of one part of this region in service of numeracy is a beautiful intersection of culture and neurobiology, he said.
Having nailed down a specifically numeral-oriented spot in the brain, Parvizi’s lab is looking to use it in tracing the pathways described by the brain’s number-processing circuitry. “Neurons that fire together wire together,” said Shum. “We want to see how this particular area connects with and communicates with other parts of the brain.”
New model of how brain functions are organized may revolutionize stroke rehab
A new model of brain lateralization for movement could dramatically improve the future of rehabilitation for stroke patients, according to Penn State researcher Robert Sainburg, who proposed and confirmed the model through novel virtual reality and brain lesion experiments.
Since the 1860s, neuroscientists have known that the human brain is organized into two hemispheres, each of which is responsible for different functions. Known as neural lateralization, this functional division has significant implications for the control of movement and is familiar in the phenomenon of handedness.
Understanding the connections between neural lateralization and motor control is crucial to many applications, including the rehabilitation of stroke patients. While most people intuitively understand handedness, the neural foundations underlying motor asymmetry have until recently remained elusive, according to Sainburg, professor of kinesiology and neurology and participant in the neuroscience and physiology graduate programs at the University’s Huck Institutes of the Life Sciences.
Research by Sainburg and his colleagues in the Center for Motor Control and published in the journal Brain has revealed a new model of motor lateralization that accounts for the neural foundations of handedness. The discovery could fundamentally change the way post-stroke rehabilitation is designed.
“Each hemisphere of the brain is specialized for different aspects of motor control, and thus each arm is ‘dominant’ for different features of movement,” said Sainburg. “The dominant arm is used for applying specific force sequences — such as when slicing a loaf of bread with a knife — and the other arm is used for impeding forces to maintain stable posture, such as holding the loaf of bread. Together these specialized control mechanisms are seamlessly integrated into everyday activities.
"Our research has shown that this integration breaks down in neural disorders such as stroke, which produces different motor deficits depending on whether the right or left hemisphere has been damaged," Sainburg continued. "Traditionally, physical rehabilitation professionals have used the same protocols to practice movements of the paretic arm, regardless of the hemisphere that has been damaged. Our research shows that each arm should be treated for different control deficits, and it also indicates that therapists should directly retrain patients in how to use the two arms together in order to recover function."
In preparing to test their model, Sainburg and his team selected study participants from the New Mexico Veterans Administration Hospital and Penn State Milton S. Hershey Medical Center based on specific criteria in order to accurately distinguish the motor control mechanisms specific to each brain hemisphere. Participants were then asked to perform a series of tasks on a virtual reality interface, programmed and designed by Sainburg, which allowed the researchers to record detailed 3D position and motion data. The data for all the participants’ hand trajectories and final positions were then aggregated to compare the effects of left versus right hemisphere damage on different aspects of control.
"Our results indicated that while both groups of patients showed similar clinical impairment in the contralesional arm, this was produced by different motor control deficits," Sainburg said. "Right hemisphere damaged patients were able to make straight movements that were directed toward the targets, but were unable to stabilize their arms in the targets at the end of motion. In contrast, left hemisphere damaged patients were unable to make straight and efficient movements, but had no difficulty stabilizing their arms at the end of motion. These results confirmed that each hemisphere contributes unique control to its contralesional arm, verifying why our arms seem different when we use them for the same tasks."
These results mirror those of Sainburg’s earlier studies of motor deficits in the ipsilesional arm of unilateral stroke patients, which formed the basis for his model of lateralization.
“Because both arms in stroke patients show motor deficits that are specific to the hemisphere that was damaged, we have concluded that the left arm is not simply controlled with the right hemisphere and vice versa,” Sainburg said. “This is a revolutionary new perspective on sensorimotor control: each hemisphere contributes different control mechanisms to the coordination of both arms, regardless of which arm is considered dominant.”
Sainburg and his colleagues are currently designing follow-up studies that will aid the development of new rehabilitation protocols addressing the specific motor deficits associated with each hemisphere.
Researchers identify pathway that may protect against cocaine addiction
A study by researchers at the National Institutes of Health gives insight into changes in the reward circuitry of the brain that may provide resistance against cocaine addiction. Scientists found that strengthening signaling along a neural pathway that runs through the nucleus accumbens — a region of the brain involved in motivation, pleasure, and addiction — can reduce cocaine-seeking behavior in mice.
Research suggests that about 1 in 5 people who use cocaine will become addicted, but it remains unclear why certain people are more vulnerable to drug addiction than others.
“A key step in understanding addiction and advancing treatment is to identify the differences in brain connectivity between subjects that compulsively take cocaine and those who do not,” said Ken Warren, Ph.D., acting director of the National Institute on Alcohol Abuse and Alcoholism (NIAAA). Researchers at NIAAA, part of NIH, conducted the study.
“Until now, most efforts have focused on finding traits associated with vulnerability to develop compulsive cocaine use. However, identifying mechanisms that promote resilience may prove to have more therapeutic value,” said the paper’s senior author, Veronica Alvarez, Ph.D., acting chief of the Section on Neuronal Structure in the NIAAA Laboratory for Integrative Neuroscience. The study is available on the Nature Neuroscience website ahead of print.
In the study, mice were conditioned to receive an intravenous dose of cocaine each time they poked their nose into a hole in their enclosure. Cocaine was then made unavailable for periods of time during the day. Some of the mice would stop seeking the drug once it was removed while others would obsessively continue to poke the hole in an effort to obtain the drug.
Mice that quickly stopped seeking the drug were found to have stronger connections along the indirect pathway — a neural tract that forms indirect projections into the midbrain and contains cells called medium spiny neurons expressing dopamine D2 receptors (D2-MSNs). A parallel pathway — known as the direct pathway — forms direct projections onto midbrain neurons and contains medium spiny neurons expressing D1 receptors (D1-MSNs). These two pathways are thought to work together in complementary but sometimes opposing ways to affect behavior.
"We were very surprised by the results of the study because we were originally looking for vulnerability factors for developing compulsive drug use,” said Dr. Alvarez. “Instead, we found changes that only happened in subjects that show a resilience to becoming compulsive drug users. Resilient mice had a strong inhibitory circuit that allowed them to exert better control over their drug intake."
To test this observation, researchers used lasers to activate individual neurons, and found that stimulating D2-MSNs in the nucleus accumbens decreased cocaine seeking in the mice. Blocking D2-MSN signaling with a chemical process increased motivation to obtain cocaine.
“This research advances our understanding of how the recruitment, activation and the interaction among brain circuits can either restrain or increase motivation to take drugs,” said David Shurtleff, Ph.D., acting deputy director of the National Institute on Drug Abuse.
Previous studies have shown that people with lower levels of dopamine D2 receptors in the striatum, a brain region associated with reward and working memory, are more likely to develop compulsive behaviors toward stimulant drugs.
Dopamine is a key neurotransmitter involved in reward-based learning and addiction. Cocaine disrupts communication between neurons at the synapse, the small junction between nerve cells, by blocking the reabsorption of dopamine into the transmitting neuron. As a result, dopamine continues to stimulate the receiving neuron, causing feelings of alertness and euphoria.

Researchers find out why some stress is good for you
Overworked and stressed out? Look on the bright side. Some stress is good for you.
“You always think about stress as a really bad thing, but it’s not,” said Daniela Kaufer, associate professor of integrative biology at the University of California, Berkeley. “Some amounts of stress are good to push you just to the level of optimal alertness, behavioral and cognitive performance.”
New research by Kaufer and UC Berkeley post-doctoral fellow Elizabeth Kirby has uncovered exactly how acute stress – short-lived, not chronic – primes the brain for improved performance.
In studies on rats, they found that significant but brief stressful events caused stem cells in their brains to proliferate into new nerve cells that, when mature two weeks later, improved the rats’ mental performance.
“I think intermittent stressful events are probably what keeps the brain more alert, and you perform better when you are alert,” she said.
Kaufer, Kirby and their colleagues in UC Berkeley’s Helen Wills Neuroscience Institute describe their results in a paper published April 16 in the new open access online journal eLife.
The UC Berkeley researchers’ findings, “in general, reinforce the notion that stress hormones help an animal adapt – after all, remembering the place where something stressful happened is beneficial to deal with future situations in the same place,” said Bruce McEwen, head of the Harold and Margaret Milliken Hatch Laboratory of Neuroendocrinology at The Rockefeller University, who was not involved in the study.
Kaufer is especially interested in how both acute and chronic stress affect memory, and since the brain’s hippocampus is critical to memory, she and her colleagues focused on the effects of stress on neural stem cells in the hippocampus of the adult rat brain. Neural stem cells are a sort of generic or progenitor brain cell that, depending on chemical triggers, can mature into neurons, astrocytes or other cells in the brain. The dentate gyrus of the hippocampus is one of only two areas in the brain that generate new brain cells in adults, and is highly sensitive to glucocorticoid stress hormones, Kaufer said.
Much research has demonstrated that chronic stress elevates levels of glucocorticoid stress hormones, which suppresses the production of new neurons in the hippocampus, impairing memory. This is in addition to the effect that chronically elevated levels of stress hormones have on the entire body, such as increasing the risk of chronic obesity, heart disease and depression.
Less is known about the effects of acute stress, Kaufer said, and studies have been conflicting.
To clear up the confusion, Kirby subjected rats to what, to them, is acute but short-lived stress – immobilization in their cages for a few hours. This led to stress hormone (corticosterone) levels as high as those from chronic stress, though for only a few hours. The stress doubled the proliferation of new brain cells in the hippocampus, specifically in the dorsal dentate gyrus.
Kirby discovered that the stressed rats performed better on a memory test two weeks after the stressful event, but not two days after the event. Using special cell labeling techniques, the researchers established that the new nerve cells triggered by the acute stress were the same ones involved in learning new tasks two weeks later.
“In terms of survival, the nerve cell proliferation doesn’t help you immediately after the stress, because it takes time for the cells to become mature, functioning neurons,” Kaufer said. “But in the natural environment, where acute stress happens on a regular basis, it will keep the animal more alert, more attuned to the environment and to what actually is a threat or not a threat.”
They also found that nerve cell proliferation after acute stress was triggered by the release of a protein, fibroblast growth factor 2 (FGF2), by astrocytes — brain cells formerly thought of as support cells, but that now appear to play a more critical role in regulating neurons.
“The FGF2 involvement is interesting, because FGF2 deficiency is associated with depressive-like behaviors in animals and is linked to depression in humans,” McEwen said.
Kaufer noted that exposure to acute, intense stress can sometimes be harmful, leading, for example, to post-traumatic stress disorder. Further research could help to identify the factors that determine whether a response to stress is good or bad.
“I think the ultimate message is an optimistic one,” she concluded. “Stress can be something that makes you better, but it is a question of how much, how long and how you interpret or perceive it.”
Musicians who learn a new melody demonstrate enhanced skill after a night’s sleep
A new study that examined how the brain learns and retains motor skills provides insight into musical skill.
Performance of a musical task improved among pianists whose practice of a new melody was followed by a night of sleep, says researcher Sarah E. Allen of Southern Methodist University in Dallas.
The study is among the first to look at whether sleep enhances the learning process for musicians practicing a new piano melody.
The study found, however, that when two similar melodies were practiced one after the other, followed by sleep, any gains in speed and accuracy achieved during practice diminished overnight, said Allen, an assistant professor of music education in SMU’s Meadows School of the Arts.
“The goal is to understand how the brain decides what to keep, what to discard, what to enhance, because our brains are receiving such a rich data stream and we don’t have room for everything,” Allen said. “I was fascinated to study this because as musicians we practice melodies in juxtaposition with one another all the time.”
Surprisingly, in a third result the study found that when two similar musical pieces were practiced one after the other, followed by practice of the first melody again, a night’s sleep enhanced pianists’ skills on the first melody, she said.
“The really unexpected result that I found was that for those subjects who learned the two melodies, if before they left practice they played the first melody again, it seemed to reactivate that memory so that they did improve overnight. Replaying it seemed to counteract the interference of learning a second melody.”
The study adds to a body of research in recent decades that has found the brain keeps processing the learning of a new motor skill even after active training has stopped. That’s also the case during sleep.
The findings may in the future guide the teaching of music, Allen said.
“In any task we want to maximize our time and our effort. This research can ultimately help us practice in an advantageous way and teach in an advantageous way,” Allen said. “There could be pedagogical benefits for the order in which you practice things, but it’s really too early to say. We want to research this further.”
The study, “Memory stabilization and enhancement following music practice,” will be published in the journal Psychology of Music.
New study builds on earlier brain research in rats and humans
Researchers in the field of procedural memory consolidation have systematically examined the process in both rats and humans.
Studies have found that after practice of a motor skill, such as running a maze or completing a handwriting task, the areas of the brain activated during practice continue to be active for about four to six hours afterward. Activation occurs whether a subject is, for example, eating, resting, shopping or watching TV, Allen said.
Also, researchers have found that the area of the brain activated during practice of the skill is activated again during sleep, she said, essentially recalling the skill and enhancing and reinforcing it. For motor skills such as finger-tapping a sequence, research found that performance tends to be 10 percent to 13 percent more efficient after sleep, with fewer errors.
“There are two phases of memory consolidation. We refer to the four to six hours after training as stabilization. We refer to the phase during sleep as enhancement,” Allen said. “We know that sleep seems to play a very important role. It makes memories a more permanent, less fragile part of the brain.”
Allen’s finding with musicians that practicing a second melody interfered with retaining the first melody is consistent with a growing number of similar research studies that have found learning a second motor skill task interferes with enhancement of the first task.
Impact of sleep on learning for musicians
Sixty undergraduate and graduate music majors participated in Allen’s study.
Divided into four groups, each musician practiced either one or both melodies during evening sessions, then returned the next day after sleep to be tested on their performance of the target melody.
The subjects learned the melodies on a Roland digital piano, practicing with their left hand during twelve 30-second practice blocks separated by 30-second rest intervals. Software written for the experiment made it possible to digitally record musical instrument data from the performances. The number of correct key presses per 30-second block reflected speed and accuracy.
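The scoring measure described above — correct key presses per 30-second block — can be sketched in a few lines of code. The data format here (timestamped pitch events checked against a repeating target melody) is a hypothetical illustration for clarity, not the study’s actual software:

```python
def correct_presses_per_block(presses, target, block_secs=30.0):
    """Count correct key presses in each practice block.

    presses: list of (timestamp_seconds, pitch) tuples, in time order.
    target:  the melody as a repeating sequence of pitches.
    Returns one count per 30-second block.
    """
    counts = {}
    expected_idx = 0  # position in the repeating target melody
    for t, pitch in presses:
        block = int(t // block_secs)
        if pitch == target[expected_idx % len(target)]:
            counts[block] = counts.get(block, 0) + 1
        expected_idx += 1  # simplification: no realignment after a wrong note
    if not presses:
        return []
    n_blocks = int(presses[-1][0] // block_secs) + 1
    return [counts.get(b, 0) for b in range(n_blocks)]
```

A real scoring routine would also need to handle rest intervals and realign the expected note after an error; this sketch only conveys the basic speed-and-accuracy count.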
Musicians who learned a single melody showed performance gains on the test the next day.
Those who learned a second melody immediately after learning the target melody didn’t get any overnight enhancement in the first melody.
Those who learned two melodies, but practiced the first one again before going home to sleep, showed overnight enhancement when tested on the first melody.
“This was the most surprising finding, and perhaps the most important,” Allen reported in the Psychology of Music. “The brief test of melody A following the learning of melody B at the end of the evening training session seems to have reactivated the memory of melody A in a way that inhibited the interfering effects of learning melody B that were observed in the AB-sleep-A group.”— Margaret Allen

Congenitally absent optic chiasm: Making sense of visual pathways
One way to increase our understanding of bilateral brains, like our own, is to inspect their paired sensory systems. In our visual system, the optic nerves normally combine at a place called the optic chiasm. Here half the fibers from each eye cross over to the opposite hemisphere. When this natural partition fails to develop normally, the system compensates in different ways. In people with albinism, for example, almost all of the fibers fully cross at the chiasm. As a result, images are combined in the brain in such a way that full depth of vision is limited. Their eyes also may move slightly independent of each other, or dart back and forth in a condition known as nystagmus. When the opposite situation occurs, that in which the optic nerves do not cross at all during their development, it is called congenital achiasma. An individual with this rare condition was recently studied with different forms of MRI. The results, reported in the journal Neuropsychologia, show that achiasma can occur as an isolated defect, lacking any structural abnormalities in other pathways that cross the midline. The study also demonstrated that the part of the cortex that first receives the visual input, the primary visual cortex, does not rely on information from the opposite side to perform its immediate functions.
When input to the two halves of the brain is parsed according to the eye rather than to the visual field, binocularity is typically affected in some way or another. The eyes may have a slightly crossed configuration, and nystagmus occurs more readily as the visual system updates. The subject of the present study, henceforth known as GB, additionally displayed an eye effect known as seesaw nystagmus. In this type of nystagmus, the eyes alternately move up and down, out of phase with each other. When initial MRI scans failed to show an optic chiasm in patient GB, researchers subsequently verified that it was completely absent by tracing the nerves with diffusion tensor imaging (DTI). The subject was also given a series of tests during a functional MRI scan (fMRI) in order to see how the visual field mapped to his cortex.
By dividing the visual field into four quadrants, and presenting a stimulus to each in turn, the researchers confirmed their suspicions that each hemisphere was mapping the whole visual field. To the level of detail available from the MRI scans, both halves of the visual field, the nasal and temporal retinal maps, were found to overlap completely. The researchers also showed that in the primary visual cortex, monocular stimulation activated only the ipsilateral (same side) cortex. Higher cortical areas, such as the V5 motion-associated area, and the fusiform face region, could be activated binocularly.
The MRI scans further showed that all parts of the corpus callosum, including those that connect the visual cortex, were intact and of normal size. It appears that at the level of V5 and above, the callosum contributes significantly to binocular integration. In a normal brain, with a normal chiasma, callosal projections connecting the primary visual cortex might also contribute to the seamless integration of the visual scene across the midline. For rapidly moving objects, however, it is unclear how the signal delays introduced by the comparatively long fibers that cross the hemisphere would be handled. Alternatively, these projections may be more involved with attention, or with more complex effects like binocular rivalry.
It is still not entirely known why the chiasma occasionally fails to develop. The condition can be genetic, but probably also involves factors like conditions inside the womb. Animal models have demonstrated the effects of various extracellular matrix and cell adhesion molecules on chiasma development. Specifically, axon guidance has been shown to be regulated by the expression of molecules such as NR-CAM, neurofascin, and Vax-1. While a deficiency in any one of these molecules can have effects on the chiasma, any effects must be considered in the context of a much larger puzzle. A deficiency of Vax-1, for example, can cause complete absence of the chiasma, but it is also accompanied by various other midline anomalies. These include problems with the development of the callosum, something not seen here with patient GB.
The source of binocular activation of motion and object-specific areas in GB is also a point of interest. There are many channels through which this activation could occur, including indirect projections from subcortical regions involved in visual processing. Further study of patients like GB, together with more detailed genetic information about them, will help us understand how the visual system develops, and how the visual world integrates within a bilateral mind. Once we can do that, perhaps then we will be able to explain other unique cases, such as the woman who sees everything upside down.

Fainting May Run in Families While Triggers May Not
New research suggests that fainting may be genetic and, in some families, only one gene may be responsible. However, a predisposition to certain triggers, such as emotional distress or the sight of blood, may not be inherited. The study is published in the April 16, 2013, print issue of Neurology®, the medical journal of the American Academy of Neurology. Fainting, also called vasovagal syncope, is a brief loss of consciousness when your body reacts to certain triggers. It affects at least one out of four people.
“Our study strengthens the evidence that fainting may be commonly genetic,” said study author Samuel F. Berkovic, MD, FRS, with the University of Melbourne in Victoria, Australia, and a member of the American Academy of Neurology. “Our hope is to uncover the mystery of this phenomenon so that we can recognize the risk or reduce the occurrence in people as fainting may be a safety issue.”
Researchers interviewed 44 families with a history of fainting and reviewed their medical records. Of those, six families had a large number of affected people, suggesting that a single gene was running through the family. The first family consisted of 30 affected people over three generations, with an average age at fainting onset of eight to nine years. The other families were made up of four to 14 affected family members. Affected family members reported typical triggers, such as the sight of blood, injury, medical procedures, prolonged standing, pain and frightening thoughts. However, the triggers varied greatly within the families.
Genotyping of the largest family showed significant linkage to a specific region on chromosome 15, known as 15q26. Linkage to this region was excluded in two medium-sized families but not in the two smaller families.
Researchers untangle molecular pathology of giant axonal neuropathy
Giant axonal neuropathy (GAN) is a rare genetic disorder that causes central and peripheral nervous system dysfunction. GAN is known to be caused by mutations in the gigaxonin gene and is characterized by tangling and aggregation of neural projections, but the mechanistic link between the genetic mutation and the effects on neurons is unclear. In this issue of the Journal of Clinical Investigation, Robert Goldman and colleagues at Northwestern University uncover how mutations in gigaxonin contribute to neural aggregation. They demonstrated that gigaxonin regulates the degradation of neurofilament proteins, which help to guide outgrowth and morphology of neural projections. Loss of gigaxonin in either GAN patient cells or transgenic mice increased levels of neurofilament proteins, causing tangling and aggregation of neural projections. Importantly, expression of gigaxonin allowed for clearance of neurofilament proteins in neurons. These findings demonstrate that mutations in gigaxonin cause accumulation of neurofilament proteins and shed light on the molecular pathology of GAN.
A brain-training task that increases the number of items an individual can remember over a short period of time may boost performance in other problem-solving tasks by enhancing communication between different brain areas. The new study being presented this week in San Francisco is one of a growing number of experiments on how working-memory training can measurably improve a range of skills – from multiplying in your head to reading a complex paragraph.

“Working memory is believed to be a core cognitive function on which many types of high-level cognition rely, including language comprehension and production, problem solving, and decision making,” says Brad Postle of the University of Wisconsin-Madison, who is co-chairing a session on working-memory training at the Cognitive Neuroscience Society (CNS) annual meeting today in San Francisco. Work by various neuroscientists to document the brain’s “plasticity” – changes brought about by experience – along with technical advances in using electromagnetic techniques to stimulate the brain and measure changes, has enabled researchers to explore the potential for working-memory training like never before, he says.
The cornerstone brain-training exercise in this field has been the “n-back” task, a challenging working-memory task that requires an individual to mentally juggle several items simultaneously. Participants must remember both the most recent stimulus and an increasing number of stimuli before it (e.g., the stimulus “1-back,” “2-back,” etc.). These tasks can be adapted to include an audio component or to require remembering more than one feature of each stimulus over time – for example, both the color and location of a shape.
Through a number of experiments over the past decade, Susanne Jaeggi of the University of Maryland, College Park, and others have found that participants who train with n-back tasks over the course of approximately a month for about 20 minutes per day not only get better at the n-back task itself, but also experience “transfer” to other cognitive tasks on which they did not train. “The effects generalize to important domains such as attentional control, reasoning, reading, or mathematical skills,” Jaeggi says. “Many of these improvements remain over the course of several months, suggesting that the benefits of the training are long lasting.”
Still unresolved and controversial, however, is which factors determine whether working-memory training will generalize to other domains, and how the brain changes in response to the training. Work by Postle’s group, which applies electromagnetic stimulation to the brains of people undergoing working-memory training, addresses some of these questions.
Training increases connectivity
Bornali Kundu of the University of Wisconsin-Madison, who works in Postle’s laboratory, used transcranial magnetic stimulation (TMS) with electroencephalography (EEG) to measure activity in specific brain circuits before and after training with an n-back task. “Our main finding was that training on the n-back task increased the number of items an individual could remember over a short period of time,” explains Kundu, who is presenting these new results today. “This increase in short-term memory performance was associated with enhanced communication between distant brain areas, in particular between the parietal and frontal brain areas.”
In the n-back task, Kundu’s team presented stimuli one at a time on a computer screen and asked participants to decide if the current stimulus matched both the color and location of the stimulus presented a certain number of presentations previously. The color varied among seven primary colors, and the location varied among eight possible positions arranged in a square formation. The control task was playing the video game Tetris, which involves moving colored shapes to different locations but does not require participants to remember anything. Before and after the training, researchers administered a range of cognitive tasks on which subjects did not receive training, and simultaneously delivered TMS while recording EEG, to measure communication between brain areas during task performance.
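The logic of the dual n-back task described above can be sketched in a few lines of code. This is a minimal illustration, not the lab’s actual stimulus program; the color names, trial generator, and scoring function are all hypothetical, and only the core rule – a trial counts as a target when both color and location match the stimulus shown n steps earlier – comes from the description above:

```python
import random

# Hypothetical stimulus sets matching the task described:
# seven colors and eight positions arranged around a square.
COLORS = ["red", "green", "blue", "yellow", "purple", "orange", "cyan"]
LOCATIONS = list(range(8))

def make_trials(num_trials, seed=0):
    """Generate a random sequence of (color, location) stimuli."""
    rng = random.Random(seed)
    return [(rng.choice(COLORS), rng.choice(LOCATIONS))
            for _ in range(num_trials)]

def is_match(trials, i, n):
    """A trial is a target only if BOTH the color and the location
    match the stimulus presented n steps earlier."""
    if i < n:
        return False
    return trials[i] == trials[i - n]

def score_responses(trials, responses, n):
    """Fraction of trials where the participant's yes/no response
    agrees with the true n-back target status."""
    correct = sum(responses[i] == is_match(trials, i, n)
                  for i in range(len(trials)))
    return correct / len(trials)
```

In adaptive versions of the task, n is raised after accurate blocks and lowered after poor ones, which keeps the memory load near the limit of what each participant can hold.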
After practicing the n-back task for 5 hours a day and 5 days per week over 5 weeks, subjects were able to remember more items over short periods of time. Importantly, for those whose working memory improved, communication between the dorsolateral prefrontal cortex (DLPFC) and parietal cortex also improved. “This is in comparison to the control group, who showed no such differences in neural communication after practicing Tetris for 5 weeks,” Kundu says.
Working-memory training also produced improvement on cognitive tasks for which participants were not trained but that are also believed to rely on communication between the parietal cortex and DLPFC. For two of these tasks – detecting a change in a briefly presented array of squares, and detecting a red letter “C” embedded in a field of distracting rotated red “C”s and blue “C”s – those who had trained on the n-back task also showed a decrease in task-related EEG activity, mirroring the decrease seen on the training task itself. “The overall picture seems to be that the extent of transfer of training to untrained tasks depends on the overlap of neural circuits recruited by the two,” Kundu says.
Developing future therapies
Moving forward, many cognitive neuroscientists are working to see how working-memory training may specifically help clinical populations, such as patients with ADHD. “If we can learn the ‘rules’ that govern how, why, and when cognitive training can produce improvements that generalize to untrained tasks, it may be that therapies can be developed for patients suffering from neurological or psychiatric disease,” Postle says.
Both Jaeggi’s team, as well as Torkel Klingberg of the Karolinska Institute in Sweden, who is also presenting at the symposium today in San Francisco, have had success with such training for children with ADHD, decreasing the symptoms of inattention. “Here, the reason working-memory training may transfer to tests of fluid intelligence, as well as to a reduction in ADHD-associated hyperactivity symptoms, may be because both of those complex behaviors use some of the same brain circuits also used in performing the working-memory training tasks,” Kundu says.
“Individual differences in working memory performance have been related to individual differences in numerous real world skills such as reading comprehension, performance on standardized tests, and much more,” she adds. “I would not expect the same sorts of transfer effects that have been seen with working-memory training to happen if an individual practiced a task that used a minimally overlapping network, such as, for example, shooting three-pointers – which presumably uses different brain areas like primary and secondary motor cortex and the cerebellum.”
Jaeggi says that it is important to understand that cognitive abilities are not as unchangeable as some might think. “Even though there is certainly a hereditary component to mental abilities, that does not mean that there are not also components that are malleable and respond to experience and practice,” she says. “Whereas we try to strengthen participants’ working memory skills in our research, there are other routes that are possible as well, such as for example physical or musical training, meditation, nutrition, or even sleep.”
Despite all the promising research, Jaeggi says, researchers still need to understand many aspects of this work, such as “individual differences that influence training and transfer effects, the question of how long the effects last, and whether and how the effects translate into more real-world settings and ultimately, academic achievement.”
(Source: cogneurosociety.org)
Scientists learn what makes nerve cells so strong
How do nerve cells — which can each be up to three feet long in humans — keep from rupturing or falling apart?
Axons, the long, cable-like projections on neurons, are made stronger by a unique modification of the common molecular building block of the cell skeleton. The finding, which may help guide the search for treatments for neurodegenerative diseases, was reported in the April 10 issue of Neuron by researchers at the University of Illinois at Chicago College of Medicine.
Microtubules are long, hollow cylinders that are a component of the cytoskeleton in all cells of the body. They also support transport of molecules within the cell and facilitate growth. They are made up of polymers of a building-block substance called tubulin.
“Except in neurons, cells’ microtubules are in constant dynamic flux – being taken apart and rebuilt,” says Scott Brady, professor and head of anatomy and cell biology at UIC and principal investigator on the study. But only neurons grow so long, he said, and once created they must endure throughout a person’s life, as much as 80 to 100 years. The microtubules of neurons are able to withstand laboratory conditions that cause other cells’ microtubules to break apart.
Brady had been able to show some time ago that the neuron’s stability depended on a modification of tubulin.
“But when we tried to figure out what the modification was, we didn’t have the tools,” he said.
Yuyu Song, a former graduate student in Brady’s lab and the first author of the study, took up the question. “It was like a detective story with many possibilities that had to be ruled out one by one,” she said. Song, who is now a post-doctoral fellow at Howard Hughes Medical Institute at Yale School of Medicine, used a variety of methods to determine the nature of the modification and where it occurs.
She found that tubulin is modified by the chemical bonding of polyamines, positively charged molecules, at sites that might otherwise be chinks where tubulin could be broken down, causing the microtubules to fall apart. She was also able to show that the enzyme transglutaminase was responsible for adding the protective polyamines.
The blocking of a vulnerable site on tubulin would explain the extraordinary stability of neuron microtubules, said Brady. However, convincing others required the “thorough and elegant work” that Song brought to it, he said. “It’s such a radical finding that we needed to show all the key steps along the way.”
The authors also note that increased microtubule stability correlates with decreased neuronal plasticity — and both occur in the process of aging and in some neurodegenerative diseases. Continued research, they say, may help identify novel therapeutic approaches to prevent neurodegeneration or allow regeneration.