Results also partly explain why the 2009 swine flu virus, and a vaccine against it, led to spikes in the sleep disorder.
As the H1N1 swine flu pandemic swept the world in 2009, China saw a spike in cases of narcolepsy — a mysterious disorder that involves sudden, uncontrollable sleepiness. Meanwhile, in Europe, around 1 in 15,000 children who were given Pandemrix — a now-defunct flu vaccine that contained fragments of the pandemic virus — also developed narcolepsy, a chronic disease.

Immunologist Elizabeth Mellins and narcolepsy researcher Emmanuel Mignot at Stanford University School of Medicine in California and their collaborators have now partly solved the mystery behind these events, while also confirming a longstanding hypothesis that narcolepsy is an autoimmune disease, in which the immune system attacks healthy cells.
Narcolepsy is mostly caused by the gradual loss of neurons that produce hypocretin, a hormone that keeps us awake. Many scientists had suspected that the immune system was responsible, but the Stanford team has found the first direct evidence: a special group of CD4+ T cells (a type of immune cell) that targets hypocretin and is found only in people with narcolepsy.
“Up till now, the idea that narcolepsy was an autoimmune disorder was a very compelling hypothesis, but this is the first direct evidence of autoimmunity,” says Mellins. “I think these cells are a smoking gun.” The study is published today in Science Translational Medicine.
Thomas Scammell, a neurologist at Harvard Medical School in Boston, Massachusetts, says that the results are welcome after “years of modest disappointment”, marked by many failures to find antibodies made by a person’s body against their own hypocretin. “It’s one of the biggest things to happen in the narcolepsy field for some time.”
Loose ends
It is not clear why some people make these T cells and others do not, but genetics may play a part. In earlier work, Mignot showed that 98% of people with narcolepsy have a variant of the gene HLA that is found in only 25% of the general population.
Environmental factors, such as infections, probably matter too. Mellins’ working model is that narcolepsy happens when people with a genetic predisposition, which involves having several narcolepsy-related gene variants, encounter an environmental factor that mimics hypocretin, triggering a response from the immune system. The 2009 H1N1 virus was one such trigger: the team found that these same special CD4+ T cells also recognize a protein from the pandemic H1N1 virus.
Narcolepsy of course was around long before the 2009 pandemic. And since new cases of the disease tend to arise right after winter — following the seasonal peak in flu — it’s possible that other strains or even other viruses are involved, too.
But the results do not fully explain the Pandemrix mystery, because other flu vaccines contained the same proteins but did not lead to a spike in narcolepsy cases. Regardless, Mellins says that it should be possible to avoid repeating the same mistake by ensuring that future flu vaccines do not contain components that resemble hypocretin.
Another loose end is that “they don’t show how these T cells are actually killing the hypocretin neurons”, adds Scammell. “It’s like a murder mystery and we don’t know who the real killer is.” He thinks that it is unlikely that the T cells are the true culprits; instead, they could be acting through an intermediary, or might merely be a symptom of some other destructive event.
“The results are very important, but they need to do a replication study in a large group of patients and controls,” says Gert Lammers, a neurologist at Leiden University Medical Center in the Netherlands and president of the European Narcolepsy Network. “If the findings are confirmed, the first important spin-off might be the development of a new diagnostic test.”
Finnish and Danish researchers have developed a new method that performs decoding, or brain-reading, during continuous listening to real music. Based on recorded brain responses, the method predicts how certain features related to tone color and rhythm of the music change over time, and recognizes which piece of music is being listened to. The method also allows pinpointing the areas in the brain that are most crucial for the processing of music. The study was published in the journal NeuroImage.

Using functional magnetic resonance imaging (fMRI), the research team at the Finnish Centre of Excellence in Interdisciplinary Music Research at the Universities of Jyväskylä and Helsinki, and the Center for Functionally Integrative Neuroscience at Aarhus University, Denmark, recorded the brain responses of participants while they listened to a 16-minute excerpt of the Beatles' album Abbey Road. They then used computational algorithms to extract a collection of musical features from the recording, and employed machine-learning methods to train a computer model that predicts how the features of the music change over time. Finally, they developed a classifier that predicted which part of the music the participant was listening to at each point in time.
The researchers found that most of the musical features included in the study could be reliably predicted from the brain data, and that the piece being listened to could be identified significantly better than chance. However, fairly large differences in prediction accuracy were found between participants. An interesting finding was that areas outside the auditory cortex, including motor, limbic and frontal areas, had to be included in the models to obtain reliable predictions, thus providing evidence for the important role of these areas in the processing of musical features.
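The piece-identification step described above can be illustrated with a minimal, hypothetical sketch. The function names and toy numbers below are invented for illustration (the real study decoded feature time courses from fMRI data with trained models): a predicted feature time course is compared, via Pearson correlation, with the known feature time course of each candidate piece, and the best match wins.

```python
# Hypothetical sketch of stimulus identification by template matching.
# "predicted" stands in for a feature time course decoded from brain data;
# "candidates" holds the known feature time courses of each piece.

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def identify_piece(predicted, candidates):
    """Return the label of the candidate time course that best matches."""
    return max(candidates, key=lambda label: pearson(predicted, candidates[label]))

# Toy example: the decoded "brightness" curve resembles piece "A".
candidates = {
    "A": [0.1, 0.4, 0.9, 0.3, 0.2],
    "B": [0.9, 0.2, 0.1, 0.8, 0.7],
}
predicted = [0.2, 0.5, 0.8, 0.35, 0.25]
print(identify_piece(predicted, candidates))  # → A
```

The sketch only shows why a well-predicted time course suffices to recognize the stimulus better than chance; the prediction side in the actual study was handled by the trained machine-learning models.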
"We believe that decoding provides a method that complements other existing methods to obtain more reliable information about the complex processing of music in the brain", says Professor Petri Toiviainen from the University of Jyväskylä. "Our results provide additional evidence for the important involvement of emotional and motor areas in music processing."
Learning requires constant reconfiguration of the connections between nerve cells. Two new studies now yield new insights into the molecular mechanisms that underlie the learning process.

Learning and memory are made possible by the incessant reorganization of nerve connections in the brain. Both processes are based on targeted modifications of the functional interfaces between nerve cells – the so-called synapses – which alter their form, molecular composition and functional properties. In effect, connections between cells that are frequently activated together are progressively altered so that they respond to subsequent signals more rapidly and more strongly. In this way, information can be encoded in patterns of synaptic activity and promptly recalled when needed. The converse is also true: learned behaviors can be lost through disuse, because inactive synapses are less likely to transmit an incoming impulse, leading to the decay of such connections.
How exactly an individual synapse is altered without simultaneously affecting nearby nerve cells, or other synapses on the same cell, is a question that is central to Michael Kiebler’s research. Kiebler, a biochemist, holds the Chair of Cell Biology in the Faculty of Medicine at LMU. “It is now clear that the changes take place in the cell that is stimulated by synaptic input – the post-synaptic cell – and in particular in its so-called dendritic spines,” he says, “and particles that are known as ‘neuronal RNA granules’ deliver mRNA molecules to these sites.” These mRNAs represent the blueprints for the synthesis of the proteins responsible for reconfiguring the synapses. Kiebler’s team has developed a model which postulates that these granules migrate from dendrite to dendrite and release their mRNAs specifically at sites that are repeatedly activated. This would ensure that the relevant proteins are synthesized only where they are needed within the cell.
In spite of the potential significance of the model, the molecular mechanisms required for its realization have remained obscure. mRNA-binding proteins, including Staufen2 (Stau2) and Barentsz, are essential components of the granules, and Kiebler’s team, in collaboration with Giulio Superti-Furga’s group (CeMM, Vienna), has now used specific antibodies to isolate and characterize neuronal granules that contain either Stau2 or Barentsz.
Surprising diversity
It has generally been assumed that all neuronal RNA granules have essentially similar compositions. However, the new findings indicate that this is not the case. A comparison between Stau2- and Barentsz-containing granules reveals that they differ in about two-thirds of their proteins. “This suggests that the RNA granules are highly heterogeneous and dynamic in their composition,” says Kiebler. “And that makes sense to me, because it would mean that the granules can perform different functions depending on which mRNAs they carry.” Furthermore, the researchers have shown that the granules contain virtually none of the factors known to promote the translation of mRNAs into proteins. On the contrary, they include many molecules that repress protein synthesis. This in turn implies that the process of mRNA transport is uncoupled from the subsequent production of the proteins they encode.
In a complementary study, Kiebler’s team also characterized the mRNA cargoes associated with the granules. “Until now, none of the RNA molecules present in Stau2-containing granules in mammalian nerve cells had been defined, but we have now been able to identify many specific mRNAs,” Kiebler explains. Further experiments revealed that Stau2 stabilizes the mRNAs, allowing them to be used more often for the production of proteins. Moreover, the researchers have shown that specialized structures within these mRNAs, called “Staufen-Recognized Structures” (SRS), are essential for their recognition and stabilization by Stau2. “This allows us to propose a molecular mechanism for RNA recognition for the first time,” says Kiebler.
Taken together, the two new papers (1, 2) provide novel insights into the molecular mechanisms that underlie learning and memory, and the scientists now want to dissect the details in future studies. “In the long term, we are particularly interested in the question of how an activated synapse can alter the state of the granules and induce the production of protein,” Kiebler notes. It is becoming increasingly clear that RNA-binding proteins play essential roles in nerve cells, and that disruption of their action can lead to neurodegenerative diseases and neurological dysfunction. “Clearly, not only classical conditions such as Alzheimer’s or Parkinson’s disease, in which RNA-binding proteins are always involved, but also cognitive defects or age-associated impairment of learning ability must be viewed in this context,” Kiebler concludes.
Anyone who has tried to learn a second language knows how difficult it is to absorb new words and use them to accurately express ideas in a completely new cultural format. Now, research into some of the fundamental ways the brain accepts information and tags it could lead to new, more effective ways for people to learn a second language.

Tests have shown that the human brain uses the same neuron system to see an action and to understand an action described in language. Researchers at Arizona State University have been testing the boundaries of this hypothesis, which focuses on the operation of the mirror neuron system (MNS). The ASU group has found that the MNS can be modified by language use, and that the modification can slightly change visual perception.
The work focuses on how the brain receives and classifies information that a person sees (an action, like one person giving another a pencil), and tests how the brain receives the information from a description of an action (simulation), like “Cameron gives Annagrace a pencil.”
“We tested the idea that the mirror neuron system, which is part of the motor system, is used in the simulation process,” said Arthur Glenberg, an ASU professor of psychology. “The MNS is active both when a person takes an action (e.g., giving a pencil), and when that action is observed (witnessing the pencil being given). Supposedly, the MNS allows us to infer the intentions of other people so that when Jane sees Cameron act, her MNS resonates, and then Jane understands why she would give Annagrace the pencil and infers that that is the reason why Cameron gives Annagrace the pencil.”
Glenberg, Noah Zarr, formerly an ASU psychology major and now a graduate student at Indiana University, and Ryan Ferguson, a graduate student in ASU’s Cognitive Science training area in the Department of Psychology, recently published their findings in the paper “Language comprehension warps the mirror neuron system,” in Frontiers in Human Neuroscience. This research began with Zarr’s honors thesis.
“The MNS has been associated with many social behaviors, such as action, understanding and empathy, as well as language understanding,” Glenberg explained. “Previous work has demonstrated that adapting the MNS can affect language comprehension. But no one had yet shown that the process of language comprehension can itself change the MNS.
“The question becomes, when Jane reads, ‘Cameron gives Annagrace the pencil,’ is she using her MNS just like when she sees Cameron give the pencil?” Glenberg asks. “To test this idea, we used the fact that the MNS is used in both action and perception of action, and the idea that repeated use of a neural system leads to adaptation of that system.
“So, in the tests, participants read a bunch of transfer sentences,” Glenberg explained. “We then show them a bunch of videos of transfer. We have shown that after reading the sentences, people are impaired (a little bit) in perceiving the transfer in the videos, which means the reading modifies the same MNS used in action understanding.”
While the work explores the boundaries of a theory on comprehension, there are applications in which it could be employed, Glenberg said.
“If language comprehension is a simulation process that uses neural systems of action, then perhaps we can better teach kids how to understand what they read by getting them to literally simulate the actions,” he explained.
Glenberg added that part of his ongoing research into the MNS, the system that allows us to decipher what we see and understand the intent of language, is to test the idea of simulation and how it can help Latino English language learners read better in English.
Researchers have discovered a cause of aging in mammals that may be reversible.

The essence of this finding is a series of molecular events that enable communication inside cells between the nucleus and mitochondria. As communication breaks down, aging accelerates. By administering a molecule naturally produced by the human body, scientists restored the communication network in older mice. Subsequent tissue samples showed key biological hallmarks that were comparable to those of much younger animals.
“The aging process we discovered is like a married couple—when they are young, they communicate well, but over time, living in close quarters for many years, communication breaks down,” said Harvard Medical School Professor of Genetics David Sinclair, senior author on the study. “And just like with a couple, restoring communication solved the problem.”
This study was a joint project between Harvard Medical School, the National Institute on Aging, and the University of New South Wales, Sydney, Australia, where Sinclair also holds a position.
The findings are published Dec. 19 in Cell.
Communication breakdown
Mitochondria are often referred to as the cell’s “powerhouse,” generating chemical energy to carry out essential biological functions. These self-contained organelles, which live inside our cells and house their own small genomes, have long been identified as key biological players in aging. As they become increasingly dysfunctional over time, many age-related conditions such as Alzheimer’s disease and diabetes gradually set in.
Researchers have generally been skeptical of the idea that aging can be reversed, due mainly to the prevailing theory that age-related ills are the result of mutations in mitochondrial DNA—and mutations cannot be reversed.
Sinclair and his group have been studying the fundamental science of aging—which is broadly defined as the gradual decline in function with time—for many years, primarily focusing on a group of genes called sirtuins. Previous studies from his lab showed that one of these genes, SIRT1, was activated by the compound resveratrol, which is found in grapes, red wine and certain nuts.

Ana Gomes, a postdoctoral scientist in the Sinclair lab, had been studying mice in which this SIRT1 gene had been removed. While they accurately predicted that these mice would show signs of aging, including mitochondrial dysfunction, the researchers were surprised to find that most mitochondrial proteins coming from the cell’s nucleus were at normal levels; only those encoded by the mitochondrial genome were reduced.
“This was at odds with what the literature suggested,” said Gomes.
As Gomes and her colleagues investigated potential causes for this, they discovered an intricate cascade of events that begins with a chemical called NAD and concludes with a key molecule that shuttles information and coordinates activities between the cell’s nuclear genome and the mitochondrial genome. Cells stay healthy as long as coordination between the genomes remains fluid. SIRT1’s role is intermediary, akin to a security guard; it assures that a meddlesome molecule called HIF-1 does not interfere with communication.
For reasons still unclear, as we age, levels of the initial chemical NAD decline. Without sufficient NAD, SIRT1 loses its ability to keep tabs on HIF-1. Levels of HIF-1 escalate and begin wreaking havoc on the otherwise smooth cross-genome communication. Over time, the research team found, this loss of communication reduces the cell’s ability to make energy, and signs of aging and disease become apparent.
“This particular component of the aging process had never before been described,” said Gomes.
While the breakdown of this process causes a rapid decline in mitochondrial function, other signs of aging take longer to occur. Gomes found that by administering an endogenous compound that cells transform into NAD, she could repair the broken network and rapidly restore communication and mitochondrial function. If the compound was given early enough—prior to excessive mutation accumulation—within days, some aspects of the aging process could be reversed.

Cancer connection
Examining muscle from two-year-old mice that had been given the NAD-producing compound for just one week, the researchers looked for indicators of insulin resistance, inflammation and muscle wasting. In all three instances, tissue from the mice resembled that of six-month-old mice. In human years, this would be like a 60-year-old converting to a 20-year-old in these specific areas.
One particularly important aspect of this finding involves HIF-1. More than just an intrusive molecule that foils communication, HIF-1 normally switches on when the body is deprived of oxygen; otherwise, it remains silent. Cancer, however, is known to activate and hijack HIF-1. Researchers have been investigating the precise role HIF-1 plays in cancer growth.
“It’s certainly significant to find that a molecule that switches on in many cancers also switches on during aging,” said Gomes. “We’re starting to see now that the physiology of cancer is in certain ways similar to the physiology of aging. Perhaps this can explain why the greatest risk of cancer is age.”
“There’s clearly much more work to be done here, but if these results stand, then certain aspects of aging may be reversible if caught early,” said Sinclair.
The researchers are now looking at the longer-term outcomes of the NAD-producing compound in mice and how it affects the mouse as a whole. They are also exploring whether the compound can be used to safely treat rare mitochondrial diseases or more common diseases such as Type 1 and Type 2 diabetes. Longer term, Sinclair plans to test if the compound will give mice a healthier, longer life.
Newcastle University scientists have discovered that the re-organisation of connections that the brain undergoes throughout our lives begins earlier in girls, which may explain why they mature faster during the teenage years.

As we grow older, our brains undergo a major reorganisation reducing the connections in the brain. Studying people up to the age of 40, scientists led by Dr Marcus Kaiser and Ms Sol Lim at Newcastle University found that while overall connections in the brain get streamlined, long-distance connections that are crucial for integrating information are preserved.
The researchers suspect this newly-discovered selective process might explain why brain function does not deteriorate – and indeed improves – during this pruning of the network. Interestingly, they also found that these changes occurred earlier in females than in males.
Explaining the work which is being published in Cerebral Cortex, Dr Kaiser, Reader in Neuroinformatics at Newcastle University, says: “Long-distance connections are difficult to establish and maintain but are crucial for fast and efficient processing. If you think about a social network, nearby friends might give you very similar information – you might hear the same news from different people. People from different cities or countries are more likely to give you novel information. In the same way, some information flow within a brain module might be redundant whereas information from other modules, say integrating the optical information about a face with the acoustic information of a voice is vital in making sense of the outside world.”
Brain “pruned”
The researchers at Newcastle, Glasgow and Seoul Universities evaluated the scans of 121 healthy participants between the ages of 4 and 40 years, the period of maturation and improvement in the brain during which the major connectivity changes can be seen. The work is part of the EPSRC-funded Human Green Brain project, which examines human brain development.
Using a non-invasive technique called diffusion tensor imaging – a special measurement protocol for Magnetic Resonance Imaging (MRI) scanners – they demonstrated that, overall, fibres are pruned during this period.
However, they found that not all projections (long-range connections) between brain regions are affected to the same extent; changes were influenced differently depending on the types of connections.
The projections that are preserved are short-cuts that quickly link different processing modules, e.g. those for vision and sound, and allow fast information transfer and synchronous processing. Changes in these connections have been found in many developmental brain disorders, including autism, epilepsy and schizophrenia.
The researchers have demonstrated for the first time that the loss of white matter fibres between brain regions is a highly selective process – a phenomenon they call preferential detachment. They show that connections between distant brain regions, between brain hemispheres, and between processing modules lose fewer nerve fibres during brain maturation than expected. The researchers say this may explain how we retain a stable brain network during brain maturation.
Commenting on the fact that these changes occurred earlier in females than in males, Ms Sol Lim explains: “The loss of connectivity during brain development can actually help to improve brain function by reorganizing the network more efficiently. Say that, instead of asking many people at random, you ask a couple of people who have lived in the area for a long time: that is the most efficient way to learn your way around. In a similar way, reducing some projections in the brain helps us to focus on essential information.”
Measuring changes in certain proteins — called biomarkers — in people with amyotrophic lateral sclerosis may better predict the progression of the disease, according to scientists at Penn State College of Medicine.
ALS, often referred to as Lou Gehrig’s disease, is a neurological disease in which the brain loses its ability to control movement as motor neurons degenerate. The course of the disease varies, with survival ranging from months to decades.
"The cause of most cases of ALS remains unknown," said James Connor, Distinguished Professor of Neurosurgery, Neural and Behavioral Sciences and Pediatrics. "Although several genetic and environmental factors have been identified, each accounts for only a fraction of the total cases of ALS."
This clinical variation in patients presents challenges in terms of managing the disease and developing new treatments. Finding relevant biomarkers, which are objective measures that reflect changes in biological processes or reactions to treatments, may help address these challenges.
The project was led by Xiaowei Su, an M.D./Ph.D. student in Connor’s laboratory, in collaboration with Zachary Simmons, director of the Penn State Hershey ALS Clinic and Research Center. Su studied plasma and cerebrospinal fluid samples previously collected from patients undergoing diagnostic evaluation who were later identified as having ALS. The analysis shows that using multiple biomarkers to predict progression is not only mathematically possible, it improves upon methods that use single biomarkers.
Statistical models analyzing plasma used seven relevant biomarkers and had reasonable ability to predict total disease duration. For example, higher levels of the protein IL-10 predict a longer disease duration; IL-10 is an anti-inflammatory molecule, suggesting that lower levels of inflammation are associated with a longer disease duration.
The researchers identified six biomarkers for cerebrospinal fluid. For example, higher levels of G-CSF — a growth factor known to have protective effects on motor neurons, the cells that die in ALS — predict a longer disease duration.
Perhaps most importantly, the results suggest that a combination of biomarkers from both plasma and cerebrospinal fluid better predict disease duration.
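As a rough illustration of why combining biomarkers can outperform single ones, the sketch below fits disease duration by ordinary least squares, first on one marker and then on two, and checks that the richer model fits the data at least as well. All numbers are invented, and plain OLS is only a stand-in for the study's actual statistical models.

```python
# Toy illustration of multi-biomarker prediction: fit disease duration
# with ordinary least squares on one biomarker, then on two, and compare
# the fit. Biomarker values and durations below are invented numbers.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols_sse(X, y):
    """Fit y ~ intercept + X by least squares; return sum of squared errors."""
    rows = [[1.0] + list(r) for r in X]
    k = len(rows[0])
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    beta = solve(XtX, Xty)
    return sum((yi - sum(b * v for b, v in zip(beta, r))) ** 2
               for r, yi in zip(rows, y))

# Toy data: duration (months) loosely tracks one marker, with a second
# marker explaining some of the remaining spread.
marker_a = [1.0, 2.0, 3.0, 4.0, 5.0]
marker_b = [0.5, 1.5, 1.0, 2.5, 2.0]
months   = [14.0, 22.0, 26.0, 38.0, 40.0]

sse_one = ols_sse([[a] for a in marker_a], months)
sse_two = ols_sse(list(zip(marker_a, marker_b)), months)
assert sse_two <= sse_one  # adding a biomarker never worsens the OLS fit
```

On training data, adding a predictor can only reduce the residual error of a least-squares fit; whether the extra biomarker genuinely improves prognosis is exactly what the larger replication studies mentioned in the article would need to test.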
While the size of this study is small, the ability of the specific biomarkers used to predict prognosis suggests that the approach holds promise.
"The results argue for the usefulness of researching this approach for ALS both in terms of predicting disease progression and in terms of determining the impact of therapeutic strategies," Connor said. "The results present a compelling starting point for the use of this method in larger studies and provide insights for novel therapeutic targets."
Imagine kicking a cocaine addiction by simply popping a pill that alters the way your brain processes chemical addiction. New research from the University of Pittsburgh suggests that a method of biologically manipulating certain neurocircuits could lead to a pharmacological approach that would weaken post-withdrawal cocaine cravings. The findings have been published in Nature Neuroscience.

Researchers led by Pitt neuroscience professor Yan Dong used rat models to examine the effects of cocaine addiction and withdrawal on nerve cells in the nucleus accumbens, a small region in the brain that is commonly associated with reward, emotion, motivation, and addiction. Specifically, they investigated the roles of synapses—the structures at the ends of nerve cells that relay signals.
When an individual uses cocaine, some immature synapses are generated, which are called “silent synapses” because they send few signals under normal physiological conditions. After that individual quits using cocaine, these “silent synapses” go through a maturation phase and acquire the ability to send signals. Once they can send signals, the synapses will send craving signals for cocaine if the individual is exposed to cues that previously led him or her to use the drug.
The researchers hypothesized that if they could reverse the maturation of the synapses, the synapses would remain silent, thus rendering them unable to send craving signals. They examined a chemical receptor known as CP-AMPAR that is essential for the maturation of the synapses. In their experiments, the synapses reverted to their silent states when the receptor was removed.
“Reversing the maturation process prevents the intensification process of cocaine craving,” said Dong, the study’s corresponding author and assistant professor of neuroscience in Pitt’s Kenneth P. Dietrich School of Arts and Sciences. “We are now developing strategies to maintain the ‘reversal’ effects. Our goal is to develop biological and pharmacological strategies to produce long-lasting de-maturation of cocaine-generated silent synapses.”
For some cancer patients, the mental fogginess that develops with chemotherapy lingers long after treatment ends. Now research in breast cancer patients may offer an explanation.

Patients who experience “chemobrain” following treatment for breast cancer show disruptions in brain networks that are not present in patients who do not report cognitive difficulties, according to researchers at Washington University School of Medicine in St. Louis.
Results of the small study were reported Thursday, Dec. 12 at a poster presentation at the San Antonio Breast Cancer Symposium.
According to the researchers, many breast cancer patients who receive chemotherapy report long-term problems with memory, attention, learning, visual-spatial skills and other forms of information processing. The brain mechanisms contributing to these difficulties are poorly understood.
The investigators used an imaging technique called resting state functional-connectivity magnetic resonance imaging (rs-fcMRI) to assess the wiring among regions of the brain in 28 patients treated at Siteman Cancer Center at Barnes-Jewish Hospital and Washington University. Fifteen patients reported they were “extremely” or “strongly” affected by cognitive difficulties. The remaining 13 reported no cognitive impairment.
The imaging studies suggest that standard chemotherapy given to breast cancer patients may alter connectivity in brain networks, especially in the frontal parietal control regions responsible for executive function, attention and decision-making.
“Chemobrain is most likely a global phenomenon in the brain, but a set of regions involved in executive control, called the frontal-parietal network, is perhaps the most affected brain system,” said Jay F. Piccirillo, MD, professor of otolaryngology and a member of the research team with expertise in the use of brain imaging to study tinnitus, or phantom noise. “We’re confirming previous studies that also have shown this. And we’re developing a solid multidisciplinary working group at Washington University to determine how we can help these women.”
Other studies also have used neuroimaging techniques to observe the neural disruptions associated with Alzheimer’s disease, depression and stroke. Washington University researchers are beginning to investigate whether cancer patients experiencing chemobrain may benefit from therapies similar to those that help patients with other cognitive disorders.
New research from the Norwegian University of Science and Technology shows that if you want to be good at math, you have to practice all the different kinds of math.

What makes someone good at math? A love of numbers, perhaps, but a willingness to practice, too. And even if you are good at one specific type of math, you can’t trust your innate ability enough to skip practicing the other types.
New research at the Norwegian University of Science and Technology (NTNU) in Trondheim could have an effect on how math is taught: if you want to be really good at all types of math, you need to practice them all, because you can’t trust natural talent to do most of the job for you.
This might seem obvious to some, but it goes against the traditional view that if you are good at math, it is a skill that you are simply born with.
Professor Hermundur Sigmundsson of the Department of Psychology is one of three researchers involved in the project. The results have been published in Psychological Reports.
The numbers
The researchers tested the math skills of 70 Norwegian fifth graders, aged 10.5 years on average. Their results suggest that it is important to practice every single kind of math subject to be good at all of them, and that these skills aren’t something you are born with.
“We found support for a task specificity hypothesis. You become good at exactly what you practice,” Sigmundsson says.
Nine types of math tasks were tested, from normal addition and subtraction, both orally and in writing, to oral multiplication and understanding the clock and the calendar.
“Our study shows little correlation between (being good at) the nine different mathematical skills,” Sigmundsson said. “For instance, there is little correlation between being able to solve an ordinary addition problem in the form of ‘23 + 67’ and addition in the form of a word problem.”
This example might raise a few eyebrows. Perhaps basic math is not a problem for the student, but the reading itself is. Up to 20 per cent of Norwegian boys in secondary school have problems with reading.
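Task-specificity claims of this kind rest on low pairwise correlations between skill scores. As a rough sketch with simulated data (the study’s actual scores are not reproduced here), the correlation matrix for nine task scores could be computed like this:

```python
import numpy as np

# Hypothetical illustration with simulated scores, not the study's data:
# 70 pupils x 9 math tasks, drawn independently so skills are task-specific.
rng = np.random.default_rng(0)
scores = rng.normal(loc=50, scale=10, size=(70, 9))  # rows = pupils

# Pairwise Pearson correlations between the nine skills.
corr = np.corrcoef(scores, rowvar=False)

# With task-specific (independent) skills, off-diagonal entries hover near 0.
off_diag = corr[~np.eye(9, dtype=bool)]
mean_abs_r = float(np.abs(off_diag).mean())
```

With real data, consistently small off-diagonal correlations would support the task-specificity hypothesis, while large ones would instead point to a general underlying ability.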
Sigmundsson also finds support in everyday examples.
“Some students will be good at geometry, but not so good at algebra,” he says.
If that is the case they have to practice more algebra, which is the area where most students in secondary school have problems.
“At the same time this means there is hope for some students. Some just can’t be good at all types of math, but at least they can be good at geometry, for example,” he says.
It is this finding that might in the end help change the way math is taught.
Support in neurology
The fact that you are good at precisely what you practice is probably due to the fact that different kinds of practice activate different neural connections.
The results can also be transferred to other areas. The football player who practices hitting the goal from 25 yards with a perfectly placed shot will become good at exactly this. But she is not necessarily good at tackling or reading the game.
“This is also supported by new insights in neurology. With practice you develop specific neural connections,” says Sigmundsson.
Team at IST Austria examines synaptic mechanisms of rhythmic brain waves • Achievement possible through custom-design tools developed in collaboration with the institute’s Miba machine shop

How information is processed and encoded in the brain is a central question in neuroscience, as it is essential for higher cognitive functions such as learning and memory. Theta-gamma oscillations are “brain waves” observed in behaving rats in the hippocampus, a brain region involved in learning and memory. In rodents, theta-gamma oscillations are associated with information processing during exploration and spatial navigation. However, the underlying synaptic mechanisms have so far remained unclear. In research published this week in the journal Neuron, postdoc Alejandro Pernía-Andrade and Professor Peter Jonas, both at the Institute of Science and Technology Austria (IST Austria), discovered the synaptic mechanisms underlying oscillations in the dentate gyrus (the main entrance of the hippocampus). Furthermore, the researchers suggest a role for these oscillations in the coding of information by the dentate gyrus principal neurons. Thus, these findings contribute to a better understanding of how information is processed in the brain.
Brain oscillations are, in fact, rhythmic changes in voltage in the extracellular space, referred to as electrical brain signals associated with the processing of information. These electrical signals are similar to those seen in electro-encephalographic recordings (EEG) in humans. Pernía-Andrade and Jonas observed these oscillations in a brain region called the hippocampus in behaving rats, and recorded oscillations occurring in this area using extracellular probes. To understand how oscillations are generated and which synaptic events trigger these oscillations, the researchers looked at synaptic transmission in granule cells (principal cells at the main entrance of the hippocampus) from both the extracellular (oscillations) and the intracellular perspectives (synaptic currents and neuronal firing), and then correlated the two. They discovered that excitatory and inhibitory synaptic signals contributed to different frequencies of oscillations, with excitation from the entorhinal cortex generating theta oscillations and inhibition by local dentate gyrus interneurons generating gamma oscillations. Together, excitation and inhibition provide the rhythmic signals of oscillations. It has been speculated that oscillations may help the dentate gyrus to encode information by acting as reference signals in temporal coding. Pernía-Andrade and Jonas now show that granule cell neurons send signals only at specific times in the cycle of oscillations. This so-called “phase locking” is necessary if oscillations are to function as reference signals in temporal coding.
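The “phase locking” measure can be made concrete with a toy sketch. This is an illustrative analysis under assumed parameters, not the paper’s actual pipeline: hypothetical spike times are compared with the instantaneous phase of a synthetic 40 Hz “gamma” signal, and locking is summarized by the mean resultant vector length (1 = perfect locking, near 0 = spikes at random phases).

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
lfp = np.sin(2 * np.pi * 40 * t)  # toy 40 Hz "gamma" field potential

# Instantaneous phase from the analytic signal.
phase = np.angle(hilbert(lfp))

# Hypothetical granule-cell spikes arriving once per gamma cycle.
spike_times = np.arange(0.0125, 2.0, 1 / 40)
spike_phases = phase[(spike_times * fs).astype(int)]

# Mean resultant vector length of the spike phases.
plv = np.abs(np.mean(np.exp(1j * spike_phases)))
```

For these perfectly locked toy spikes, `plv` comes out near 1; shuffling the spike times toward random phases would drive it toward 0.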
The precise, high-resolution recording from granule cells necessary for these discoveries was possible only through technological innovations by Pernía-Andrade and Jonas, as previously no equipment was available to record synaptic signals in active rats at such high resolution. These innovations are the result of a collaboration with the Miba machine shop, IST Austria’s electrical and mechanical SSU (Scientific Service Unit). Adapting commercially available equipment and custom-designing tools, Pernía-Andrade, Jonas and Todor Asenov, manager of the Miba machine shop, produced the first tools for precise biophysical analysis in active rats. This research is therefore not only a scientific advance but also represents significant technological and conceptual progress in the quest to understand neuronal behavior under natural conditions.
TAU researchers find unresponsive patients’ brains may recognize photographs of their family and friends

Patients in a vegetative state are awake, breathe on their own, and seem to go in and out of sleep. But they do not respond to what is happening around them and exhibit no signs of conscious awareness. With communication impossible, friends and family are left wondering if the patients even know they are there.
Now, using functional magnetic resonance imaging (fMRI), Dr. Haggai Sharon and Dr. Yotam Pasternak of Tel Aviv University’s Functional Brain Center and Sackler Faculty of Medicine and the Tel Aviv Sourasky Medical Center have shown that the brains of patients in a vegetative state emotionally react to photographs of people they know personally as though they recognize them.
"We showed that patients in a vegetative state can react differently to different stimuli in the environment depending on their emotional value," said Dr. Sharon. "It’s not a generic thing; it’s personal and autobiographical. We engaged the person, the individual, inside the patient."
The findings, published in PLOS ONE, deepen our understanding of the vegetative state and may offer hope for better care and the development of novel treatments. Researchers from TAU’s School of Psychological Sciences, Department of Neurology, and Sagol School of Neuroscience and the Loewenstein Hospital in Raanana contributed to the research.
Talking to the brain
For many years, patients in a vegetative state were believed to have no awareness of self or environment. But in recent years, doctors have made use of fMRI to examine brain activity in such patients. They have found that some patients in a vegetative state can perform complex cognitive tasks on command, like imagining a physical activity such as playing tennis, or, in one case, even answering yes-or-no questions. But these cases are rare and don’t provide any indication as to whether patients are having personal emotional experiences in such a state.
To gain insight into “what it feels like to be in a vegetative state,” the researchers worked with four patients in a persistent (defined as “month-long”) or permanent (persisting for more than three months) vegetative state. They showed them photographs of people they did and did not personally know, then gauged the patients’ reactions using fMRI, which measures blood flow in the brain to detect areas of neurological activity in real time. In response to all the photographs, a region specific to facial recognition was activated in the patients’ brains, indicating that their brains had correctly identified that they were looking at faces.
But in response to the photographs of close family members and friends, brain regions involved in emotional significance and autobiographical information were also activated in the patients’ brains. In other words, the patients reacted with activations of brain centers involved in processing emotion, as though they knew the people in the photographs. The results suggest patients in a vegetative state can register and categorize complex visual information and connect it to memories – a groundbreaking finding.
The ghost in the machine
However, the researchers could not be sure if the patients were conscious of their emotions or just reacting spontaneously. So they then verbally asked the patients to imagine their parents’ faces. Surprisingly, one patient, a 60-year-old kindergarten teacher who was hit by a car while crossing the street, exhibited complex brain activity in the face- and emotion-specific brain regions, identical to brain activity seen in healthy people. The researchers say her response is the strongest evidence yet that vegetative-state patients can be “emotionally aware.” A second patient, a 23-year-old woman, exhibited activity just in the emotion-specific brain regions. (Significantly, both patients woke up within two months of the tests. They did not remember being in a vegetative state.)
"This experiment, a first of its kind, demonstrates that some vegetative patients may not only possess emotional awareness of the environment but also experience emotional awareness driven by internal processes, such as images," said Dr. Sharon.
Research focused on the “emotional awareness” of patients in a vegetative state is only a few years old. The researchers hope their work will eventually contribute to improved care and treatment. They have also begun working with patients in a minimally conscious state to better understand how regions of the brain interact in response to familiar cues. Emotions, they say, could help unlock the secrets of consciousness.
A faultily formed memory sounds like hitting random notes on a keyboard while a proper one sounds more like a song, scientists say.

When they turned off a major switch for learning and memory, brain cells communicated, but the relationship was superficial, said Dr. Joe Tsien, neuroscientist at the Medical College of Georgia at Georgia Regents University and Co-Director of the GRU Brain & Behavior Discovery Institute.
“We have begun to crack the neural code, which allows us to look in real time at how thoughts happen and how memories are made,” Tsien said. “That has enabled us to understand for the first time how and whether the right keys are struck at the right time and in the right place and manner to make the beautiful sound of coherent memories and to compare what happens when a key element is missing.”
With the NMDA receptor intact, chatter reverberates, associations are made and helpful memories – like how touching a hot stove results in a burn – are easily retrieved.
“You see a face and think of a name, you see your office, and you think you need to work; everything is associative,” said Tsien, corresponding author of the study in the journal PLOS ONE. “But in mice lacking an NMDA receptor, you can tell the memory patterns are dull and dissociated.”
Using the century-old Pavlovian conditioning model that first showed how repetition creates association, they found that mice lacking a functioning NMDA receptor in the hippocampus, the brain’s center of learning and memory, could not recollect even something fearful.
When they played a tone, followed 20 seconds later by a mild foot shock, normal mice quickly made the association, down to the timing. The connection essentially never registered with mice lacking the NMDA receptor.
[Figure: a healthy brain recalling memories vs. an amnesic brain recalling contextual memories]
“They form the initial patterns, but don’t rehearse them,” said Tsien. “Their tones are flat, the association is poor, while everything we register in the healthy brain is associative.” To illustrate just how flat, Postdoctoral Fellow Hui Kuang assigned musical notes to the memory activity of each, which resulted in random noise by the NMDA knockout mice compared to a dynamic rhythm from normal mice.
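The note-assignment trick can be sketched as follows. This is a hypothetical illustration (the study’s actual sonification procedure is not described in detail here): each neuron’s firing rate is quantized onto a musical scale, so a rehearsed, repeating activity pattern yields a repeating motif, while dissociated, random activity yields random notes.

```python
import random

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI note numbers, one octave

def to_notes(firing_rates, max_rate=50.0):
    """Quantize firing rates (in Hz, assumed range 0-50) onto the scale."""
    notes = []
    for rate in firing_rates:
        idx = min(int(rate / max_rate * len(C_MAJOR)), len(C_MAJOR) - 1)
        notes.append(C_MAJOR[idx])
    return notes

# "Healthy" pattern: a rehearsed motif that repeats.
healthy = to_notes([5, 15, 25, 35] * 4)

# "Knockout" pattern: random rates come out as random notes.
random.seed(1)
knockout = to_notes([random.uniform(0, 50) for _ in range(16)])
```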
“By knowing what these patterns look like and what they mean, you can use this signature to measure, for example, during aging, why we begin to lose memory and to identify and test drugs that are truly effective at aiding memory,” Tsien said.
“You can tell whether there is an issue with reverberation, whether your brain is repeating what you need to remember, or repeats it but somehow stores it badly, so it’s not associated with the right things. This study has revealed a lot of fascinating details about what neuroscientists call the brain’s neural code,” Tsien said.
He wants to look at how aging affects these processes as a next step. The research team also is looking at Doogie, a mouse genetically bred by Tsien and his team in 1999 to be exceptionally smart, to see if they can also learn more about how super memories are made and what they look like.
This ability to decode how and what the brain is remembering, should one day help physicians better assess and treat conditions such as Alzheimer’s and schizophrenia, Tsien said. They may find that some answers are already out there, such as drugs that boost reverberation, or a stimulant like caffeine to help retrieve a memory, Tsien said.
His team first reported decoding brain cell conversations as memories were formed and recalled in PLOS ONE in 2009. As with the new study, they used a computational algorithm to translate the neuronal conversations into some of the first pictures of what memories look like.
The act of laughing at a joke is the result of a two-stage process in the brain, first detecting an incongruity before then resolving it with an expression of mirth. The brain actions involved in understanding humour differ between young boys and girls. These are the conclusions reached by a US-based scientist supported by the Swiss National Science Foundation.

Since science has demonstrated that animals are also capable of planning into the future, the once deep cleft between the brain capacities of humans and animals is rapidly disappearing. Fortunately, we can still claim humour as our unique selling point. This makes it even more astonishing that researchers have considered this attribute only fleetingly (and have spent much more time on negative emotions such as fear), write the Swiss neuroscientist Pascal Vrticka and his US colleagues at Stanford University in the journal “Nature Reviews Neuroscience”.
Strangely cheerful feelings
In their recently published article, the researchers demonstrate that, while laughter at a joke requires activity in many different areas of the brain, just two separate elements can be identified among the complex patterns of activity. In the first part, the brain detects a logical incongruity, which, in the second part, it proceeds to resolve. The ensuing feeling of cheerfulness arises from a brain activity that can be clearly differentiated from that of other positive emotions.
Moreover, in the study of 22 children aged between six and thirteen, the research team led by Vrticka showed that sex-specific differences in the processing of humour are formed early on in life. The researchers recorded the children’s brain activity while they were enjoying film clips that were either funny – slapstick home video – or entertaining – such as clips of children break-dancing. On average, the girls’ brains responded more to the funny scenes, while the boys showed greater reaction to the entertaining clips.
Benefits of improved understanding
Vrticka speculates that these sex-based differences could play a role in helping women to select a suitable (and humorous) mate. Aside from this, humour also plays a key role in psychological health. This is demonstrated, among other things, by the fact that adults with psychological disorders such as autism or depression often show altered humour processing and respond less markedly to humour than people who do not have these disorders. Vrticka believes that an improved understanding of the processes that take place in our brain when we enjoy the effects of an amusing joke could be of great benefit in the development of treatments.
A study in mice shows a breakdown of the brain’s blood vessels may amplify or cause problems associated with Alzheimer’s disease. The results published in Nature Communications suggest that blood vessel cells called pericytes may provide novel targets for treatments and diagnoses.

“This study helps show how the brain’s vascular system may contribute to the development of Alzheimer’s disease,” said study leader Berislav V. Zlokovic, M.D., Ph.D., director of the Zilkha Neurogenetic Institute at the Keck School of Medicine of the University of Southern California, Los Angeles. The study was co-funded by the National Institute of Neurological Diseases and Stroke (NINDS) and the National Institute on Aging (NIA), parts of the National Institutes of Health.
Alzheimer’s disease is the leading cause of dementia. It is an age-related disease that gradually erodes a person’s memory, thinking, and ability to perform everyday tasks. Brains from Alzheimer’s patients typically have abnormally high levels of plaques made up of accumulations of beta-amyloid protein next to brain cells, tau protein that clumps together to form neurofibrillary tangles inside neurons, and extensive neuron loss.
Vascular dementias, the second leading cause of dementia, are a diverse group of brain disorders caused by a range of blood vessel problems. Brains from Alzheimer’s patients often show evidence of vascular disease, including ischemic stroke, small hemorrhages, and diffuse white matter disease, plus a buildup of beta-amyloid protein in vessel walls. Furthermore, previous studies suggest that APOE4, a genetic risk factor for Alzheimer’s disease, is linked to brain blood vessel health and integrity.
“This study may provide a better understanding of the overlap between Alzheimer’s disease and vascular dementia,” said Roderick Corriveau, Ph.D., a program director at NINDS.
One hypothesis about Alzheimer’s disease states that increases in beta-amyloid lead to nerve cell damage. This is supported by genetic studies that link familial forms of the disease to mutations in amyloid precursor protein (APP), the larger protein from which plaque-forming beta-amyloid molecules are derived. However, previous studies in mice showed that increased beta-amyloid levels reproduce only some of the problems associated with Alzheimer’s. The animals have memory problems, beta-amyloid plaques in the brain and vascular damage, but none of the neurofibrillary tangles and neuron loss that are hallmarks of the disease.
In this study, the researchers show that pericytes may be a key to whether increased beta-amyloid leads to tangles and neuron loss.
Pericytes are cells that surround the outside of blood vessels. Many are found in a brain plumbing system, called the blood-brain barrier. It is a network that exquisitely controls the movement of cells and molecules between the blood and the interstitial fluid that surrounds the brain’s nerve cells. Pericytes work with other blood-brain barrier cells to transport nutrients and waste molecules between the blood and the interstitial brain fluid.
To study how pericytes influence Alzheimer’s disease, Dr. Zlokovic and his colleagues crossbred mice genetically engineered to have a form of APP linked to familial Alzheimer’s with ones that have reduced levels of platelet-derived growth factor beta receptor (PDGFR-beta), a protein known to control pericyte growth and survival. Previous studies showed that PDGFR-beta mutant mice have fewer pericytes than normal, decreased brain blood flow, and damage to the blood-brain barrier.
“Pericytes act like the gatekeepers of the blood-brain barrier,” said Dr. Zlokovic.
Both the APP and PDGFR-beta mutant mice had problems with learning and memory. Crossbreeding the mice slightly enhanced these problems. The mice also had increased beta-amyloid plaque deposition near brain cells and along brain blood vessels. Surprisingly, the brains of the crossbred mice had enhanced neuronal cell death and extensive neurofibrillary tangles in the hippocampus and cerebral cortex, regions that are typically affected during Alzheimer’s.
“Our results suggest that damage to the vascular system may be a critical step in the development of full-blown Alzheimer’s disease pathology,” said Dr. Zlokovic.
Further experiments suggested that pericytes may transport beta-amyloid across the blood-brain barrier into the blood and showed that crossbreeding the mice slowed the rate at which beta-amyloid was cleared away from nerve cells in the brain.
Next, the researchers addressed how beta-amyloid may affect the vascular system. The crossbred mutants had more pericyte death and more damage to the blood-brain barrier than the PDGFR-beta mutant mice, suggesting beta-amyloid may enhance vascular damage. The investigators also confirmed previous findings showing that beta-amyloid accumulation leads to pericyte death.
Dr. Zlokovic and his colleagues concluded that their results support a two-hit vascular hypothesis of Alzheimer’s. The hypothesis states that the toxic effects of increased beta-amyloid deposition on pericytes in aged blood vessels leads to a breakdown of the blood-brain barrier and a reduced ability to clear amyloid from the brain. In turn, the progressive accumulation of beta-amyloid in the brain and death of pericytes may become a damaging feedback loop that causes dementia. If true, then pericytes and other blood-brain barrier cells may be new therapeutic targets for treating Alzheimer’s disease.
If you have ever said or done the wrong thing at the wrong time, you should read this. Neuroscientists at The University of Texas Health Science Center at Houston (UTHealth) and the University of California, San Diego, have successfully demonstrated a technique to enhance a form of self-control through a novel form of brain stimulation.

Study participants were asked to perform a simple behavioral task that required the braking/slowing of action – inhibition – in the brain. In each participant, the researchers first identified the specific location for this brake in the prefrontal region of the brain. Next, they increased activity in this brain region using stimulation with brief and imperceptible electrical charges. This led to increased braking – a form of enhanced self-control.
This proof-of-principle study appears in the Dec. 11 issue of The Journal of Neuroscience and its methods may one day be useful for treating attention deficit hyperactivity disorder (ADHD), Tourette’s syndrome and other severe disorders of self-control.
“There is a circuit in the brain for inhibiting or braking responses,” said Nitin Tandon, M.D., the study’s senior author and associate professor in The Vivian L. Smith Department of Neurosurgery at the UTHealth Medical School. “We believe we are the first to show that we can enhance this braking system with brain stimulation.”
A computer stimulated the prefrontal cortex exactly when braking was needed. This was done using electrodes implanted directly on the brain surface.
When the test was repeated with stimulation of a brain region outside the prefrontal cortex, there was no effect on behavior, showing the effect to be specific to the prefrontal braking system.
This was a double-blind study, meaning that participants and scientists did not know when or where the charges were being administered.
The method of electrical stimulation was novel in that it apparently enhanced prefrontal function, whereas other human brain stimulation studies mostly disrupt normal brain activity. This is the first published human study to enhance prefrontal lobe function using direct electrical stimulation, the researchers report.
The study involved four volunteers with epilepsy who agreed to participate while being monitored for seizures at the Mischer Neuroscience Institute at Memorial Hermann-Texas Medical Center (TMC). Stimulation enhanced braking in all four participants.
Tandon has been working on self-control research with researchers at the University of California, San Diego, for five years. “Our daily life is full of occasions when one must inhibit responses. For example, one must stop speaking when it’s inappropriate to the social context and stop oneself from reaching for extra candy,” said Tandon, who is a neurosurgeon with the Mischer Neuroscience Institute at Memorial Hermann-TMC.
The researchers are quick to point out that while their results are promising, they do not yet point to the ability to improve self-control in general. In particular, this study does not show that direct electrical stimulation is a realistic option for treating human self-control disorders such as obsessive-compulsive disorder, Tourette’s syndrome and borderline personality disorder. Notably, direct electrical stimulation requires an invasive surgical procedure, which is now used only for the localization and treatment of severe epilepsy.
A research team from The University of Nottingham has helped uncover a second rare genetic mutation which strongly increases the risk of Alzheimer’s disease in later life.

In an international collaboration, the University’s Translational Cell Sciences Human Genetics research group has pinpointed a rare coding variation in the Phospholipase D3 (PLD3) gene which is more common in people with late-onset Alzheimer’s than non-sufferers.
The discovery is an important milestone on the road to early diagnosis of the disease and eventual improved treatment. Having surveyed the human genome for common variants associated with Alzheimer’s, geneticists are now turning the spotlight on rare mutations which may be even stronger risk factors.
More than 820,000 people in the UK have dementia and the number is rising as the population ages. The condition, of which Alzheimer’s disease is the predominant cause, costs the UK economy £23 billion per year, much more than other diseases like cancer and heart disease.
Nottingham’s genetic experts have been working with long-term partners from Washington University, St Louis, USA and University College, London, to carry out next-generation whole exome sequencing on families where Alzheimer’s affects several members.
Earlier this year the collaboration uncovered the first ever rare genetic mutation implicated in disease risk, linking the TREM2 gene to a higher risk of Alzheimer’s (published in the New England Journal of Medicine). Now, in a new study published today in the international journal Nature, the team reveal that after analysis of the genes of around 2,000 people with Alzheimer’s, a second genetic variation has been found, in the PLD3 gene.
PLD3 influences the processing of amyloid precursor protein, which results in the generation of the characteristic amyloid plaques seen in AD brain tissue, suggesting that it may be a potential therapeutic target.
The international research team used Nottingham’s Alzheimer’s Research UK DNA bank, one of the largest collections of DNA from Alzheimer’s patients, to sequence the entire coding region (the exons) of the PLD3 gene. The results showed several mutations in the gene occurred more frequently in people who had the disease than in non-sufferers. Carriers of PLD3 coding variants showed a two-fold increased risk for the disease.
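A “two-fold increased risk” in a case-control design like this is usually expressed as an odds ratio. A minimal sketch with invented counts (the paper’s actual tables are not reproduced here):

```python
def odds_ratio(cases_exposed, cases_unexposed, controls_exposed, controls_unexposed):
    """Odds ratio from a 2x2 case-control table."""
    return (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)

# Invented counts: variant carriers among ~2,000 cases vs. matched controls,
# chosen so the ratio lands near the reported two-fold risk.
or_carriers = odds_ratio(40, 1960, 20, 1980)
```

With these made-up counts the ratio comes out at roughly 2.02, i.e. carriers have about twice the odds of disease.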
Leading the team at Nottingham, Professor of Human Genomics and Molecular Genetics, Kevin Morgan, said:
“This second crucial discovery has confirmed that this latest scientific approach does deliver, it is able to find these clues. However, it also implies that there are many more AD-significant variations out there, and before we can use it for diagnosis we need to find all of the other genetic variations involved in Alzheimer’s too.
“Our research is forming the basis of potential diagnostics later on and more importantly it shows pathways that can be diagnostic targets which could lead to therapeutic interventions in the future.
“The next step will be to examine how this particular rare gene variant functions in the cell and see if it can be targeted, to see if there are any benefits to finding out how this gene operates in both normal and diseased cells. If we can do this, we may be able eventually to correct the defect with drug therapy. Here in Nottingham we will keep looking for more rare gene variations.
“Even if we could eventually slow or halt the progress of the disease with new drugs rather than curing it completely, the benefits would be huge in terms of the real impact on patients’ lives and also in vast savings to the health economy. The group at The University of Nottingham has played a significant role in all of the recent AD genetics discoveries that have highlighted 20 new regions of interest in the genome in the last five years and we will continue to do so into the future.”
Rebecca Wood, Chief Executive of Alzheimer’s Research UK, the UK’s leading dementia research charity, said: “Advances in genetic technology are allowing researchers to understand more than ever about the genetic risk factors for the most common form of Alzheimer’s. This announcement, made just off the back of the G8 dementia research summit, is a timely reminder of the progress that can be made by worldwide collaboration. We know that late-onset Alzheimer’s is caused by a complex mix of risk factors, including both genetic and lifestyle. Understanding all of these risk factors and how they work together to affect someone’s likelihood of developing Alzheimer’s is incredibly important for developing interventions to slow the onset of the disease. Alzheimer’s Research UK is proud to have contributed to this discovery, both by funding researchers and through the establishment of a DNA collection that has been used in many of the recent genetic discoveries in Alzheimer’s.”
Most people – including scientists – assumed we can’t just sniff out danger.
It was thought that we become afraid of an odor – such as leaking gas – only after information about a scary scent is processed by our brain.

But neuroscientists at Rutgers University studying the olfactory system – the sense of smell – in mice have discovered that this fear reaction can occur at the sensory level, even before the brain has the opportunity to interpret that the odor could mean trouble.
In a new study published today in Science, John McGann, associate professor of behavioral and systems neuroscience in the Department of Psychology, and his colleagues report that neurons in the noses of laboratory animals reacted more strongly to threatening odors before the odor message was sent to the brain.
“What is surprising is that we tend to think of learning as something that only happens deep in the brain after conscious awareness,” says McGann, whose laboratory studies the sense of smell. “But now we see how the nervous system can become especially sensitive to threatening stimuli and that fear-learning can affect the signals passing from sensory organs to the brain.”
McGann and students Marley Kass and Michelle Rosenthal made this discovery by using light to observe activity in the brains of genetically engineered mice through a window in each mouse’s skull. They found that mice that received an electric shock simultaneously with a specific odor showed an enhanced response to that smell in the sensory cells of the nose, before the message was delivered to neurons in the brain.
This new research – which indicates that fearful memories can influence the senses – could help to better understand conditions like Post Traumatic Stress Disorder, in which feelings of anxiety and fear exist even though an individual is no longer in danger.
“We know that anxiety disorders like PTSD can sometimes be triggered by smell, like the smell of diesel exhaust for a soldier,” says McGann, who received funding from the National Institute of Mental Health and the National Institute on Deafness and Other Communication Disorders for this research. “What this study does is give us a new way of thinking about how this might happen.”
In their study, the scientists also discovered a heightened sensitivity to odors in the mice traumatized by shock. When these mice smelled the odor associated with the electrical shocks, the amount of neurotransmitter – chemicals that carry communications between nerve cells – released from the olfactory nerve into the brain was as big as if the odor were four times stronger than it actually was.
This created mice whose brains were hypersensitive to the fear-associated odors. Before now, scientists did not think that reward or punishment could influence how the sensory organs process information.
The next step in the continuing research, McGann says, is to determine whether the hypersensitivity to threatening odors can be reversed by using exposure therapy to teach the mice that the electrical shock is no longer associated with a specific odor. This could help develop a better understanding of fear learning that might someday lead to new therapeutic treatments for anxiety disorders in humans, he says.
To evaluate school quality, states require students to take standardized tests; in many cases, passing those tests is necessary to receive a high-school diploma. These high-stakes tests have also been shown to predict students’ future educational attainment and adult employment and income.

Such tests are designed to measure the knowledge and skills that students have acquired in school — what psychologists call “crystallized intelligence.” However, schools whose students have the highest gains on test scores do not produce similar gains in “fluid intelligence” — the ability to analyze abstract problems and think logically — according to a new study from MIT neuroscientists working with education researchers at Harvard University and Brown University.
In a study of nearly 1,400 eighth-graders in the Boston public school system, the researchers found that some schools have successfully raised their students’ scores on the Massachusetts Comprehensive Assessment System (MCAS). However, those schools had almost no effect on students’ performance on tests of fluid intelligence skills, such as working memory capacity, speed of information processing, and ability to solve abstract problems.
“Our original question was this: If you have a school that’s effectively helping kids from lower socioeconomic environments by moving up their scores and improving their chances to go to college, then are those changes accompanied by gains in additional cognitive skills?” says John Gabrieli, the Grover M. Hermann Professor of Health Sciences and Technology, professor of brain and cognitive sciences, and senior author of a forthcoming Psychological Science paper describing the findings.
Instead, the researchers found that educational practices designed to raise knowledge and boost test scores do not improve fluid intelligence. “It doesn’t seem like you get these skills for free in the way that you might hope, just by doing a lot of studying and being a good student,” says Gabrieli, who is also a member of MIT’s McGovern Institute for Brain Research.
Measuring cognition
This study grew out of a larger effort to find measures beyond standardized tests that can predict long-term success for students. “As we started that study, it struck us that there’s been surprisingly little evaluation of different kinds of cognitive abilities and how they relate to educational outcomes,” Gabrieli says.
The data for the Psychological Science study came from students attending traditional, charter, and exam schools in Boston. Some of those schools have had great success improving their students’ MCAS scores — a boost that studies have found also translates to better performance on the SAT and Advanced Placement tests.
The researchers calculated how much of the variation in MCAS scores was due to the school that students attended. For MCAS scores in English, schools accounted for 24 percent of the variation, and they accounted for 34 percent of the math MCAS variation. However, the schools accounted for very little of the variation in fluid cognitive skills — less than 3 percent for all three skills combined.
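The "how much of the variation is due to the school" question is a variance decomposition: compare the spread of school-average scores against the total spread of individual scores. A minimal sketch with made-up data (the school names and scores below are illustrative, not from the study):

```python
from statistics import mean, pvariance

# Hypothetical data: each school mapped to its students' test scores.
scores_by_school = {
    "school_a": [230, 240, 250, 260],
    "school_b": [210, 220, 225, 215],
    "school_c": [245, 255, 265, 250],
}

all_scores = [s for scores in scores_by_school.values() for s in scores]
grand_mean = mean(all_scores)

# Between-school variance: squared deviation of each school's mean
# from the grand mean, weighted by school size.
between = sum(
    len(scores) * (mean(scores) - grand_mean) ** 2
    for scores in scores_by_school.values()
) / len(all_scores)

# Total variance of all individual scores.
total = pvariance(all_scores)

# Fraction of score variance attributable to which school a student attends.
school_share = between / total
print(f"schools account for {school_share:.0%} of score variance")
```

The study's figures (24 percent for English, 34 percent for math, under 3 percent for fluid skills) are this kind of ratio, estimated with more careful statistical controls than this toy decomposition.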
In one example of a test of fluid reasoning, students were asked to choose which of six pictures completed the missing pieces of a puzzle — a task requiring integration of information such as shape, pattern, and orientation.
“It’s not always clear what dimensions you have to pay attention to to get the problem correct. That’s why we call it fluid, because it’s the application of reasoning skills in novel contexts,” says Amy Finn, an MIT postdoc and lead author of the paper.
Even stronger evidence came from a comparison of about 200 students who had entered a lottery for admittance to a handful of Boston’s oversubscribed charter schools, many of which achieve strong improvement in MCAS scores. The researchers found that students who were randomly selected to attend high-performing charter schools did significantly better on the math MCAS than those who were not chosen, but there was no corresponding increase in fluid intelligence scores.
However, the researchers say their study is not about comparing charter schools and district schools. Rather, the study showed that while schools of both types varied in their impact on test scores, they did not vary in their impact on fluid cognitive skills.
The researchers plan to continue tracking these students, who are now in 10th grade, to see how their academic performance and other life outcomes evolve. They have also begun to participate in a new study of high school seniors to track how their standardized test scores and cognitive abilities influence their rates of college attendance and graduation.
Implications for education
Gabrieli notes that the study should not be interpreted as critical of schools that are improving their students’ MCAS scores. “It’s valuable to push up the crystallized abilities, because if you can do more math, if you can read a paragraph and answer comprehension questions, all those things are positive,” he says.
He hopes that the findings will encourage educational policymakers to consider adding practices that enhance cognitive skills. Although many studies have shown that students’ fluid cognitive skills predict their academic performance, such skills are seldom explicitly taught.
“Schools can improve crystallized abilities, and now it might be a priority to see if there are some methods for enhancing the fluid ones as well,” Gabrieli says.
Some studies have found that educational programs that focus on improving memory, attention, executive function, and inductive reasoning can boost fluid intelligence, but there is still much disagreement over what programs are consistently effective.
Scientists who fed a cocktail of key amino acids to mice improved sleep disturbances caused by brain injuries in the animals. These new findings suggest a potential dietary treatment for millions of people affected by traumatic brain injury (TBI)—a condition that is currently untreatable.

“If this type of dietary treatment is proved to help patients recover function after traumatic brain injury, it could become an important public health benefit,” said study co-leader Akiva S. Cohen, Ph.D., a neuroscientist at The Children’s Hospital of Philadelphia (CHOP).
Cohen is the co-senior author of the animal TBI study appearing today in Science Translational Medicine. He collaborated with two experts in sleep medicine: co-senior author Allan I. Pack, M.D., Ph.D., director of the Center for Sleep and Circadian Neurobiology in the Perelman School of Medicine at the University of Pennsylvania; and first author Miranda M. Lim, M.D., Ph.D., formerly at the Penn Sleep Center, and now on faculty at the Portland VA Medical Center and Oregon Health and Science University.
Every year in the U.S., an estimated 2 million people suffer a TBI, making it a major cause of disability across all age groups. Although 75 percent of reported TBI cases are milder forms such as concussion, even concussion may cause chronic neurological impairments, including cognitive, motor and sleep problems.
“Sleep disturbances, such as excessive daytime sleepiness and nighttime insomnia, disrupt quality of life and can delay cognitive recovery in patients with TBI,” said Lim, a neurologist and sleep medicine specialist. Although physicians can relieve the dangerous swelling that occurs after a severe TBI, there are no existing treatments to address the underlying brain damage associated with neurobehavioral problems such as impaired memory, learning and sleep patterns.
Cohen and his team investigated the use of selected branched-chain amino acids (BCAA)—precursors of the neurotransmitters glutamate and GABA, which are involved in communication among neurons and help to maintain a normal balance of brain activity. His research team previously showed that a BCAA diet restored cognitive ability in brain-injured mice. The current study was the first to analyze sleep-wake patterns in an animal model.
Comparing mice with experimentally induced mild TBI to uninjured mice, the scientists found the injured mice were unable to stay awake for long periods of time. The injured mice had lower activity among orexin neurons, which help to maintain the animals’ wakefulness. This is similar to results in human studies showing decreased orexin levels in the spinal fluid after TBI.
In the current study, the dietary therapy restored the orexin neurons to a normal activity level and improved wakefulness in the brain-injured mice. EEG recordings also showed improved brain wave patterns among the mice that consumed the BCAA diet.
“These results in an animal model provide a proof-of-principle for investigating this dietary intervention as a treatment for TBI patients,” said Cohen. “If a dietary supplement can improve sleeping and waking patterns as well as cognitive problems, it could help brain-injured patients regain crucial functions.” Cohen cautioned that current evidence does not support TBI patients medicating themselves with commercially available amino acids.
Huntington’s disease is a devastating, incurable disorder that results from the death of certain neurons in the brain. Its symptoms show as progressive changes in behavior and movements.

The neurodegenerative disease is caused by a defect in the huntingtin gene (Htt) that causes an abnormal expansion in a part of DNA, called a CAG codon or triplet that codes for the amino acid glutamine. A healthy version of the Htt gene has between 20 and 23 CAG triplets. The mutational expansion in Htt can lead to long repeats of the CAG triplet, resulting in the mutant protein having a long sequence of several glutamine residues called a polyglutamine tract. This CAG triplet expansion in unrelated genes is the root of at least nine neurodegenerative disorders, including Huntington’s disease.
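The mutation described here is a length difference: how many CAG codons appear in an unbroken run within the coding sequence. As an illustration (the sequence below is a hypothetical fragment, not the actual Htt gene), a short sketch that counts the longest run of consecutive CAG codons across all three reading frames:

```python
def longest_cag_run(dna: str) -> int:
    """Longest run of consecutive CAG codons, checked in all three reading frames."""
    best = 0
    for frame in range(3):
        # Split the sequence into codons starting at this frame offset.
        codons = [dna[i:i + 3] for i in range(frame, len(dna) - 2, 3)]
        run = 0
        for codon in codons:
            run = run + 1 if codon == "CAG" else 0
            best = max(best, run)
    return best

# Hypothetical fragment with 22 CAG repeats -- within the 20-23 range
# the article describes as healthy.
fragment = "ATG" + "CAG" * 22 + "CCGCCA"
print(longest_cag_run(fragment))  # prints 22
```

Each CAG codon in the run is translated to one glutamine, so the length of this run is the length of the protein's polyglutamine tract.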
Rohit Pappu, PhD, professor of biomedical engineering at Washington University in St. Louis, and his colleagues in the School of Engineering & Applied Science and in the School of Medicine, are working to understand how expanded polyglutamine tracts form the types of supramolecular structures that are presumed to be toxic to neurons – a feature that polyglutamine expansions share with proteins associated with Alzheimer’s disease and Parkinson’s disease.
In recent work, Pappu and his research team showed that the amino acid sequences on either side of the polyglutamine tract within Htt can act as natural gatekeepers because they control the fundamental ability of polyglutamine tracts to form structures that are implicated in cellular toxicity. The results were published in PNAS Early Edition on Nov. 25.
“These are progressive onset disorders,” Pappu says. “The longer the polyglutamine tract gets, the more severe the disease, and the symptoms worsen with age. Our results are exciting because it means that any success we have in mimicking the effects of naturally occurring gatekeepers would be a significant step forward. And mechanistic studies are important in this regard because they enable us to learn from nature’s own strategies.
“Previous studies from other labs showed that the toxic effects of polyglutamine expansions are tempered by the sequence contexts of polyglutamine tracts in Htt, not just the lengths of the polyglutamine tracts,” Pappu says.
He and his research team focused on understanding the effects of sequence stretches that lie on either side of the polyglutamine tract in Htt. The results show that the N-terminal stretch accelerates the formation of ordered structures that are presumed to be benign to cells, whereas the C-terminal stretch slows the overall transition into structures that are expected to create trouble for cells, suggesting that these naturally occurring sequences behave as gatekeepers.
“It appears that where polyglutamine stretches are of functional importance, nature has ensured that they are flanked by gatekeeping sequences,” Pappu says.
Pappu and his team are now working to find ways to mimic the effects of the N- and C-terminal flanking sequences from Htt. His team is working closely with Marc Diamond, MD, the David Clayson Professor of Neurology at the School of Medicine, to understand how naturally occurring proteins interact with flanking sequences and to see if they can co-opt them to ameliorate the toxic effects of the polyglutamine expansions.
Scientists from Case Western Reserve University and University of Kansas Medical Center have restored behavior—in this case, the ability to reach through a narrow opening and grasp food—using a neural prosthesis in a rat model of brain injury.
Ultimately, the team hopes to develop a device that rapidly and substantially improves function after brain injury in humans. There is no such commercial treatment for the 1.5 million Americans, including soldiers in Afghanistan and Iraq, who suffer traumatic brain injuries (TBI), or the nearly 800,000 stroke victims who suffer weakness or paralysis in the United States, annually.
The prosthesis, called a brain-machine-brain interface, is a closed-loop microelectronic system. It records signals from one part of the brain, processes them in real time, and then bridges the injury by stimulating a second part of the brain that had lost connectivity.
Their work is published online this week in the science journal Proceedings of the National Academy of Sciences.
“If you use the device to couple activity from one part of the brain to another, is it possible to induce recovery from TBI? That’s the core of this investigation,” said Pedram Mohseni, professor of electrical engineering and computer science at Case Western Reserve, who built the brain prosthesis.
“We found that, yes, it is possible to use a closed-loop neural prosthesis to facilitate repair of a brain injury,” he said.
The researchers tested the prosthesis in a rat model of brain injury in the laboratory of Randolph J. Nudo, professor of molecular and integrative physiology at the University of Kansas. Nudo mapped the rat’s brain and developed the model in which anterior and posterior parts of the brain that control the rat’s forelimbs are disconnected.
Atop each animal’s head, the brain-machine-brain interface is a microchip on a circuit board smaller than a quarter connected to microelectrodes implanted in the two brain regions.
The device amplifies signals—neural action potentials—produced by neurons in the anterior part of the brain. An algorithm separates these signals, recorded as brain spike activity, from noise and other artifacts. With each spike detected, the microchip sends a pulse of electric current to stimulate neurons in the posterior part of the brain, artificially connecting the two brain regions.
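The closed-loop logic described here—detect a spike in the recorded signal, then trigger a stimulation pulse—can be sketched as a simple threshold detector. This is an illustrative simplification: the actual device runs a spike-discrimination algorithm on a custom microchip, and the signal values and threshold below are invented.

```python
# Minimal sketch of the closed-loop "record, detect, stimulate" idea.
# Threshold and sample values are illustrative, not from the actual device.

SPIKE_THRESHOLD = 50.0  # microvolts; a real system adapts this to the noise floor

def detect_spikes(samples, threshold=SPIKE_THRESHOLD):
    """Return sample indices where the signal crosses the threshold upward."""
    spikes = []
    above = False
    for i, v in enumerate(samples):
        if v >= threshold and not above:  # rising edge = one detected spike
            spikes.append(i)
        above = v >= threshold
    return spikes

def closed_loop(samples, stimulate):
    """For each spike detected in the recording, trigger one stimulus pulse."""
    for i in detect_spikes(samples):
        stimulate(i)

# Toy recording: baseline noise with two spike-like events.
recording = [2, -3, 1, 80, 85, 4, -1, 0, 60, 3]
pulses = []
closed_loop(recording, pulses.append)
print(pulses)  # [3, 8]
```

The key property this mimics is the one-to-one coupling the article describes: every detected anterior spike produces exactly one posterior stimulation pulse, bridging the severed connection in real time.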
Two weeks after the prosthesis had been implanted and run continuously, the rats using the full closed-loop system had recovered nearly all function lost to the injury, successfully retrieving a food pellet close to 70 percent of the time—as well as normal, uninjured rats. Rats that received random stimuli from the device retrieved fewer than half the pellets, and those that received no stimuli retrieved about a quarter of them.
“A question still to be answered is must the implant be left in place for life?” Mohseni said. “Or can it be removed after two months or six months, if and when new connections have been formed in the brain?”
Brain studies have shown that, during periods of growth, neurons that regularly communicate with each other develop and solidify connections.
Mohseni and Nudo said they need more systematic studies to determine what happens in the brain that leads to restoration of function. They also want to determine if there is an optimal time window after injury in which they must implant the device in order to restore function.