Stroke rehabilitation researchers report improvement in spatial neglect with prism adaptation therapy. This new study supports behavioral classification of patients with spatial neglect as a valuable tool for assigning targeted, effective early rehabilitation. Results of the study, “Presence of motor-intentional aiming deficit predicts functional improvement of spatial neglect with prism adaptation” were published ahead of print in Neurorehabilitation and Neural Repair on December 27, 2013.

The article is authored by Kelly M. Goedert, PhD, of Seton Hall University, Peii Chen, PhD, of Kessler Foundation, Raymond C. Boston, PhD, of the University of Pennsylvania, Anne L. Foundas, MD, of the University of Missouri, and A.M. Barrett, MD, director of Stroke Rehabilitation Research at Kessler Foundation, and chief of Neurorehabilitation Program Innovation at Kessler Institute for Rehabilitation. Drs. Barrett and Chen have faculty appointments at Rutgers New Jersey Medical School.
“Spatial neglect, an under-recognized but disabling disorder, often complicates recovery from right brain stroke,” noted Dr. Barrett. “Our study suggests we need to know what kind of neglect patients have in order to assign treatment.” The research team tested the hypothesis that classifying patients by their spatial neglect profile, i.e., by Where (perceptual-attentional) versus Aiming (motor-intentional) symptoms, would predict response to prism adaptation therapy. Moreover, they hypothesized that patients with Aiming bias would respond better to prism adaptation therapy than those with isolated Where bias.
The study involved 24 patients with right brain stroke who completed 2 weeks of prism adaptation treatment. Participants also completed the Behavioral Inattention Test and Catherine Bergego Scale (CBS) tests of neglect recovery weekly for 6 weeks. Results showed that those with only Aiming deficits improved on the CBS, whereas those with only Where deficits did not improve. Participants with both types of deficits demonstrated intermediate improvement. “These findings suggest that patients with spatial neglect and Aiming deficits may benefit the most from early intervention with prism adaptation therapy,” said Dr. Barrett. “More broadly, classifying spatial deficits using modality-specific measures should be an important consideration of any stroke trial intending to obtain the most valid, applicable, and valuable results for recovery after right brain stroke.”
Reaching for Froot Loops and grabbing Lego pieces to build a tower are different challenges for toddlers. Depending on what they’re trying to do, tots tend to develop handedness for different tasks at different ages, according to new research.

Most people are right-handed. Babies start using their right hand to reach for cereal nuggets by age 1. However, children take until age 4 to show such a preference when building Lego models. The findings, published in this month’s issue of Developmental Psychobiology, imply tendencies to use one hand more than the other emerge depending on the tasks kids confront, rather than their age.
Preference for the right or left hand is, in part, genetic. Prior studies have shown that some of these one-sided tendencies emerge early. Fetuses suck their right thumb more often than their left; newborns on their back turn to the right more frequently. Most children grow up to be right-handed—in part because of these innate, early leanings, scientists believe.
But the timing of when one hand emerges as the dominant one for most tasks remained unclear.
"As a parent and a scientist, I was surprised to find researchers thought 3-year-olds don’t display a hand preference," said neurobiologist Claudia Gonzalez of the University of Lethbridge in Alberta, Canada.
To study how handedness emerged between ages 1 and 5, Gonzalez and her colleagues assigned about 50 tiny participants to a familiar task: grabbing a colorful object or a tasty tidbit. Children ages 1 to 2 picked up Froot Loops or Cheerios to munch at snack time. Four- and 5-year-olds grasped Lego blocks to build a small model. Three-year-old subjects tackled both tasks.
Even the youngest children had strong right-handed leanings when reaching for food, the team found. Three-year-olds were right-handed eaters, but they were just as likely to use their left hand when playing with blocks. The 4- and 5-year-olds used their left hand to hold the base of their model steady, but they manipulated blocks into the correct positions with their other hand—a clear preference for right-handedness.
"There is a developmental milestone between the ages of 3 and 4 when something clicks," Gonzalez said. "Maybe they become more skilled, or they understand the task better."
Until that developmental “click,” the study shows, a child’s hand preference isn’t constant across tasks; it depends on what the child is doing rather than on age alone.
The study “uses a very clever design to get at the question of how handedness varies across tasks,” said Klaus Libertus, an infant development researcher at the University of Pittsburgh. “We did not know handedness is connected to tasks in this way. I would have expected the 3-year-olds to show the same pattern on both tasks, especially since the demands were so similar.”
Developing a hand preference might also correlate with other functions that rely strongly on just one side of the brain, such as language and certain decision-making skills, Gonzalez noted. Preliminary data from children in her lab suggests that when handedness is evident earlier, these other functions also mature more quickly.
Finding the right task to study handedness at different ages will give researchers a firmer grasp on how young brains develop right- or left-handed tendencies, she said.
"You could say hand preference develops before 1, or you could say it doesn’t emerge until age 4—just depending on what task you are looking at," said Gonzalez.
“Good to see you. I’m sorry. It sounds like you’ve had a tough, tough, week.” Spoken by a doctor to a cancer patient, that statement is an example of compassionate behavior observed by a University of Rochester Medical Center team in a new study published by the journal Health Expectations.

Rochester researchers believe they are the first to systematically pinpoint and catalogue compassionate words and actions in doctor-patient conversations. By breaking down the dialogue and studying the context, scientists hope to create a behavioral taxonomy that will guide medical training and education.
“In health care, we believe in being compassionate but the reality is that many of us have a preference for technical and biomedical issues over establishing emotional ties,” said senior investigator Ronald Epstein, M.D., professor of Family Medicine, Psychiatry, Oncology, and Nursing and director of the UR Center for Communication and Disparities Research.
Epstein is a national and international keynote speaker and investigator on mindfulness and communication in medical education.
His team recruited 23 oncologists from a variety of private and hospital-based oncology clinics in the Rochester, N.Y., area. The doctors and their stage III or stage IV cancer patients volunteered to be recorded during routine visits. Researchers then analyzed the 49 audio-recorded encounters that took place between November 2011 and June 2012, and looked for key observable markers of compassion.
In contrast to empathy – another quality that Epstein and his colleagues have studied in the medical community — compassion involves a deeper and more active imagination of the patient’s condition. An important part of this study, therefore, was to identify examples of the three main elements of compassion: recognition of suffering, emotional resonance, and movement towards addressing suffering.
Emotional resonance, or a sense of sharing and connection, was illustrated by this dialogue: Patient: “I should just get a room here.” Oncologist: “Oh, I hope you don’t really feel like you’re spending that much time here.”
Another conversation included this response from a physician to a patient, who complained about a drug patch for pain: “Who wants a patch that makes you drowsy, constipated and fuzzy? I’ll pass, thank you very much.”
Some doctors provided good examples of how they use humor to raise a patient’s spirits without deviating from the seriousness of the situation. In one case, for example, a patient was concerned that he would not be able to drink two liters of barium sulfate in preparation for a CT scan.
Doctor: “If you just get down one little cup it will tell us what’s going on in the stomach. What I tell people when we’re not being recorded is to take a cup and then pour the rest down the toilet and tell them you drank it all (laughter)… Just a creative interpretation of what you are supposed to take.”
Patient: “I love it, I love it. Well, I thank you for that. I’m prepared to do what I’ve got to do to get this right.”
Researchers evaluated tone of voice, animation that conveyed tenderness and understanding, and other ways in which doctors offered reassurance or psychological comfort.
Here’s an instance in which an oncologist encouraged a reluctant patient to follow through with a planned trip to Arizona: “You know, if you decide to do it, break down and allow somebody to meet you at the gates and use a cart or wheelchair to get you to your next gate and things like that. And having just sent my father-in-law off to Hawaii and told him he had to do that, he said no, no, I can get there. Just, it’s okay. Nobody is gonna look at you and say, ‘What’s an able-bodied man doing in a cart?’ Just, it’s okay. It’s part of setting limits.”
Researchers also observed non-verbal communication, such as pauses or sighs at appropriate times, as well as speech features, voice quality (tone, pitch, loudness), and metaphorical language that conveyed particular attitudes and meanings.
Compassion unfolds over time, researchers concluded. During the process, physicians must challenge themselves to stay with a difficult discussion, which opens the door for the patient to admit uncertainty and grieve the loss of normalcy in life.
“It became apparent that compassion is not a quality of a single utterance but rather is made up of presence and engagement that suffuses an entire conversation,” the study said. First author Rachel Cameron, B.A., is a student at the University of Rochester School of Medicine and Dentistry; the audio-recordings were reviewed by a diverse group of medical professionals with backgrounds in literature and linguistics, as well as by palliative care specialists.
People who tell themselves to get excited rather than trying to relax can improve their performance during anxiety-inducing activities such as public speaking and math tests, according to a study published by the American Psychological Association.

“Anxiety is incredibly pervasive. People have a very strong intuition that trying to calm down is the best way to cope with their anxiety, but that can be very difficult and ineffective,” said study author Alison Wood Brooks, PhD, of Harvard Business School. “When people feel anxious and try to calm down, they are thinking about all the things that could go badly. When they are excited, they are thinking about how things could go well.”
Several experiments conducted at Harvard University with college students and members of the local community showed that simple statements about excitement could improve performance during activities that triggered anxiety. The study was published online in APA’s Journal of Experimental Psychology: General®.
In one experiment, 140 participants (63 men and 77 women) were told to prepare a persuasive public speech on why they would be good work partners. To increase anxiety, a researcher videotaped the speeches and said they would be judged by a committee. Before delivering the speech, participants were instructed to say “I am excited” or “I am calm.” The subjects who said they were excited gave longer speeches and were more persuasive, competent and relaxed than those who said they were calm, according to ratings by independent evaluators.
“The way we talk about our feelings has a strong influence on how we actually feel,” said Brooks, an assistant professor of business administration at Harvard Business School.
In another experiment, 188 participants (80 men and 108 women) were given difficult math problems after they read “try to get excited” or “try to remain calm.” A control group didn’t read any statement. Participants in the excited group scored 8 percent higher on average than the calm and control groups, and they reported feeling more confident about their math skills after the test.
In a trial involving karaoke, 113 participants (54 men and 59 women) were randomly assigned to say that they were anxious, excited, calm, angry or sad before singing a popular rock song on a video game console. A control group didn’t make any statement. All of the participants monitored their heart rates using a pulse meter strapped onto a finger to measure their anxiety.
Participants who said they were excited scored an average of 80 percent on the song based on their pitch, rhythm and volume as measured by the video game’s rating system. Those who said they were calm, angry or sad scored an average of 69 percent, compared to 53 percent for those who said they were anxious. Participants who said they were excited also reported feeling more excited and confident in their singing ability.
Since both anxiety and excitement are emotional states characterized by high arousal, it may be easier to view anxiety as excitement rather than trying to calm down to combat performance anxiety, Brooks said.
“When you feel anxious, you’re ruminating too much and focusing on potential threats,” she said. “In those circumstances, people should try to focus on the potential opportunities. It really does pay to be positive, and people should say they are excited. Even if they don’t believe it at first, saying ‘I’m excited’ out loud increases authentic feelings of excitement.”
Ischemic strokes, caused by blood clots that can develop in the brain and cut off blood flow, make up more than 80 percent of strokes suffered in the U.S. annually. To date, the most effective treatment is the clot-dissolving thrombolysis drug tissue plasminogen activator, tPA. But tPA is a far-from-perfect solution, says Andrew Barreto, a neurologist at the University of Texas Health Science Center in Houston. “IV-tPA will help about 30 of 100 patients who receive it within the first 4.5 hours after stroke symptom onset,” Barreto says. “But, many patients are still disabled, so we need better treatments.”

Barreto and some of his colleagues think that ultrasound could be one of those treatments. Ultrasound has been a valuable tool for diagnosing and tracking strokes in the brain for years. Now, a wide variety of new technologies are making it possible for neurosurgeons to use ultrasound waves (sound at frequencies too high for the human ear to detect) not only to identify signs of stroke, such as blood clots in the brain, but also to help treat them.
Barreto was a principal researcher in the recent study of the Clotbust device, a headband-like piece of equipment placed on a patient’s head that uses targeted ultrasound to increase tPA’s effectiveness in breaking up clots in the brain. A preliminary test of the device, which fires 2-MHz pulses of ultrasound from a series of 18 transducers at 5-second intervals, found that it was safe to use in stroke patients. Now, the device is in the midst of effectiveness testing on a group of 830 stroke patients worldwide.
One of the sites involved in testing the device is Swedish Neuroscience Center in Seattle, where chief of neuroscience David Newell notes that preliminary results from the trial were promising. In safety trials, the Clotbust device combined with the thrombolysis drug tPA cleared 40 percent of clots in ischemic strokes in the first two hours after being used. That’s twice as effective as the 20 percent clearance rate usually achieved by tPA alone.
Clotbust isn’t the only tool of its kind being tested at Swedish. Newell and his colleagues are involved in testing three different types of ultrasound technologies for a variety of neurological ailments. Those include a technique devised by Newell in collaboration with EKOS Corporation, a Seattle-area company specializing in ultrasound-emitting catheters, which are designed to travel up a blood vessel and transmit ultrasound from an emitter at the tip to help loosen blood clots. Newell and his colleagues have been testing a modified version of the EkoSonic catheter, which can more easily be placed directly in the brain and used to treat a different type of stroke known as intracerebral hemorrhage (ICH).
Caused by bleeding from ruptured blood vessels deep in the brain, ICH strokes are much harder to treat because of their location. They are also particularly deadly, with a mortality rate north of 50 percent. Even those who survive are likely to be left disabled or facing long roads to recovery. tPA may be effective in treating these strokes as well, breaking up the clots that form around the bleed and allowing fluid to be drained off before it can do lasting harm.
While the effectiveness of tPA in treating ICH is still being studied, Newell and his team used the repurposed EkoSonic catheter to improve delivery of clot-busting drugs to bleed sites deep in the brain, and their early results are promising. In an introductory round of tests on nine patients at Swedish, Newell and his colleagues found that clots accompanying hemorrhagic strokes were cleared three times faster by a combination of ultrasound and tPA than they were by drugs alone. By combining the two techniques, Newell said, he and his team could clear clots from most patients in the first day of treatment. He’s now working with the company that developed the technology on creating a new type of catheter, designed specifically for use within the brain, that combines drug delivery, ultrasound emission, and drainage in one tool.
Neither Clotbust nor the EkoSonic catheter uses ultrasound to physically destroy clots. Instead, the blasts of high-frequency sound produce “a micromechanical action that makes the lytic effect of tPA a lot more effective,” by improving the efficiency with which it is delivered. “Injecting tPA is like putting an ice cube in a drink and waiting for it to melt,” says Newell. “With ultrasound, it’s more akin to creating a snow flurry. The drug binds to more binding sites, and it does so a lot faster.”
That’s not the case in the third ultrasound device being tested at Swedish. The ExAblate Neuro device developed by Israeli company InsighTec uses thousands of beams of ultrasound focused on one spot to create intense heat at a targeted point in the brain. The ExAblate Neuro mimics the effects of a tool used in neurosurgery for years, the gamma knife, which uses highly focused radiation energy to cut out material like tumors or to create lesions that can lessen the effects of diseases like Parkinson’s or epilepsy. In the case of stroke, the Neuro could potentially superheat solidified clots, turning them to more easily cleared liquid.
Since it uses focused ultrasound rather than the dangerous radiation associated with the gamma knife, says Newell, ExAblate has the potential to perform similar surgeries that are more easily repeatable. Current gamma knife surgeries have to get it right the first time, as exposing patients to powerful radiation over and over again can be dangerous. Since ultrasound energy doesn’t carry the same exposure dangers, doctors could potentially do the same sort of treatments in smaller steps without raising concerns over patient health.
All three of these new methods are still in their experimental phases, but each one has the potential to transform—and improve—the way strokes and other ailments in the brain are treated. And that may be only the beginning of the potential for the techniques. “Ultrasound technology represents almost a whole new field in neurosurgery,” said Newell.
Researchers at Penn Medicine report in the December 25 issue of JAMA that a modified form of prolonged exposure therapy – in which patients revisit and recount aloud their trauma-related thoughts, feelings and situations – shows greater success than supportive counseling for treating adolescent PTSD patients who have been sexually abused.

Despite a high prevalence of posttraumatic stress disorder (PTSD) in adolescents, evidence-based treatments like prolonged exposure therapy for PTSD in this population have never been established.
“We hypothesized that prolonged exposure therapy could fill this gap and were eager to test its ability to provide benefit for adolescent patients,” says Edna Foa, PhD, professor of Clinical Psychology in the department of Psychiatry in the Perelman School of Medicine at the University of Pennsylvania, who developed prolonged exposure therapy.
The concern has been that prolonged exposure therapy, while the most established evidence-based treatment for adults with PTSD, could exacerbate PTSD symptoms in adolescent patients who have not mastered the coping skills necessary for this type of exposure to be safely provided.
Adolescence is often a time when children begin to test limits and are in and out of situations, both good and bad – situations that often determine the path their lives take into adulthood.
The six-year (2006-2012) study examined the benefit of a prolonged exposure program called prolonged exposure-A (PE-A), which was modified to suit the developmental stage of adolescents, and compared it with supportive counseling in 61 adolescent girls, ages 13-18, with sexual abuse-related PTSD. In the single-blind randomized clinical trial, 31 received prolonged exposure-A and 30 received supportive counseling.
Each participant received fourteen 60- to 90-minute sessions of either therapy in a community mental health setting. The counselors were familiar with supportive counseling but naïve to PE-A before the study; their PE-A training consisted of a 4-day workshop followed by supervision every second week.
Outcomes were assessed before treatment, at mid-treatment, after treatment, and at three-, six- and 12-month follow-up. During treatment, patients receiving PE-A demonstrated a greater decline in PTSD and depression symptom severity and greater improvement in overall functioning. These differences were maintained throughout the 12-month follow-up period.
“Another key finding of this research was that prolonged exposure therapy can be administered in a community setting by professionals with no prior training in evidence-based treatments and can have a positive impact on this population,” Foa says.
Results also partly explain why the 2009 swine flu virus, and a vaccine against it, led to spikes in the sleep disorder.
As the H1N1 swine flu pandemic swept the world in 2009, China saw a spike in cases of narcolepsy — a mysterious disorder that involves sudden, uncontrollable sleepiness. Meanwhile, in Europe, around 1 in 15,000 children who were given Pandemrix — a now-defunct flu vaccine that contained fragments of the pandemic virus — also developed narcolepsy, a chronic disease.

Immunologist Elizabeth Mellins and narcolepsy researcher Emmanuel Mignot at Stanford University School of Medicine in California and their collaborators have now partly solved the mystery behind these events, while also confirming a longstanding hypothesis that narcolepsy is an autoimmune disease, in which the immune system attacks healthy cells.
Narcolepsy is mostly caused by the gradual loss of neurons that produce hypocretin, a hormone that keeps us awake. Many scientists had suspected that the immune system was responsible, but the Stanford team has found the first direct evidence: a special group of CD4+ T cells (a type of immune cell) that targets hypocretin and is found only in people with narcolepsy.
“Up till now, the idea that narcolepsy was an autoimmune disorder was a very compelling hypothesis, but this is the first direct evidence of autoimmunity,” says Mellins. “I think these cells are a smoking gun.” The study is published today in Science Translational Medicine.
Thomas Scammell, a neurologist at Harvard Medical School in Boston, Massachusetts, says that the results are welcome after “years of modest disappointment”, marked by many failures to find antibodies made by a person’s body against their own hypocretin. “It’s one of the biggest things to happen in the narcolepsy field for some time.”
Loose ends
It is not clear why some people make these T cells and others do not, but genetics may play a part. In earlier work, Mignot showed that 98% of people with narcolepsy have a variant of the gene HLA that is found in only 25% of the general population.
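A quick Bayes' rule calculation shows why the variant can be nearly universal among patients yet nowhere near sufficient to cause the disease. The 98% and 25% figures come from the work described above; the overall prevalence of narcolepsy (roughly 1 in 2,000) is an assumption added for this sketch, not a number from the article.

```python
# Bayes' rule: P(narcolepsy | variant) =
#     P(variant | narcolepsy) * P(narcolepsy) / P(variant)
p_variant_given_narc = 0.98   # from the study: 98% of patients carry the variant
p_variant = 0.25              # carried by 25% of the general population
p_narc = 1 / 2000             # assumed population prevalence (illustrative)

p_narc_given_variant = p_variant_given_narc * p_narc / p_variant
print(f"P(narcolepsy | variant) = {p_narc_given_variant:.3%}")
```

Under these assumptions, carrying the variant raises a person's risk only to about 0.2 percent, consistent with the idea that some additional trigger is required.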
Environmental factors, such as infections, probably matter too. Mellins’ working model is that narcolepsy happens when people with a genetic predisposition, which involves having several narcolepsy-related gene variants, encounter an environmental factor that mimics hypocretin, triggering a response from the immune system. The 2009 H1N1 virus was one such trigger: the team found that these same special CD4+ T cells also recognize a protein from the pandemic H1N1 virus.
Narcolepsy of course was around long before the 2009 pandemic. And since new cases of the disease tend to arise right after winter — following the seasonal peak in flu — it’s possible that other strains or even other viruses are involved, too.
But the results do not fully explain the Pandemrix mystery, because other flu vaccines contained the same proteins but did not lead to a spike in narcolepsy cases. Regardless, Mellins says that it should be possible to avoid repeating the same mistake by ensuring that future flu vaccines do not contain components that resemble hypocretin.
Another loose end is that “they don’t show how these T cells are actually killing the hypocretin neurons”, adds Scammell. “It’s like a murder mystery and we don’t know who the real killer is.” He thinks that it is unlikely that the T cells are the true culprits; instead, they could be acting through an intermediary, or might merely be a symptom of some other destructive event.
“The results are very important, but they need to do a replication study in a large group of patients and controls,” says Gert Lammers, a neurologist at Leiden University Medical Center in the Netherlands and president of the European Narcolepsy Network. “If the findings are confirmed, the first important spin-off might be the development of a new diagnostic test.”
Finnish and Danish researchers have developed a new method that performs decoding, or brain-reading, during continuous listening to real music. Based on recorded brain responses, the method predicts how certain features related to tone color and rhythm of the music change over time, and recognizes which piece of music is being listened to. The method also allows pinpointing the areas in the brain that are most crucial for the processing of music. The study was published in the journal NeuroImage.

Using functional magnetic resonance imaging (fMRI), the research team at the Finnish Centre of Excellence in Interdisciplinary Music Research at the Universities of Jyväskylä and Helsinki, together with the Center for Functionally Integrative Neuroscience at Aarhus University, Denmark, recorded the brain responses of participants while they listened to a 16-minute excerpt of the Beatles’ album Abbey Road. The team then used computational algorithms to extract a collection of musical features from the recording, and employed machine-learning methods to train a computer model that predicts how those features change over time. Finally, they developed a classifier that predicts which part of the music the participant was listening to at each point in time.
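The two-stage pipeline described above, regressing musical features on brain activity and then identifying the music from the decoded features, can be sketched on synthetic data. Everything below is illustrative: the dimensions, the noise level, and the choice of ridge regression with a correlation-based classifier are assumptions for this sketch, not the study's actual methods.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_voxels, n_feats = 480, 200, 5        # hypothetical dimensions
W_true = rng.normal(size=(n_voxels, n_feats))   # unknown voxel-to-feature map

# Simulated fMRI responses and the musical features they encode (plus noise)
brain = rng.normal(size=(n_scans, n_voxels))
feats = brain @ W_true + 0.5 * rng.normal(size=(n_scans, n_feats))

# Stage 1: train a ridge-regression decoder on the first half of the "excerpt"
half, alpha = n_scans // 2, 1.0
X, Y = brain[:half], feats[:half]
W_hat = np.linalg.solve(X.T @ X + alpha * np.eye(n_voxels), X.T @ Y)
decoded = brain[half:] @ W_hat                  # decode the held-out half

# Feature prediction accuracy: per-feature correlation, decoded vs. actual
r = np.array([np.corrcoef(decoded[:, i], feats[half:, i])[0, 1]
              for i in range(n_feats)])
print("mean feature correlation:", round(float(r.mean()), 3))

# Stage 2: identification - match each decoded segment to the actual segment
# whose features it correlates with best (chance level = 1 / n_segs)
seg = 30
n_segs = (n_scans - half) // seg
actual = [feats[half + j * seg: half + (j + 1) * seg].ravel()
          for j in range(n_segs)]
hits = sum(
    max(range(n_segs),
        key=lambda j: np.corrcoef(decoded[i * seg:(i + 1) * seg].ravel(),
                                  actual[j])[0, 1]) == i
    for i in range(n_segs)
)
print(f"identified {hits}/{n_segs} segments (chance is about 1)")
```

With synthetic data this clean, decoding is near-perfect; with real fMRI the signal is far weaker, which is why the study reports sizable differences in accuracy between participants.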
The researchers found that most of the musical features included in the study could be reliably predicted from the brain data, and that the piece being listened to could be identified significantly better than chance. However, fairly large differences in prediction accuracy were found between participants. An interesting finding was that areas outside the auditory cortex, including motor, limbic and frontal areas, had to be included in the models to obtain reliable predictions, thus providing evidence for the important role of these areas in the processing of musical features.
"We believe that decoding provides a method that complements other existing methods to obtain more reliable information about the complex processing of music in the brain", says Professor Petri Toiviainen from the University of Jyväskylä. "Our results provide additional evidence for the important involvement of emotional and motor areas in music processing."
Learning requires constant reconfiguration of the connections between nerve cells. Two new studies now yield new insights into the molecular mechanisms that underlie the learning process.

Learning and memory are made possible by the incessant reorganization of nerve connections in the brain. Both processes are based on targeted modifications of the functional interfaces between nerve cells – the so-called synapses – which alter their form, molecular composition and functional properties. In effect, connections between cells that are frequently co-activated are progressively altered so that they respond to subsequent signals more rapidly and more strongly. This way, information can be encoded in patterns of synaptic activity and promptly recalled when needed. The converse is also true: learned behaviors can be lost by disuse, because inactive synapses are less likely to transmit an incoming impulse, leading to the decay of such connections.
How exactly an individual synapse is altered without simultaneously affecting nearby nerve cells or other synapses on the same cell is a question that is central to Michael Kiebler’s research. Kiebler, a biochemist, holds the Chair of Cell Biology in the Faculty of Medicine at LMU. “It is now clear that the changes take place in the cell that is stimulated by synaptic input – the post-synaptic cell – and in particular in its so-called dendritic spines,” he says, “and particles that are known as ‘neuronal RNA granules’ deliver mRNA molecules to these sites.” These mRNAs represent the blueprints for the synthesis of the proteins responsible for reconfiguring the synapses. Kiebler’s team has developed a model, which postulates that these granules migrate from dendrite to dendrite, and release their mRNAs specifically at sites that are repeatedly activated. This would ensure that the relevant proteins are synthesized only where they are needed within the cell.
In spite of the potential significance of the model, the molecular mechanisms required for its realization have remained obscure. mRNA-binding proteins, including Staufen2 (Stau2) and Barentsz, are essential components of the granules, and Kiebler’s team, in collaboration with Giulio Superti-Furga’s group (CeMM, Vienna), have now used specific antibodies to isolate and characterize neuronal granules that contain either Stau2 or Barentsz.
Surprising diversity
It has generally been assumed that all neuronal RNA granules have essentially similar compositions. However, the new findings indicate that this is not the case. A comparison between Stau2- and Barentsz-containing granules reveals that they differ in about two-thirds of their proteins. “This suggests that the RNA granules are highly heterogeneous and dynamic in their composition,” says Kiebler. “And that makes sense to me, because it would mean that the granules can perform different functions depending on which mRNAs they carry.” Furthermore, the researchers have shown that the granules contain virtually none of the factors known to promote the translation of mRNAs into proteins. On the contrary, they include many molecules that repress protein synthesis. This in turn implies that the process of mRNA transport is uncoupled from the subsequent production of the proteins they encode.
In a complementary study, Kiebler’s team also characterized the mRNA cargoes associated with the granules. “Until now, none of the RNA molecules present in Stau2-containing granules in mammalian nerve cells had been defined, but we have now been able to identify many specific mRNAs,” Kiebler explains. Further experiments revealed that Stau2 stabilizes the mRNAs, allowing them to be used more often for the production of proteins. Moreover, the researchers have shown that specialized structures within these mRNAs, called “Staufen-Recognized Structures” (SRS), are essential for their recognition and stabilization by Stau2. “This allows us to propose a molecular mechanism for RNA recognition for the first time,” says Kiebler.
Taken together, the two new papers (1, 2) provide novel insights into the molecular mechanisms that underlie learning and memory. The scientists now want to dissect out the details in future studies. “In the long term, we are particularly interested in the question of how an activated synapse can alter the state of the granules and induce the production of protein,” Kiebler notes. It is becoming increasingly clear that RNA-binding proteins play essential roles in nerve cells. Disruption of their action can lead to neurodegenerative diseases and neurological dysfunction. “Clearly, not only classical conditions such as Alzheimer’s or Parkinson’s disease, in which RNA-binding proteins are always involved, but also cognitive defects or age-associated impairment of learning ability must be viewed in this context,” Kiebler concludes.
Anyone who has tried to learn a second language knows how difficult it is to absorb new words and use them to accurately express ideas in a completely new cultural context. Now, research into some of the fundamental ways the brain accepts information and tags it could lead to new, more effective ways for people to learn a second language.

Tests have shown that the human brain uses the same neuron system to see an action and to understand an action described in language. Researchers at Arizona State University have been testing the boundaries of this hypothesis, which focuses on the operation of the mirror neuron system (MNS). The ASU group has found that the MNS can be modified by language use, and that the modification can slightly change visual perception.
The work focuses on how the brain receives and classifies information that a person sees (an action, like one person giving another a pencil), and tests how the brain receives the information from a description of an action (simulation), like “Cameron gives Annagrace a pencil.”
“We tested the idea that the mirror neuron system, which is part of the motor system, is used in the simulation process,” said Arthur Glenberg, an ASU professor of psychology. “The MNS is active both when a person takes an action (e.g., giving a pencil), and when that action is observed (witnessing the pencil being given). Supposedly, the MNS allows us to infer the intentions of other people so that when Jane sees Cameron act, her MNS resonates, and then Jane understands why she would give Annagrace the pencil and infers that that is the reason why Cameron gives Annagrace the pencil.”
Glenberg, Noah Zarr, formerly an ASU psychology major and now a graduate student at Indiana University, and Ryan Ferguson, a graduate student in ASU’s Cognitive Science training area in the Department of Psychology, recently published their findings in the paper “Language comprehension warps the mirror neuron system,” in Frontiers in Human Neuroscience. This research began with Zarr’s honors thesis.
“The MNS has been associated with many social behaviors, such as action, understanding and empathy, as well as language understanding,” Glenberg explained. “Previous work has demonstrated that adapting the MNS can affect language comprehension. But no one had yet shown that the process of language comprehension can itself change the MNS.
“The question becomes, when Jane reads, ‘Cameron gives Annagrace the pencil,’ is she using her MNS just like when she sees Cameron give the pencil?” Glenberg asks. “To test this idea, we used the fact that the MNS is used in both action and perception of action, and the idea that repeated use of a neural system leads to adaptation of that system.
“So, in the tests, participants read a bunch of transfer sentences,” Glenberg explained. “We then show them a bunch of videos of transfer. We have shown that after reading the sentences, people are impaired (a little bit) in perceiving the transfer in the videos, which means the reading modifies the same MNS used in action understanding.”
While the work explores the boundaries of a theory on comprehension, there are applications in which it could be employed, Glenberg said.
“If language comprehension is a simulation process that uses neural systems of action, then perhaps we can better teach kids how to understand what they read by getting them to literally simulate the actions,” he explained.
Glenberg added that part of his ongoing research into the MNS, the system that allows us to decipher what we see and understand the intent of language, is to test the idea of simulation and how it can help Latino English language learners read better in English.
Researchers have discovered a cause of aging in mammals that may be reversible.

The essence of this finding is a series of molecular events that enable communication inside cells between the nucleus and mitochondria. As communication breaks down, aging accelerates. By administering a molecule naturally produced by the human body, scientists restored the communication network in older mice. Subsequent tissue samples showed key biological hallmarks that were comparable to those of much younger animals.
“The aging process we discovered is like a married couple—when they are young, they communicate well, but over time, living in close quarters for many years, communication breaks down,” said Harvard Medical School Professor of Genetics David Sinclair, senior author on the study. “And just like with a couple, restoring communication solved the problem.”
This study was a joint project between Harvard Medical School, the National Institute on Aging, and the University of New South Wales, Sydney, Australia, where Sinclair also holds a position.
The findings are published Dec. 19 in Cell.
Communication breakdown
Mitochondria are often referred to as the cell’s “powerhouse,” generating chemical energy to carry out essential biological functions. These self-contained organelles, which live inside our cells and house their own small genomes, have long been identified as key biological players in aging. As they become increasingly dysfunctional over time, age-related conditions such as Alzheimer’s disease and diabetes gradually set in.
Researchers have generally been skeptical of the idea that aging can be reversed, due mainly to the prevailing theory that age-related ills are the result of mutations in mitochondrial DNA—and mutations cannot be reversed.
Sinclair and his group have been studying the fundamental science of aging—which is broadly defined as the gradual decline in function with time—for many years, primarily focusing on a group of genes called sirtuins. Previous studies from his lab showed that one of these genes, SIRT1, was activated by the compound resveratrol, which is found in grapes, red wine and certain nuts.

Ana Gomes, a postdoctoral scientist in the Sinclair lab, had been studying mice in which this SIRT1 gene had been removed. While they accurately predicted that these mice would show signs of aging, including mitochondrial dysfunction, the researchers were surprised to find that most mitochondrial proteins coming from the cell’s nucleus were at normal levels; only those encoded by the mitochondrial genome were reduced.
“This was at odds with what the literature suggested,” said Gomes.
As Gomes and her colleagues investigated potential causes for this, they discovered an intricate cascade of events that begins with a chemical called NAD and concludes with a key molecule that shuttles information and coordinates activities between the cell’s nuclear genome and the mitochondrial genome. Cells stay healthy as long as coordination between the genomes remains fluid. SIRT1’s role is intermediary, akin to a security guard; it ensures that a meddlesome molecule called HIF-1 does not interfere with communication.
For reasons still unclear, as we age, levels of the initial chemical NAD decline. Without sufficient NAD, SIRT1 loses its ability to keep tabs on HIF-1. Levels of HIF-1 escalate and begin wreaking havoc on the otherwise smooth cross-genome communication. Over time, the research team found, this loss of communication reduces the cell’s ability to make energy, and signs of aging and disease become apparent.
“This particular component of the aging process had never before been described,” said Gomes.
While the breakdown of this process causes a rapid decline in mitochondrial function, other signs of aging take longer to occur. Gomes found that by administering an endogenous compound that cells transform into NAD, she could repair the broken network and rapidly restore communication and mitochondrial function. If the compound was given early enough—prior to excessive mutation accumulation—within days, some aspects of the aging process could be reversed.

Cancer connection
Examining muscle from two-year-old mice that had been given the NAD-producing compound for just one week, the researchers looked for indicators of insulin resistance, inflammation and muscle wasting. In all three instances, tissue from the mice resembled that of six-month-old mice. In human years, this would be like a 60-year-old converting to a 20-year-old in these specific areas.
One particularly important aspect of this finding involves HIF-1. More than just an intrusive molecule that foils communication, HIF-1 normally switches on when the body is deprived of oxygen. Otherwise, it remains silent. Cancer, however, is known to activate and hijack HIF-1. Researchers have been investigating the precise role HIF-1 plays in cancer growth.
“It’s certainly significant to find that a molecule that switches on in many cancers also switches on during aging,” said Gomes. “We’re starting to see now that the physiology of cancer is in certain ways similar to the physiology of aging. Perhaps this can explain why the greatest risk factor for cancer is age.”
“There’s clearly much more work to be done here, but if these results stand, then certain aspects of aging may be reversible if caught early,” said Sinclair.
The researchers are now looking at the longer-term outcomes of the NAD-producing compound in mice and how it affects the mouse as a whole. They are also exploring whether the compound can be used to safely treat rare mitochondrial diseases or more common diseases such as Type 1 and Type 2 diabetes. Longer term, Sinclair plans to test if the compound will give mice a healthier, longer life.
Newcastle University scientists have discovered that as the brain re-organises connections throughout our life, the process begins earlier in girls, which may explain why they mature faster during the teenage years.

As we grow older, our brains undergo a major reorganisation that reduces the number of connections. Studying people up to the age of 40, scientists led by Dr Marcus Kaiser and Ms Sol Lim at Newcastle University found that while overall connections in the brain get streamlined, long-distance connections that are crucial for integrating information are preserved.
The researchers suspect this newly-discovered selective process might explain why brain function does not deteriorate – and indeed improves – during this pruning of the network. Interestingly, they also found that these changes occurred earlier in females than in males.
Explaining the work which is being published in Cerebral Cortex, Dr Kaiser, Reader in Neuroinformatics at Newcastle University, says: “Long-distance connections are difficult to establish and maintain but are crucial for fast and efficient processing. If you think about a social network, nearby friends might give you very similar information – you might hear the same news from different people. People from different cities or countries are more likely to give you novel information. In the same way, some information flow within a brain module might be redundant whereas information from other modules, say integrating the optical information about a face with the acoustic information of a voice is vital in making sense of the outside world.”
Brain “pruned”
The researchers at Newcastle, Glasgow and Seoul Universities evaluated the scans of 121 healthy participants between the ages of 4 and 40 years, the period during which the major connectivity changes of brain maturation can be seen. The work is part of the EPSRC-funded Human Green Brain project, which examines human brain development.
Using a non-invasive technique called diffusion tensor imaging – a special measurement protocol for Magnetic Resonance Imaging (MRI) scanners – they demonstrated that fibres are, overall, pruned during this period.
However, they found that not all projections (long-range connections) between brain regions are affected to the same extent; changes were influenced differently depending on the types of connections.
The projections that are preserved are short-cuts that quickly link different processing modules, e.g. for vision and sound, and allow fast information transfer and synchronous processing. Changes in these connections have been found in many developmental brain disorders, including autism, epilepsy and schizophrenia.
The researchers have demonstrated for the first time that the loss of white matter fibres between brain regions is a highly selective process – a phenomenon they call preferential detachment. They show that connections between distant brain regions, between brain hemispheres, and between processing modules lose fewer nerve fibres during brain maturation than expected. The researchers say this may explain how we retain a stable brain network during brain maturation.
Commenting on the fact that these changes occurred earlier in females than males, Ms Sol Lim explains: “The loss of connectivity during brain development can actually help to improve brain function by reorganizing the network more efficiently. Say instead of talking to many people at random, asking a couple of people who have lived in the area for a long time is the most efficient way to know your way. In a similar way, reducing some projections in the brain helps to focus on essential information.”
Measuring changes in certain proteins — called biomarkers — in people with amyotrophic lateral sclerosis may better predict the progression of the disease, according to scientists at Penn State College of Medicine.
ALS, often referred to as Lou Gehrig’s disease, is a neurological disease in which the brain loses its ability to control movement as motor neurons degenerate. The course of the disease varies, with survival ranging from months to decades.
"The cause of most cases of ALS remains unknown," said James Connor, Distinguished Professor of Neurosurgery, Neural and Behavioral Sciences and Pediatrics. "Although several genetic and environmental factors have been identified, each accounts for only a fraction of the total cases of ALS."
This clinical variation in patients presents challenges in terms of managing the disease and developing new treatments. Finding relevant biomarkers, which are objective measures that reflect changes in biological processes or reactions to treatments, may help address these challenges.
The project was led by Xiaowei Su, an M.D./Ph.D. student in Connor’s laboratory, in collaboration with Zachary Simmons, director of the Penn State Hershey ALS Clinic and Research Center. Su studied plasma and cerebrospinal fluid samples previously collected from patients undergoing diagnostic evaluation who were later identified as having ALS. The analysis shows that using multiple biomarkers to predict progression is not only feasible but also improves on methods that use a single biomarker.
Statistical models analyzing plasma had reasonable ability to predict total disease duration and used seven relevant biomarkers. For example, higher levels of the protein IL-10 predict a longer disease duration. IL-10 is an anti-inflammatory cytokine, suggesting that lower levels of inflammation are associated with a longer disease duration.
The researchers identified six biomarkers for cerebrospinal fluid. For example, higher levels of G-CSF — a growth factor known to have protective effects on motor neurons, the cells that die in ALS — predict a longer disease duration.
Perhaps most importantly, the results suggest that a combination of biomarkers from both plasma and cerebrospinal fluid better predict disease duration.
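The intuition behind combining markers can be illustrated with a toy calculation: standardize each marker and correlate a composite score with disease duration. Everything below is invented for illustration — the values are made up and this is not the study’s actual statistical model.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def zscores(x):
    """Standardize a list to mean 0 and unit (population) standard deviation."""
    n = len(x)
    m = sum(x) / n
    s = sqrt(sum((a - m) ** 2 for a in x) / n)
    return [(a - m) / s for a in x]

# Hypothetical toy data: two biomarker levels (arbitrary units) and
# disease duration in months, for six imaginary patients.
il10 = [1, 2, 3, 4, 5, 6]
gcsf = [2, 1, 4, 3, 6, 5]
duration = [3, 3, 7, 7, 11, 11]

# Each marker alone predicts duration fairly well...
r_il10 = pearson(il10, duration)
r_gcsf = pearson(gcsf, duration)

# ...but a composite of the standardized markers predicts it better.
composite = [a + b for a, b in zip(zscores(il10), zscores(gcsf))]
r_combo = pearson(composite, duration)

print(f"IL-10 alone: r={r_il10:.3f}, G-CSF alone: r={r_gcsf:.3f}, "
      f"combined: r={r_combo:.3f}")
```

In this contrived example each marker captures only part of the signal, so the composite correlates more strongly with duration than either marker alone — the same logic that motivates multi-biomarker models.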
While the size of this study is small, the ability of the specific biomarkers used to predict prognosis suggests that the approach holds promise.
"The results argue for the usefulness of researching this approach for ALS both in terms of predicting disease progression and in terms of determining the impact of therapeutic strategies," Connor said. "The results present a compelling starting point for the use of this method in larger studies and provide insights for novel therapeutic targets."
Imagine kicking a cocaine addiction by simply popping a pill that alters the way your brain processes chemical addiction. New research from the University of Pittsburgh suggests that a method of biologically manipulating certain neurocircuits could lead to a pharmacological approach that would weaken post-withdrawal cocaine cravings. The findings have been published in Nature Neuroscience.

Researchers led by Pitt neuroscience professor Yan Dong used rat models to examine the effects of cocaine addiction and withdrawal on nerve cells in the nucleus accumbens, a small region in the brain that is commonly associated with reward, emotion, motivation, and addiction. Specifically, they investigated the roles of synapses—the structures at the ends of nerve cells that relay signals.
When an individual uses cocaine, some immature synapses are generated, which are called “silent synapses” because they send few signals under normal physiological conditions. After that individual quits using cocaine, these “silent synapses” go through a maturation phase and acquire the ability to send signals. Once they can send signals, the synapses will send craving signals for cocaine if the individual is exposed to cues that previously led him or her to use the drug.
The researchers hypothesized that if they could reverse the maturation of the synapses, the synapses would remain silent, thus rendering them unable to send craving signals. They examined a chemical receptor known as CP-AMPAR that is essential for the maturation of the synapses. In their experiments, the synapses reverted to their silent states when the receptor was removed.
“Reversing the maturation process prevents the intensification process of cocaine craving,” said Dong, the study’s corresponding author and assistant professor of neuroscience in Pitt’s Kenneth P. Dietrich School of Arts and Sciences. “We are now developing strategies to maintain the ‘reversal’ effects. Our goal is to develop biological and pharmacological strategies to produce long-lasting de-maturation of cocaine-generated silent synapses.”
For some cancer patients, the mental fogginess that develops with chemotherapy lingers long after treatment ends. Now research in breast cancer patients may offer an explanation.

Patients who experience “chemobrain” following treatment for breast cancer show disruptions in brain networks that are not present in patients who do not report cognitive difficulties, according to researchers at Washington University School of Medicine in St. Louis.
Results of the small study were reported Thursday, Dec. 12 at a poster presentation at the San Antonio Breast Cancer Symposium.
According to the researchers, many breast cancer patients who receive chemotherapy report long-term problems with memory, attention, learning, visual-spatial skills and other forms of information processing. The brain mechanisms contributing to these difficulties are poorly understood.
The investigators used an imaging technique called resting state functional-connectivity magnetic resonance imaging (rs-fcMRI) to assess the wiring among regions of the brain in 28 patients treated at Siteman Cancer Center at Barnes-Jewish Hospital and Washington University. Fifteen patients reported they were “extremely” or “strongly” affected by cognitive difficulties. The remaining 13 reported no cognitive impairment.
The imaging studies suggest that standard chemotherapy given to breast cancer patients may alter connectivity in brain networks, especially in the frontal parietal control regions responsible for executive function, attention and decision-making.
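At its core, resting-state functional connectivity is the correlation between the activity time courses of pairs of brain regions. The sketch below illustrates the idea with invented numbers — the region names and values are hypothetical, not the study’s data or analysis pipeline.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length signal traces."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / sqrt(var_x * var_y)

# Hypothetical BOLD-like time series for two frontal-parietal regions,
# in an "intact" and a "disrupted" condition (all values made up).
frontal            = [0.1, 0.5, 0.3, 0.8, 0.2, 0.7, 0.4, 0.6]
parietal_intact    = [0.2, 0.6, 0.3, 0.7, 0.1, 0.8, 0.4, 0.5]  # tracks frontal
parietal_disrupted = [0.4, 0.1, 0.6, 0.5, 0.3, 0.5, 0.2, 0.9]  # largely decoupled

fc_intact = pearson(frontal, parietal_intact)
fc_disrupted = pearson(frontal, parietal_disrupted)
print(f"intact connectivity r={fc_intact:.2f}, disrupted r={fc_disrupted:.2f}")
```

A high correlation means the two regions rise and fall together — strong functional coupling — while a weak correlation corresponds to the kind of network disruption the imaging results point to.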
“Chemobrain is most likely a global phenomenon in the brain, but a set of regions involved in executive control, called the frontal-parietal network, is perhaps the most affected brain system,” said Jay F. Piccirillo, MD, professor of otolaryngology and a member of the research team with expertise in the use of brain imaging to study tinnitus, or phantom noise. “We’re confirming previous studies that also have shown this. And we’re developing a solid multidisciplinary working group at Washington University to determine how we can help these women.”
Other studies also have used neuroimaging techniques to observe the neural disruptions associated with Alzheimer’s disease, depression and stroke. Washington University researchers are beginning to investigate whether cancer patients experiencing chemobrain may benefit from therapies similar to those that help patients with other cognitive disorders.
New research from the Norwegian University of Science and Technology shows that if you want to be good at math, you have to practice all the different kinds of math.

What makes someone good at math? A love of numbers, perhaps, but a willingness to practice, too. And even if you are good at one specific type of math, you can’t trust your innate abilities enough to skip practicing other types if you want to be good.
New research at the Norwegian University of Science and Technology (NTNU) in Trondheim could have an effect on how math is taught. If you want to be really good at all types of math, you need to practice them all. You can’t trust your innate talent to do most of the job for you.
This might seem obvious to some, but it goes against the traditional view that if you are good at math, it is a skill that you are simply born with.
Professor Hermundur Sigmundsson at the Department of Psychology is one of three researchers involved in the project. The results have been published in Psychological Reports.
The numbers
The researchers tested the math skills of 70 Norwegian fifth graders, aged 10.5 years on average. Their results suggest that it is important to practice every single kind of math subject to be good at all of them, and that these skills aren’t something you are born with.
“We found support for a task specificity hypothesis. You become good at exactly what you practice,” Sigmundsson says.
Nine types of math tasks were tested, from normal addition and subtraction, both orally and in writing, to oral multiplication and understanding the clock and the calendar.
“Our study shows little correlation between (being good at) the nine different mathematical skills,” Sigmundsson said. “For instance, there is little correlation between being able to solve an ordinary addition problem in the form of ‘23 + 67’ and addition in the form of a word problem.”
This example might raise a few eyebrows. Perhaps basic math is not a problem for the student, but the reading itself is. Up to 20 per cent of Norwegian boys in secondary school have problems with reading.
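The pairwise comparison behind such a claim can be sketched with a Pearson correlation over two lists of task scores. The pupil scores below are contrived to show near-zero correlation between tasks; they are hypothetical, not the study’s data.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x)
                      * sum((b - my) ** 2 for b in y))

# Contrived scores for eight imaginary pupils on two of the nine task types:
# symbolic addition ('23 + 67') vs. the same operation posed as a word problem.
symbolic = [9, 5, 8, 6, 9, 5, 8, 6]
word_problem = [8, 8, 6, 6, 8, 8, 6, 6]

r = pearson(symbolic, word_problem)
print(f"symbolic vs. word-problem correlation: r={r:.2f}")
```

A correlation near zero, as in this deliberately constructed example, is the pattern the task-specificity hypothesis predicts: being good at one task form tells you little about performance on the other.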
Sigmundsson also finds support in everyday examples.
“Some students will be good at geometry, but not so good at algebra,” he says.
If that is the case, they have to practice more algebra, which is the area where most students in secondary school have problems.
“At the same time this means there is hope for some students. Some just can’t be good at all types of math, but at least they can be good at geometry, for example,” he says.
It is this finding that might in the end help change the way math is taught.
Support in neurology
The fact that you are good at precisely what you practice is probably due to the fact that different kinds of practice activate different neural connections.
The results can also be transferred to other areas. The football player who practices hitting the goal from 25 yards with a perfectly placed shot will become good at exactly this. But she is not necessarily good at tackling or reading the game.
“This is also supported by new insights in neurology. With practice you develop specific neural connections,” says Sigmundsson.
Team at IST Austria examines synaptic mechanisms of rhythmic brain waves • Achievement possible through custom-design tools developed in collaboration with the institute’s Miba machine shop

How information is processed and encoded in the brain is a central question in neuroscience, as it is essential for higher cognitive functions such as learning and memory. Theta-gamma oscillations are “brain waves” observed in the hippocampus, a brain region involved in learning and memory, of behaving rats. In rodents, theta-gamma oscillations are associated with information processing during exploration and spatial navigation. However, the underlying synaptic mechanisms have so far remained unclear. In research published this week in the journal Neuron, postdoc Alejandro Pernía-Andrade and Professor Peter Jonas, both at the Institute of Science and Technology Austria (IST Austria), discovered the synaptic mechanisms underlying oscillations in the dentate gyrus (the main entrance to the hippocampus). Furthermore, the researchers suggest a role for these oscillations in the coding of information by the dentate gyrus principal neurons. These findings thus contribute to a better understanding of how information is processed in the brain.
Brain oscillations are rhythmic changes in voltage in the extracellular space – electrical signals associated with the processing of information, similar to those seen in electro-encephalographic (EEG) recordings in humans. Pernía-Andrade and Jonas observed these oscillations in the hippocampus of behaving rats, recording them with extracellular probes. To understand how the oscillations are generated and which synaptic events trigger them, the researchers looked at synaptic transmission in granule cells (principal cells at the main entrance of the hippocampus) from both the extracellular perspective (oscillations) and the intracellular perspective (synaptic currents and neuronal firing), and then correlated the two. They discovered that excitatory and inhibitory synaptic signals contributed to different frequencies of oscillations, with excitation from the entorhinal cortex generating theta oscillations and inhibition by local dentate gyrus interneurons generating gamma oscillations. Together, excitation and inhibition provide the rhythmic signals of oscillations. It has been speculated that oscillations may help the dentate gyrus to encode information by acting as reference signals in temporal coding. Pernía-Andrade and Jonas now show that granule cells fire only at specific times in the oscillation cycle. This so-called “phase locking” is necessary if oscillations are to function as reference signals in temporal coding.
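Phase locking of this kind is commonly quantified with circular statistics: each spike is assigned a phase within the ongoing oscillation cycle, and the length of the mean resultant vector measures how tightly spikes cluster at a preferred phase. The sketch below uses made-up spike phases, not the study’s recordings or its exact analysis.

```python
from math import cos, sin, sqrt, pi

def resultant_length(phases):
    """Mean resultant vector length of spike phases (radians):
    1 = perfect phase locking, 0 = phases spread uniformly."""
    n = len(phases)
    c = sum(cos(p) for p in phases) / n
    s = sum(sin(p) for p in phases) / n
    return sqrt(c * c + s * s)

# Hypothetical spike phases relative to the oscillation cycle.
locked = [0.1, -0.2, 0.15, 0.0, -0.1, 0.2, 0.05, -0.15]  # clustered near phase 0
unlocked = [i * 2 * pi / 8 for i in range(8)]             # evenly spread

print(f"locked R={resultant_length(locked):.2f}, "
      f"unlocked R={resultant_length(unlocked):.2f}")
```

A cell that fires only at a particular moment of each cycle yields a resultant length near 1, which is the statistical signature of the phase locking needed for oscillations to serve as temporal reference signals.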
The precise, high-resolution recording from granule cells necessary for these discoveries was possible only through technological innovations by Pernía-Andrade and Jonas, as no equipment was previously available to record synaptic signals in active rats at such high resolution. These innovations are the result of a collaboration with the Miba machine shop, IST Austria’s electrical and mechanical Scientific Service Unit (SSU). By adapting commercially available equipment and custom-designing tools, Pernía-Andrade, Jonas and Todor Asenov, manager of the Miba machine shop, produced the first tools for precise biophysical analysis in active rats. This research is therefore not only a scientific advance but also significant technological and conceptual progress in the quest to understand neuronal behavior under natural conditions.
TAU researchers find unresponsive patients’ brains may recognize photographs of their family and friends

Patients in a vegetative state are awake, breathe on their own, and seem to go in and out of sleep. But they do not respond to what is happening around them and exhibit no signs of conscious awareness. With communication impossible, friends and family are left wondering if the patients even know they are there.
Now, using functional magnetic resonance imaging (fMRI), Dr. Haggai Sharon and Dr. Yotam Pasternak of Tel Aviv University’s Functional Brain Center and Sackler Faculty of Medicine and the Tel Aviv Sourasky Medical Center have shown that the brains of patients in a vegetative state emotionally react to photographs of people they know personally as though they recognize them.
"We showed that patients in a vegetative state can react differently to different stimuli in the environment depending on their emotional value," said Dr. Sharon. "It’s not a generic thing; it’s personal and autobiographical. We engaged the person, the individual, inside the patient."
The findings, published in PLOS ONE, deepen our understanding of the vegetative state and may offer hope for better care and the development of novel treatments. Researchers from TAU’s School of Psychological Sciences, Department of Neurology, and Sagol School of Neuroscience and the Loewenstein Hospital in Raanana contributed to the research.
Talking to the brain
For many years, patients in a vegetative state were believed to have no awareness of self or environment. But in recent years, doctors have made use of fMRI to examine brain activity in such patients. They have found that some patients in a vegetative state can perform complex cognitive tasks on command, like imagining a physical activity such as playing tennis, or, in one case, even answering yes-or-no questions. But these cases are rare and don’t provide any indication as to whether patients are having personal emotional experiences in such a state.
To gain insight into “what it feels like to be in a vegetative state,” the researchers worked with four patients in a persistent (defined as “month-long”) or permanent (persisting for more than three months) vegetative state. They showed them photographs of people they did and did not personally know, then gauged the patients’ reactions using fMRI, which measures blood flow in the brain to detect areas of neurological activity in real time. In response to all the photographs, a region specific to facial recognition was activated in the patients’ brains, indicating that their brains had correctly identified that they were looking at faces.
But in response to the photographs of close family members and friends, brain regions involved in emotional significance and autobiographical information were also activated in the patients’ brains. In other words, the patients reacted with activations of brain centers involved in processing emotion, as though they knew the people in the photographs. The results suggest patients in a vegetative state can register and categorize complex visual information and connect it to memories – a groundbreaking finding.
The ghost in the machine
However, the researchers could not be sure if the patients were conscious of their emotions or just reacting spontaneously. So they then verbally asked the patients to imagine their parents’ faces. Surprisingly, one patient, a 60-year-old kindergarten teacher who was hit by a car while crossing the street, exhibited complex brain activity in the face- and emotion-specific brain regions, identical to brain activity seen in healthy people. The researchers say her response is the strongest evidence yet that vegetative-state patients can be “emotionally aware.” A second patient, a 23-year-old woman, exhibited activity just in the emotion-specific brain regions. (Significantly, both patients woke up within two months of the tests. They did not remember being in a vegetative state.)
"This experiment, a first of its kind, demonstrates that some vegetative patients may not only possess emotional awareness of the environment but also experience emotional awareness driven by internal processes, such as images," said Dr. Sharon.
Research focused on the “emotional awareness” of patients in a vegetative state is only a few years old. The researchers hope their work will eventually contribute to improved care and treatment. They have also begun working with patients in a minimally conscious state to better understand how regions of the brain interact in response to familiar cues. Emotions, they say, could help unlock the secrets of consciousness.
A faultily formed memory sounds like random notes struck on a keyboard, while a properly formed one sounds more like a song, scientists say.

When they turned off a major switch for learning and memory, brain cells communicated, but the relationship was superficial, said Dr. Joe Tsien, neuroscientist at the Medical College of Georgia at Georgia Regents University and Co-Director of the GRU Brain & Behavior Discovery Institute.
“We have begun to crack the neural code, which allows us to look in real time at how thoughts happen and how memories are made,” Tsien said. “That has enabled us to understand for the first time how and whether the right keys are struck at the right time and in the right place and manner to make the beautiful sound of coherent memories and to compare what happens when a key element is missing.”
With the NMDA receptor intact, chatter reverberates, associations are made and helpful memories – like how touching a hot stove results in a burn – are easily retrieved.
“You see a face and think of a name, you see your office, and you think you need to work; everything is associative,” said Tsien, corresponding author of the study in the journal PLOS ONE. “But in mice lacking an NMDA receptor, you can tell the memory patterns are dull and dissociated.”
Using the century-old Pavlovian conditioning model that first showed how repetition creates association, they found that mice lacking a functioning NMDA receptor in the hippocampus, the brain’s center of learning and memory, could not recollect even something fearful.
When they played a tone, followed 20 seconds later by a mild foot shock, normal mice quickly made the association, down to the timing. The connection essentially never registered with mice lacking the NMDA receptor.

[Figure: a healthy brain recalling memories versus an amnesic brain recalling contextual memories]
“They form the initial patterns, but don’t rehearse them,” said Tsien. “Their tones are flat, the association is poor, while everything we register in the healthy brain is associative.” To illustrate just how flat, Postdoctoral Fellow Hui Kuang assigned musical notes to the memory activity of each, which resulted in random noise by the NMDA knockout mice compared to a dynamic rhythm from normal mice.
“By knowing what these patterns look like and what they mean, you can use this signature to measure, for example, why we begin to lose memory during aging, and to identify and test drugs that are truly effective at aiding memory,” Tsien said.
“You can tell whether there is an issue with reverberation, whether your brain is repeating what you need to remember, or repeats it but somehow stores it badly, so it’s not associated with the right things. This study has revealed a lot of fascinating details about what neuroscientists call the brain’s neural code,” Tsien said.
He wants to look at how aging affects these processes as a next step. The research team also is looking at Doogie, a mouse genetically bred by Tsien and his team in 1999 to be exceptionally smart, to see if they can also learn more about how super memories are made and what they look like.
This ability to decode how and what the brain is remembering should one day help physicians better assess and treat conditions such as Alzheimer’s and schizophrenia, Tsien said. They may find that some answers are already out there, such as drugs that boost reverberation, or a stimulant like caffeine to help retrieve a memory, Tsien said.
His team first reported decoding brain cell conversations as memories were formed and recalled in PLOS ONE in 2009. As with the new study, they used a computational algorithm to translate the neuronal conversations into some of the first pictures of what memories look like.