
A micrograph of a killer T cell, a white blood cell that destroys germs or cancers, but that can sometimes attack the body’s own normal cells.
Misguided killer T cells may be the missing link in sustained tissue damage in the brains and spines of people with multiple sclerosis, findings from the University of Washington reveal. Cytotoxic T cells, also known as CD8+ T cells, are white blood cells that are normally part of the body’s arsenal for fighting disease.
Multiple sclerosis is characterized by inflamed lesions that damage the insulation surrounding nerve fibers and destroy the axons, electrical impulse conductors that look like long, branching projections. Affected nerves fail to transmit signals effectively.
Intriguingly, the UW study, published this week in Nature Immunology, also raises the possibility that misdirected killer T cells might at other times act protectively and not add to lesion formation. Instead, they might retaliate against the cells that tried to trick them into treating the wrappings around nerve fibers as dangerous.
Scientists Qingyong Ji and Luca Castelli performed the research with Joan Goverman, UW professor and chair of immunology. Goverman is noted for her work on the cells involved in autoimmune disorders of the central nervous system and on laboratory models of multiple sclerosis.
Multiple sclerosis generally first appears between ages 20 and 40. It is believed to stem from corruption of the body’s normal defense against pathogens, so that the immune system turns on the body itself. For reasons not yet known, the immune system, which wards off cancer and infection, is provoked to vandalize the myelin sheath around nerve cells. The myelin sheath resembles the coating on an electrical wire. When it frays, nerve impulses are impaired.
Depending on which nerves are harmed, vision problems, an inability to walk, or other debilitating symptoms may arise. Sometimes the lesions heal partially or temporarily, leading to a see-saw of remissions and flare-ups. In other cases, nerve damage is unrelenting.
The myelin sheaths on nerve cell projections are fashioned by support cells called oligodendrocytes. A newborn’s brain contains just a few regions with myelinated nerve cells; an adult’s brain is not fully myelinated until age 25 to 30.
For T cells to recognize proteins from a pathogen, a myelin sheath, or any other source, other cells must break the desired proteins into small pieces, called peptides, and then present the peptides in a specific molecular package to the T cells. Scientists had previously determined which cells present pieces of a myelin protein to a type of T cell involved in the pathology of multiple sclerosis, the CD4+ T cell. Before the current study, no cells had yet been found that present myelin protein to CD8+ T cells.
Scientists strongly suspect that CD8+ T cells, whose job is to kill other cells, play an important role in the myelin damage of multiple sclerosis. In experimental autoimmune encephalomyelitis, a mouse model of human multiple sclerosis, CD4+ T cells play a significant part in the inflammatory response. However, scientists have observed that, in acute and chronic multiple sclerosis lesions, CD8+ T cells actually outnumber CD4+ T cells, and their numbers correlate with the extent of damage to nerve cell projections. Other studies suggest the opposite: that CD8+ T cells may tone down the myelin attack.
The differing observations pointed to conflicting roles for CD8+ T cells in exacerbating or ameliorating episodes of multiple sclerosis. Still, how CD8+ T cells actually contribute to regulating the autoimmune response in the central nervous system, for better or worse, was poorly understood.

TIP dendritic cells, stained to show their physical features.
Goverman and her team showed for the first time that naive CD8+ T cells were activated and turned into myelin-recognizing cells by special cells called Tip dendritic cells. These cells are derived from a type of inflammatory white blood cell that accumulates in the brain and spinal cord during experimental autoimmune encephalomyelitis originally mediated by CD4+ T cells. The membrane folds and protrusions of mature dendritic cells often look like branched tentacles or cupped petals, well suited to probing their surroundings.
The researchers proposed that Tip dendritic cells not only can engulf myelin debris or dead oligodendrocytes and present myelin peptides to CD4+ T cells, but also have the unusual ability to load a myelin peptide onto a specific type of molecule that presents it to CD8+ T cells. In this way, Tip dendritic cells can spread the immune response from CD4+ T cells to CD8+ T cells. This presentation enables CD8+ T cells to recognize myelin protein segments from oligodendrocytes, the cells that form the myelin sheath. The phenomenon establishes a second wave of autoimmune reactivity in which the CD8+ T cells respond to the presence of oligodendrocytes by splitting them open and spilling their contents.
“Our findings are consistent,” the researchers said, “with the critical role of dendritic cells in promoting inflammation in autoimmune diseases of the central nervous system.” They mentioned that mature dendritic cells might possibly wait in the blood vessels of normal brain tissue to activate T cells that have crossed the blood-brain barrier.
Under the inflammatory conditions of experimental autoimmune encephalomyelitis, the oligodendrocytes also present peptides that elicit an immune response from CD8+ T cells. Under healthy conditions, oligodendrocytes would not do this.
The researchers proposed that myelin-specific CD8+ T cells might play a role in the ongoing destruction of nerve-cell endings in “slow burning” multiple sclerosis lesions. A drop in inflammation accompanied by an increased degeneration of axons (electrical impulse-conducting structures) coincides with multiple sclerosis leaving the relapsing-remitting stage of disease and entering a more progressive state.
Medical scientists are studying the roles of a variety of immune cells in multiple sclerosis in the hopes of discovering pathways that could be therapeutic targets to prevent or control the disease, or to find ways to harness the body’s own protective mechanisms. This could lead to highly specific treatments that might avoid the unpleasant or dangerous side effects of generalized immunosuppressants like corticosteroids or methotrexate.
Researchers find that information is better retained when reinforcing stimuli are delivered during sleep

When you’re studying for an exam, is there something you can do while you sleep to retain the information better?
"The question is, ‘What determines which information is going to be kept and which information is lost?’" says neuroscientist Ken Paller.
With support from the National Science Foundation (NSF), Paller and his team at Northwestern University are studying the connection between memory and sleep, and the possibilities of boosting memory storage while you snooze.
"We think many stages of sleep are important for memory. However, a lot of the evidence has shown that slow-wave sleep is particularly important for some types of memory," explains Paller.
Slow-wave sleep is often referred to as “deep sleep,” and consists of stages 3 and 4 of non-rapid-eye-movement sleep.
Paller’s lab group members demonstrated for Science Nation two of the tests they run on study participants. In the first experiment, the subjects learned two pieces of music in a format similar to the game Guitar Hero. During a short nap following learning, just one of the learned tunes was played softly several times, selectively reinforcing the memory for that tune but not for the other. Paller wanted to know whether the test subjects could more accurately produce the tune played during sleep.
In the second exercise, the subjects were asked to memorize the locations of 50 objects on a computer screen. The presentation of each object was coupled with a unique sound. During the post-learning nap, memory for the locations of 25 of the objects was reinforced by playing back only those 25 sounds. In this case, Paller wanted to know whether the subjects could remember object locations better if the associated sounds were played during sleep.
Researchers recorded electrical activity generated in the brain using EEG electrodes attached to the scalp. They thus determined whether the subjects entered “deep sleep,” and only those who did participated in the reinforcement experiments. In both experiments, participants did a better job remembering what was reinforced while they slept, compared to what was not reinforced.
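The screening step described above — using scalp EEG to decide whether a subject has reached slow-wave sleep — is commonly approximated by measuring how much of the signal’s power falls in the delta band (0.5–4 Hz). The sketch below is a minimal illustration of that general technique on synthetic signals; the threshold, sampling rate, and simulated waveforms are assumptions for demonstration, not the scoring criteria Paller’s lab actually used.

```python
import numpy as np
from scipy.signal import welch

def delta_fraction(eeg, fs):
    """Fraction of 0.5-30 Hz EEG power in the delta band (0.5-4 Hz),
    a rough proxy for slow-wave ('deep') sleep."""
    freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)
    band = (freqs >= 0.5) & (freqs <= 30.0)
    delta = (freqs >= 0.5) & (freqs <= 4.0)
    return np.trapz(psd[delta], freqs[delta]) / np.trapz(psd[band], freqs[band])

fs = 250                                 # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)             # one 30-second scoring epoch
rng = np.random.default_rng(0)

# Simulated "deep sleep": large-amplitude 1.5 Hz slow oscillation plus noise
deep = 60 * np.sin(2 * np.pi * 1.5 * t) + 5 * rng.standard_normal(t.size)
# Simulated "light sleep": weak 10 Hz alpha rhythm plus broadband noise
light = 5 * np.sin(2 * np.pi * 10 * t) + 5 * rng.standard_normal(t.size)

print(delta_fraction(deep, fs))   # dominated by delta power
print(delta_fraction(light, fs))  # little delta power
```

In practice, sleep staging combines several frequency bands, amplitude criteria, and visual scoring rules; relative delta power is only the simplest ingredient.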
"We think that memory processing happens during sleep every night," says Paller. "We’re at the beginning of finding out what types of memory can be reinforced, how large reinforcement effects can be, and what sorts of stimuli can be used to reactivate memories so that they can be better consolidated."
Paller’s goal is to better understand the fundamental brain mechanisms responsible for memory. And that, in turn, may help people with memory problems, including those who find themselves more forgetful as they age.
"We experience progressively less slow-wave sleep as we age. Of course, many brain mechanisms come into play to allow us to remember, including some processing that transpires during sleep. So, there’s a lot to figure out about how memory works, but I think it’s fair to say that the person you are when you’re awake is partly a function of what your brain does when you’re asleep," explains Paller. He says these reactivation techniques could turn out to be valuable for enhancing what people have learned.
"What is beautiful about this set of experiments is that Dr. Paller identified ‘deep sleep’ as a critical time window during which memory for specific experiences can be selectively enhanced by the method of reactivation without conscious effort," says Akaysha Tang, director of the cognitive neuroscience program in the NSF Directorate for Social, Behavioral and Economic Sciences.
"Normally, conscious rehearsal of memorized material is needed if one wants to remember something better or retain it for longer, and one has to find time to review or rehearse," continues Tang. "Dr. Paller and the members of his lab group showed that such selective enhancement could be achieved without conscious effort and without demanding more of one’s waking hours. So, instead of pulling that all-nighter to memorize the material, in the future, it may be possible to consolidate the memory by sleeping with a scientifically programmed lullaby!"
Neurobiologists at the Friedrich Miescher Institute for Biomedical Research (FMI) are the first to show that directional migration of neurons during brain development is controlled through epigenetic processes. In an elaborate study bridging epigenetics and neurobiology, the scientists found that the migratory pattern is orchestrated through epigenetic regulation of genes within neurons and spatial signals in the environment. Their study has been published in Science.

As the foundation for our mind is laid, 100 billion cells are formed and appropriately connected in the brain. Despite the huge number of cells, no aspect of this process is left entirely to chance. Neurons divide, take on defined identities, migrate to the correct nodes in the network and send out their connecting axons along predefined paths to make contact with specific target neurons. The blueprint for these arrangements is encoded in the genome. However, how coordinated transcription of genes is finely tuned to achieve the precision of these processes is not yet clear.
A study by the research group of Filippo Rijli, group leader at the FMI and Professor of Neurobiology at the University of Basel, now shows for the first time that long-distance neuronal migration in the developing brain is regulated through transcriptional programs that are epigenetically controlled.
In their study published in Science, the neurobiologists have looked at a part of the brain called the brain stem and, in particular, at the neuronal ensembles forming the so-called precerebellar pontine nuclei. These nuclei are particularly important for the relay of information from the sensory and motor cortex to the cerebellum. During development, neurons, which will gather to form the pontine nuclei, migrate a long way from a distant progenitor compartment to their final positions, where they form connections that are vital for coordinated movement. The migratory path of these cells is defined by the relative position of the neuron in the progenitor compartment and is controlled by its specific combinatorial expression of Hox genes. Hox genes encode transcription factors and play an important role in many developmental processes that rely on a body plan and confer cellular identity.
It has been known that neurons in the precerebellar pontine nuclei start to migrate in the wrong direction once their Hox identity is disrupted. The Rijli team has now shown that epigenetic processes maintain appropriate Hox expression during migration. The key player in this scenario is a major contributor to mammalian epigenetic control, the histone methyltransferase Ezh2. Ezh2 methylates histones and silences specific stretches of DNA, keeping certain Hox genes repressed while allowing expression of others.
Ezh2 also regulates the appropriate response to environmental cues that direct neuronal migration. The cells in the brain stem bathe in a sea of attractants and repellents; they respond to these stimuli according to their identity and adapt their migratory paths. Rijli and colleagues found that Ezh2 controls transcription of both Netrin, an environmental attractant molecule, and its repulsion-mediating receptor Unc5b in migrating neurons, so that the appropriate balance between attraction and repulsion is maintained throughout migration to keep neurons on track.
“Being able to link epigenetic regulation with a complex process such as long-distance directional neuronal migration during brain development is extremely exciting,” comments Rijli. “All the more we were delighted to see that the migratory pattern is not only epigenetically maintained through an intrinsic program established in the progenitor, but is also coordinated with an Ezh2-dependent silencing program that regulates the spatial distribution of extrinsic signals in the migratory environment. The knowledge gained from our studies contributes as well to our understanding of certain neurological syndromes that are caused by faulty neuronal migration and are currently incurable.”
People who are depressed after a stroke may have triple the risk of dying early and four times the risk of death from stroke compared with people who have experienced neither a stroke nor depression, according to a study released today that will be presented at the American Academy of Neurology’s 65th Annual Meeting in San Diego, March 16 to 23, 2013. “Up to one in three people who have a stroke develop depression,” said study author Amytis Towfighi, MD, with the Keck School of Medicine of the University of Southern California and Rancho Los Amigos National Rehabilitation Center in Los Angeles, and a member of the American Academy of Neurology. “This is something family members can help watch for that could potentially save their loved one.”
Towfighi noted that similar associations have been found regarding depression and heart attack, but less is known about the association between stroke, depression and death.
The research included 10,550 people between the ages of 25 and 74 followed for 21 years. Of those, 73 had a stroke but did not develop depression, 48 had stroke and depression, 8,138 did not have a stroke or depression and 2,291 did not have a stroke but had depression.
After considering factors such as age, gender, race, education, income level and marital status, the risk of dying from any cause was three times higher in individuals who had stroke and depression compared to those who had not had a stroke and were not depressed. The risk of dying from stroke was four times higher among those who had a stroke and were depressed compared to people who had not had a stroke and were not depressed.
“Our research highlights the importance of screening for and treating depression in people who have experienced a stroke,” said Towfighi. “Given how common depression is after stroke, and the potential consequences of having depression, looking for signs and symptoms and addressing them may be key.”
Among the most feared and devastating strokes are ones caused by blockages in the brain’s critical basilar artery system. When not fatal, basilar artery strokes can cause devastating deficits, including head-to-toe paralysis called “locked-in syndrome.”
However, a minority of patients can have good outcomes, especially with new MRI technologies and time-sensitive treatments. These treatments include the clot-busting drug tissue plasminogen activator (tPA), and various new-generation neurothrombectomy devices, according to a review article in MedLink Neurology by three Loyola University Medical Center neurologists.
About 85 percent of strokes are ischemic, meaning they are caused by blockages in blood vessels. (The remaining strokes are caused by bleeding in the brain.) About 4 percent of all ischemic strokes are caused by blockages in the basilar artery system. The basilar artery supplies oxygen-rich blood to some of the most critical parts of the brain.
The first clinical description of a basilar artery stroke was reported in 1868, according to the MedLink article, which was written by Loyola neurologists Sarkis Morales Vidal, MD, (first author); Murray Flaster, MD, PhD; and Jose Biller, MD; and edited by Steven R. Levine, MD, of the SUNY Health Science Center.
A character in Alexandre Dumas’ novel, “The Count of Monte Cristo,” described as a “corpse with living eyes,” had what appears to be locked-in syndrome. More recently, the book and movie “The Diving Bell and the Butterfly” describe a French journalist with locked-in syndrome. The journalist was mentally intact, but able to move only his left eyelid. He composed a moving memoir by picking out one letter at a time as the alphabet was slowly recited.
The MedLink article reports that an estimated 80 percent of locked-in patients live for at least five years, and some patients have survived for more than 20 years. One survey of long-term survivors found that 86 percent reported their attention level was good, 77 percent were able to read and 66 percent could communicate with eye movements and blinking. Forty-eight percent reported their mood was good.
The review article cites a study of basilar artery stroke patients that found that a month after the stroke, one-third of patients were dead and one-third needed help for activities of daily living such as bathing, dressing and eating.
Most basilar artery strokes are caused by atherosclerosis (hardening of the arteries). The second-leading cause is clots.
Leading risk factors for basilar artery strokes are high blood pressure, diabetes, smoking, high cholesterol, coronary artery disease and peripheral vascular disease. Affected individuals tend to be over age 50. Basilar artery strokes are more common in men than in women.
Dr. Morales is an assistant professor, Dr. Flaster is an associate professor and Dr. Biller is a professor and chair in the Department of Neurology of Loyola University Chicago Stritch School of Medicine.
Horrific images from One Flew Over the Cuckoo’s Nest notwithstanding, modern electroconvulsive therapy (ECT) remains one of the safest and most effective antidepressant treatments, particularly for patients who cannot tolerate antidepressant medications or whose depression symptoms have failed to respond to them.
Since its introduction in the 1930s, ECT has evolved into a more refined, but more expensive and extensively regulated clinical procedure. Each treatment involves the assembly of a multidisciplinary clinical team and the use of a highly specialized device to deliver brief pulses of low dose electric currents to the brain. ECT is performed while the patient is under general anesthesia and, depending upon each individual’s response, is usually administered 2-3 times a week for 6-12 sessions.
A new study in Biological Psychiatry suggests that reductions in ECT treatment have an economic basis. From 1993 to 2009, there was a progressive decline in the number of hospitals offering ECT, resulting in an approximately 43% drop in the number of psychiatric inpatients receiving the treatment.
Using diagnostic and discharge codes from survey data compiled annually from US hospitals, researchers calculated the annual number of inpatient stays involving ECT and the annual number of hospitals performing the procedure.
Lead author Dr. Brady Case, from Bradley Hospital and Brown University, said, “Our findings document a clear decline in the capacity of US general hospitals - which provide the majority of inpatient mental health care in this country - to deliver an important treatment for some of their most seriously ill patients. Most Americans admitted to general hospitals for severe recurrent major depression are now being treated in facilities which do not conduct ECT.”
This is the consequence of an approximately 15-year trend in which psychiatric units appear to be discontinuing use of the procedure. The percentage of hospitals with psychiatric units that conduct ECT dropped from about 55% in 1993 to 35% in 2009, leading to large reductions in the number of inpatients receiving ECT.
Analyses of treatment for inpatients with severe, recurrent depression indicate the changes have equally affected inpatients with indications for ECT, such as psychotic depression, and those with relative medical contraindications, suggesting the declines have been clinically indiscriminate. By contrast, non-clinical patient factors such as residence in a poor neighborhood and lack of private insurance have remained important predictors of whether patients’ treating hospitals conduct ECT, raising concern about systemic barriers to ECT for the disadvantaged.
Where hospitals have continued to conduct the procedure, use has remained stable, indicating divergence in the care of patients treated in the large academic facilities most likely to conduct ECT and those treated elsewhere.
"Psychiatry has taken a step backward. The suffering and disability associated with antidepressant-resistant depression constitute a profound burden on the patient, their family, and society. ECT remains the gold standard treatment for treatment-resistant depression," commented Dr. John Krystal, Editor of Biological Psychiatry. "We must ensure that patients with the greatest need for definitive treatment have access to this type of care. ECT may be one of the oldest treatments for depression, but its role in treatment has been given new life in light of a generation of research that has outlined molecular signatures of ECT’s antidepressant efficacy."
Scientists have shed light on how mechanisms in the brain work to give us a sense of location. Research at the University of Edinburgh tracked electrical signals in the part of the brain linked to spatial awareness.
Sense of where we are
The study could help us understand how, if we know a room, we can go into it with our eyes shut and find our way around. This is closely related to the way we map out how to get from one place to another.
Brain’s electrical activity
Scientists found that brain cells, which code location through increases in electrical activity, do not do so by talking directly to each other. Instead, they can only send each other signals through cells that are known to reduce electrical activity. This is unexpected, as cells that reduce electrical signalling are often thought to simply suppress brain activity.
Rhythms of brain activity
The research also looked at electrical rhythms or waves of brain activity. Previous studies have found that spatial awareness is linked to not only the number and strength of electrical signals but also where on the electrical wave they occur.
The research shows that the indirect communication between nerve cells involved in spatial awareness also helps to explain how these electrical waves are generated. This finding is surprising because it suggests that the same cellular mechanisms allow our brains both to work out our location and to generate rhythmic waves of activity.
Spatial awareness and the brain’s electrical rhythms are known to be affected in conditions such as schizophrenia and Alzheimer’s disease. The scientists’ work could therefore help research in these areas.
Research
The study, funded by the Biotechnology and Biological Sciences Research Council, is published in the journal Neuron.
It looked at connections between nerve cells in the brain needed for spatial awareness in mice and then used computer modelling to recreate patterns of neural activity found in the brain.
Rhythms in brain activity are very mysterious and the research helps shed some light on this area as well as helping us understand how our brains code spatial information. It is particularly interesting that cells thought to encode location do not signal to each other directly but do so through intermediary cells. This is somewhat like members of a team not talking to each other, but instead sending messages via members of an opposing side. -Matt Nolan (Centre for Integrative Physiology)
As we age, it just may be the ability to filter and eliminate old information – rather than take in the new stuff – that makes it harder to learn, scientists report.
“When you are young, your brain is able to strengthen certain connections and weaken certain connections to make new memories,” said Dr. Joe Z. Tsien, neuroscientist at the Medical College of Georgia at Georgia Regents University and Co-Director of the GRU Brain & Behavior Discovery Institute.
It’s that critical weakening that appears hampered in the older brain, according to a study in the journal Scientific Reports.
The NMDA receptor in the brain’s hippocampus is like a switch for regulating learning and memory, working through subunits called NR2A and NR2B. NR2B is expressed in higher percentages in children, enabling neurons to talk a fraction of a second longer; make stronger bonds, called synapses; and optimize learning and memory. This formation of strong bonds is called long-term potentiation. The ratio shifts after puberty, so there is more NR2A and slightly reduced communication time between neurons.
When Tsien and his colleagues genetically modified mice to mimic the adult ratio – more NR2A, less NR2B – they were surprised to find the rodents were still good at making strong connections and short-term memories but had an impaired ability to weaken existing connections, called long-term depression, and, as a result, to make new long-term memories. This process is called information sculpting, and adult ratios of NMDA receptor subunits don’t appear to be very good at it.
“If you only make synapses stronger and never get rid of the noise or less useful information then it’s a problem,” said Tsien, the study’s corresponding author. While each neuron averages 3,000 synapses, the relentless onslaught of information and experiences necessitates some selective whittling. Insufficient sculpting, at least in their mouse, meant a reduced ability to remember things short-term – like the ticket number at a fast-food restaurant – and long-term – like remembering a favorite menu item at that restaurant. Both are impacted in Alzheimer’s and age-related dementia.
Not all long-term depression was lost in the mice; rather, just the response to the specific electrical stimulation levels that should induce weakening of the synapse. Tsien had expected to find the opposite: that long-term potentiation was weak, and so was the ability to learn and make new memories. “What is abnormal is the ability to weaken existing connectivity.”
Acknowledging the leap, this impaired ability could also help explain why adults can’t learn a new language without their old accent and why older people tend to be more stuck in their ways, the memory researcher said.
“We know we lose the ability to perfectly speak a foreign language if we learn that language after the onset of sexual maturity. I can learn English but my Chinese accent is very difficult to get rid of. The question is why,” Tsien said.
Tsien and his colleagues already have learned what happens when NR2B is overexpressed. He and East China Normal University researchers announced in 2009 the development of Hobbie-J, a smarter than average rat. A decade earlier, Tsien reported in the journal Nature the development of a smart mouse dubbed Doogie using the same techniques to over-express the NR2B gene in the hippocampus.
Doogie, Hobbie-J and their descendants have maintained superior memory as they age. Now Tsien is interested in following the NR2A over-expressing mouse to see what happens.
Scientists have long wondered how nerve cell activity in the brain’s hippocampus, the epicenter for learning and memory, is controlled — too much synaptic communication between neurons can trigger a seizure, and too little impairs information processing, promoting neurodegeneration. Researchers at Georgetown University Medical Center say they now have an answer. In the January 10 issue of Neuron, they report that synapses that link two different groups of nerve cells in the hippocampus serve as a kind of “volume control,” keeping neuronal activity throughout that region at a steady, optimal level.
"Think of these special synapses like the fingers of God and man touching in Michelangelo’s famous fresco in the Sistine Chapel," says the study’s senior investigator, Daniel Pak, PhD, an associate professor of pharmacology. "Now substitute the figures for two different groups of neurons that need to perform smoothly. The touching of the fingers, or synapses, controls activity levels of neurons within the hippocampus."
The hippocampus is a processing unit that receives input from the cortex and consolidates that information in terms of learning and memory. Neurons known as granule cells, located in the hippocampus’ dentate gyrus, receive transmissions from the cortex. Those granule cells then pass that information to the other set of neurons (those in the CA3 region of the hippocampus, in this study) via the synaptic fingers.
Those fingers dial up, or dial down, the volume of neurotransmission from the granule cells to the CA3 region to keep neurotransmission in the learning and memory areas of the hippocampus at an optimal flow — a concept known as homeostatic plasticity. “If granule cells try to transmit too much activity, we found, the synaptic junction tamps down the volume of transmission by weakening their connections, allowing the proper amount of information to travel to CA3 neurons,” says Pak. “If there is not enough activity being transmitted by the granule cells, the synapses become stronger, pumping up the volume to CA3 so that information flow remains constant.”
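The volume-control behavior Pak describes is an instance of negative feedback: when transmitted activity overshoots a set point, the synapse weakens; when it undershoots, the synapse strengthens. The toy simulation below sketches that general principle of homeostatic synaptic scaling. It is purely illustrative; the update rule, target rate, and all numbers are assumptions, not the authors’ model of mossy fiber synapses.

```python
def steady_activity(input_rate, target=10.0, gain=0.02, steps=500):
    """Toy homeostatic scaling: a synaptic weight w adjusts so that
    downstream activity (w * input_rate) settles at a target level.

    Each step nudges w opposite to the error between current activity
    and the target - weakening the synapse when activity is too high,
    strengthening it when activity is too low."""
    w = 1.0
    for _ in range(steps):
        activity = w * input_rate
        w += gain * (target - activity) / max(input_rate, 1e-9)
        w = max(w, 0.0)  # synaptic weights cannot go negative
    return w * input_rate

# Whether upstream granule-cell firing is strong or weak, downstream
# activity converges to roughly the same target level.
print(steady_activity(input_rate=40.0))
print(steady_activity(input_rate=2.0))
```

The point of the sketch is that a single feedback knob at the synapse keeps information flow constant without each downstream neuron having to regulate itself, echoing the paper’s train-engine analogy.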
There are many such touching fingers in the hippocampus, connecting the so-called “mossy fibers” of the granule cells to neurons in the CA3 region. But importantly, not every one of the billions of neurons in the hippocampus needs to set its own level of transmission from one nerve cell to the other, says Pak.
To explain, he uses another analogy. “It had previously been thought that neurons act separately like cars, each working to keep their speed at a constant level even though signal traffic may be fast or slow. But we wondered how these neurons could process learning and memory information efficiently, while also regulating the speed by which they process and communicate that information.
"We believe, based on our study, that only the mossy fiber synapses on the CA3 neurons control the level of activity for the hippocampus — they are like the engine on a train that sets the speed for all the other cars, or neurons, attached to it," Pak says. "That frees up the other neurons to do the job they are tasked with doing — processing and encoding information in the forms of learning and memory."
Not only does the study offer a new model for how homeostatic plasticity in the hippocampus can co-exist with learning and memory, it also suggests a new therapeutic avenue to help patients with uncontrollable seizures, he says.
"The CA3 region is highly susceptible to seizures, so if we understand how homeostasis is maintained in these neurons, we could potentially manipulate the system. When there is an excessive level of CA3 neuronal activity in a patient, we could learn how to therapeutically turn it down."

It is said that classical music can make children more intelligent, but when you look at the scientific evidence, the picture is more mixed.
You have probably heard of the Mozart effect. It’s the idea that if children or even babies listen to music composed by Mozart they will become more intelligent. A quick internet search reveals plenty of products to assist you in the task. Whatever your age there are CDs and books to help you to harness the power of Mozart’s music, but when it comes to scientific evidence that it can make you more clever, the picture is more mixed.
The phrase “the Mozart effect” was coined in 1991, but it is a study described two years later in the journal Nature that sparked real media and public interest about the idea that listening to classical music somehow improves the brain. It is one of those ideas that feels plausible. Mozart was undoubtedly a genius himself, his music is complex and there is a hope that if we listen to enough of it, a little of that intelligence might rub off on us.
The idea took off, with thousands of parents playing Mozart to their children, and in 1998 Zell Miller, the Governor of the state of Georgia in the US, even asked for money to be set aside in the state budget so that every newborn baby could be sent a CD of classical music. It’s not just babies and children who were deliberately exposed to Mozart’s melodies. When Sergio Della Sala, the psychologist and author of the book Mind Myths, visited a mozzarella farm in Italy, the farmer proudly explained that the buffalos were played Mozart three times a day to help them to produce better milk.
I’ll leave the debate on the impact on milk yield to farmers, but what about the evidence that listening to Mozart makes people more intelligent? Exactly what was it that the authors of the initial study discovered that took the public imagination by storm?
When you look back at the original paper, the first surprise is that the authors from the University of California, Irvine are modest in their claims and don’t even use the “Mozart effect” phrase in the paper. The second surprise is that it wasn’t conducted on children at all: it was in fact conducted with those stalwarts of psychological studies – young adult students. Only 36 students took part. On three occasions they were given a series of mental tasks to complete, and before each task, they listened either to ten minutes of silence, ten minutes of a tape of relaxation instructions, or ten minutes of Mozart’s sonata for two pianos in D major (K448).
The students who listened to Mozart did better at tasks where they had to create shapes in their minds. For a short time the students were better at spatial tasks where they had to look at folded-up pieces of paper with cuts in them and predict how they would appear when unfolded. But unfortunately, as the authors made clear at the time, the effect lasted only about fifteen minutes. So it’s hardly going to bring you a lifetime of enhanced intelligence.
Brain arousal
Nevertheless, people began to theorise about why it was that Mozart’s music in particular could have this effect. Did the complexity of music cause patterns of cortical firing in the brain similar to those associated with solving spatial puzzles?
More research followed, and a meta-analysis of sixteen different studies confirmed that listening to music does lead to a temporary improvement in the ability to manipulate shapes mentally, but the benefits are short-lived and it doesn’t make us more intelligent.
Then it began to emerge that perhaps Mozart wasn’t so special after all. In 2010 a larger meta-analysis of a greater number of studies again found a positive effect, but that other kinds of music worked just as well. One study found that listening to Schubert was just as good, and so was hearing a passage read out aloud from a Stephen King novel. But only if you enjoyed it. So, perhaps enjoyment and engagement are key, rather than the exact notes you hear.
Although we tend to associate the Mozart effect with babies and small children, most of these studies were conducted on adults, whose brains are of course at a very different stage of development. But in 2006 a large study was conducted in Britain involving eight thousand children. They listened either to ten minutes of Mozart’s String Quintet in D Major, to a discussion about the experiment, or to a sequence of three pop songs: Blur’s “Country House,” Mark Morrison’s “Return of the Mack,” and PJ and Duncan’s “Stepping Stone.” Once again music improved the ability to predict paper shapes, but this time it wasn’t a Mozart effect, but a Blur effect. The children who listened to Mozart did well, but with pop music they did even better, so prior preference could come into it.
Whatever your musical choice, it seems that all you need in order to do a bit better at predictive origami is some cognitive arousal. Your mind needs to get a little more active; it needs something to get it going, and that’s going to be whichever kind of music appeals to you. In fact, it doesn’t have to be music. Anything that makes you more alert should work just as well – doing a few star jumps or drinking some coffee, for instance.
There is a way in which music can make a difference to your IQ, though. Unfortunately it requires a bit more effort than putting on a CD. Learning to play a musical instrument can have a beneficial effect on your brain. Jessica Grahn, a cognitive scientist at Western University in London, Ontario, says that a year of piano lessons, combined with regular practice, can increase IQ by as much as three points.
So listening to Mozart won’t do you or your children any harm and could be the start of a life-long love of classical music. But unless you and your family have some urgent imaginary origami to do, the chances are that sticking on a sonata is not going to make you better at anything.
An experimental oral drug given to mice after a spinal cord injury was effective at improving limb movement after the injury, a new study shows.
The compound efficiently crossed the blood-brain barrier, did not increase pain and showed no toxic effects to the animals.
“This is a first to have a drug that can be taken orally to produce functional improvement with no toxicity in a rodent model,” said Sung Ok Yoon, associate professor of molecular & cellular biochemistry at Ohio State University and lead author of the study. “So far, in the spinal cord injury field with rodent models, effective treatments have included more than one therapy, often involving invasive means. Here, with a single agent, we were able to obtain functional improvement.”
The small molecule in this study was tested for its ability to prevent the death of cells called oligodendrocytes. These cells surround and protect axons, long projections of a nerve cell, by wrapping them in myelin. In addition to functioning as axon insulation, myelin allows for the rapid transmission of signals between nerve cells.
The drug preserved oligodendrocytes by inhibiting the activation of a protein called p75. Yoon’s lab previously discovered that p75 is linked to the death of these specialized cells after a spinal cord injury. When they die, axons that are supported by them degenerate.
“Because we know that oligodendrocytes continue to die for a long period of time after an injury, we took the approach that if we could put a brake on that cell death, we could prevent continued degeneration of axons,” she said. “Many researchers in the field are focusing on regeneration of neurons, but we specifically targeted a different type of cells because it allows a relatively long therapeutic window.”
An additional benefit of targeting oligodendrocytes is that it can amplify the therapeutic effect because a single oligodendrocyte myelinates multiple axons.
A current acute treatment for humans, methylprednisolone, must be administered within eight, but not more than 24, hours after the injury to be effective at all. An estimated 1.3 million people in the United States are living with spinal cord injuries, experiencing paralysis and complications that include bladder, bowel and sexual dysfunction and chronic pain.
The experimental drug, called LM11A-31, was developed by study co-author Frank Longo, professor of neurology and neurological sciences at Stanford University. The drug is the first to be developed with a specific target, p75, as a potential therapy for spinal cord injury.
The research is published in the Jan. 9, 2013, issue of The Journal of Neuroscience.
Researchers gave three different oral doses of LM11A-31, as well as a placebo, to different groups of mice beginning four hours after injury and then twice daily for a 42-day experimental period. The scientists analyzed the compound’s effectiveness at improving limb movement and preventing myelin loss.
The spinal cord injuries in mice mimicked those caused in humans by the application of extensive force and pressure, resulting in loss of hind-limb and bladder function and experimentally calibrated baseline difficulty in walking and swimming.
The researchers determined that the mice did not experience more pain than the placebo group at all the doses tested, suggesting that LM11A-31 does not worsen nerve pain after spinal cord injury.
Analysis showed that the extent of myelin sparing was dependent on the dose of the drug. Each dose – 10, 25 or 100 milligrams per kilogram of body weight – led to increasing myelin sparing, with the highest dose demonstrating the greatest effect.
The injury in the animals caused a loss of about 75 percent of myelinated axons in the lesion area in the placebo group. This loss was reduced so that myelinated axons reached more than half of the normal levels with LM11A-31 at 100 mg/kg. That was correlated with about a 50 percent increase in surviving oligodendrocytes compared to those in the placebo group, Yoon said.
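As a back-of-the-envelope check on those figures — absolute axon counts are not given in the article, so the only inputs here are the quoted percentages, normalized to uninjured levels:

```python
# Back-of-the-envelope check on the myelin-sparing figures quoted above.
# Only the article's percentages are used; absolute axon counts are unknown,
# so everything is expressed as a fraction of the normal (uninjured) level.

normal = 1.0           # uninjured myelinated-axon level, normalized
placebo_loss = 0.75    # ~75% of myelinated axons lost in the placebo group

placebo_spared = normal - placebo_loss   # fraction surviving untreated
treated_spared = 0.5                     # "more than half of normal" at 100 mg/kg

print(placebo_spared)                    # 0.25
print(treated_spared / placebo_spared)   # 2.0 -> roughly double the sparing
```

In other words, the highest dose roughly doubled the fraction of surviving myelinated axons relative to placebo, which is consistent with the behavioral improvements appearing only at that dose.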
In behavior tests, only the highest dose of the compound led to improvements in motor function. Mice were tested in both weight-bearing and non-weight-bearing activities over the 42 days to evaluate their functional recovery.
Mice receiving the highest dose could walk with well-coordinated steps. In swimming tests, scientists saw similar improvements, with mice receiving the highest dose most able to coordinate hind-limb crisscross movement. The other treatment groups exhibited difficulty in walking and swimming.
Yoon said the findings may suggest that myelin sparing needs to reach a threshold of roughly 50 percent of normal levels before motor function improvements become measurable.
“The cellular analysis of the myelin profile detects small changes. Behavior is more complex, and we don’t think functional behavior necessarily improves in a linear fashion,” she said. “Still, these results clearly show that this is the first oral drug in spinal cord injury that works alone to improve function.”
University of Florida researchers and colleagues have identified a protein that, when absent, helps the body burn fat and prevents insulin resistance and obesity. The findings from the National Institutes of Health-funded study were published online ahead of print Sunday, Jan. 6, in the journal Nature Medicine.
The discovery could aid development of drugs that not only prevent obesity, but also spur weight loss in people who are already overweight, said Dr. Stephen Hsu, one of the study’s corresponding authors and a principal investigator with the UF Sid Martin Biotechnology Development Institute.
One-third of adults and about 17 percent of children in the United States are obese, according to the Centers for Disease Control and Prevention. Although unrelated studies have shown that lifestyle changes such as choosing healthy food over junk food and increasing exercise can help reduce obesity, people are often unable to maintain these changes over time, Hsu said.
“The problem is when these studies end and the people go off the protocols, they almost always return to old habits and end up eating the same processed foods they did before and gain back the weight they lost during the study,” he said. Developing drugs that target the protein, called TRIP-Br2, and mimic its absence may allow for the prevention of obesity without relying solely on lifestyle modifications, Hsu said.
First identified by Hsu, TRIP-Br2 helps regulate how fat is stored in and released from cells. To understand its role, the researchers compared mice that lacked the gene responsible for production of the protein with normal mice that had the gene.
They quickly discovered that mice missing the TRIP-Br2 gene did not gain weight no matter what they ate — even when placed on a high-fat diet — and were otherwise normal and healthy. On the other hand, the mice that still made TRIP-Br2 gained weight and developed associated problems such as insulin resistance, type 2 diabetes and high cholesterol when placed on a high-fat diet. The normal and fat-resistant mice ate the same amount of food, ruling out differences in food intake as a reason why the mice lacking TRIP-Br2 were leaner.
“We had to explain why the animals eating so much fat were remaining lean and not getting high cholesterol. Where was this fat going?” Hsu said. “It turns out this protein is a master regulator. It coordinates expression of a lot of genes and controls the release of the fuel form of fat and how it is metabolized.”
When functioning normally, TRIP-Br2 restricts the amount of fat that cells burn as energy. But when TRIP-Br2 is absent, a fat-burning fury seems to occur in fat cells. Although other proteins have been linked to the storage and release of fat in cells, TRIP-Br2 is unique in that it regulates how cells burn fat in a few different ways, Hsu said. When TRIP-Br2 is absent, fat cells dramatically increase the release of free fatty acids and also burn fat to produce the molecular fuel called ATP that powers mitochondria — the cell’s energy source. In addition, cells free from the influence of TRIP-Br2 start using free fatty acids to generate thermal energy, which protects the body from exposure to cold.
“TRIP-Br2 is important for the accumulation of fat,” said Dr. Rohit N. Kulkarni, also a senior author of the paper and an associate professor of medicine at Harvard Medical School and the Joslin Diabetes Center. “When an animal lacks TRIP-Br2, it can’t accumulate fat.”
Because the studies were done mostly in mice, additional studies are still needed to see if the findings translate to humans.
“We are very optimistic about the translational promise of our findings because we showed that only human subjects who had the kind of fat (visceral) that becomes insulin-resistant also had high protein levels of TRIP-Br2,” Hsu said.
“Imagine you are able to develop drugs that pharmacologically mimic the complete absence of TRIP-Br2,” Hsu said. “If a patient started off fat, he or she would burn the weight off. If people are at risk of obesity and its associated conditions, such as type 2 diabetes, it would help keep them lean regardless of how much fat they ate. That is the ideal anti-obesity drug, one that prevents obesity and helps people burn off excess weight.”
Scientists have struggled to understand why Huntington’s disease, which is caused by a single gene mutation, can produce such variable symptoms. An authoritative review by a group of leading experts summarizes the progress in relating cell loss in the striatum and cerebral cortex to symptom profile in Huntington’s disease, suggesting a possible direction for developing targeted therapies. The article is published in the latest issue of the Journal of Huntington’s Disease.
Huntington’s disease (HD) is an inherited progressive neurological disorder for which there is presently no cure. It is caused by a dominant mutation in the HD gene leading to expression of mutant huntingtin (HTT) protein. Expression of mutant HTT causes subtle changes in cellular functions, which ultimately results in jerking, uncontrollable movements, progressive psychiatric difficulties, and loss of mental abilities.
Although it is caused by a single gene, there are major variations in the symptoms of HD. The pattern of symptoms shown by each individual during the course of the disease can differ considerably and present as varying degrees of movement disturbances, cognitive decline, and mood and behavioral changes. Disease duration is typically between ten and twenty years.
Recent investigations have focused on what the presence of the defective gene does to various structures in the brain and understanding the relationship between changes in the brain and the variability in symptom profiles in Huntington’s disease.
Analyses of post-mortem human HD tissue suggest that the variation in clinical symptoms in HD is strongly associated with the variable pattern of neurodegeneration in two major regions of the brain, the striatum and the cerebral cortex. The neurodegeneration of the striatum generally follows an ordered and topographical distribution, but comparisons of post-mortem human HD tissue and in vivo neuroimaging reveal that the disease produces a striking bilateral atrophy of the striatum, which in these recent studies has been found to be highly variable.
“What is especially interesting is that recent findings suggest that the pattern of striatal cell death shows regional differences between cases in the functionally and neurochemically distinct striosomal and matrix compartments of the striatum which correspond with symptom variation,” says author Richard L.M. Faull, MB, ChB, PhD, DSc, Director of the Centre for Brain Research, University of Auckland, New Zealand.
“Our own recent detailed quantitative study using stereological cell counting in the post-mortem human HD cortex has complemented and expanded the neuroimaging studies by providing a cortical cellular basis of symptom heterogeneity in HD,” continues Dr Faull. “In particular, HD cases which were dominated by motor dysfunction showed a major total cell loss (28% loss) in the primary motor cortex but no cell loss in the limbic cingulate cortex, whereas cases where mood symptoms predominated showed a total of 54% neuronal loss in the limbic cingulate cortex but no cell loss in the motor cortex. This suggests that the variable neuronal loss and alterations in the circuitry of the primary motor cortex and anterior cingulate cortex associated with the variable compartmental pattern of cell degeneration in the striatum contribute to the differential impairments of motor and mood functions in HD.”
The authors note that there are still questions to be answered in the field of HD pathology, such as, how and when pathological neuronal loss occurs; whether the progressive loss of neurons in the striatum is the primary process or is consequential to cortical cell dysfunction; and how these changes relate to symptom profiles.
“What is clear however is that the diverse symptoms of HD patients appear to relate to the heterogeneity of cell loss in both the striatum and cerebral cortex,” the authors conclude. “While there is currently no cure, this contemporary evidence suggests that possible genetic therapies aimed at HD gene silencing should be directed towards intervention at both the cerebral cortex and the striatum in the human brain. This poses challenging problems requiring the application of gene silencing therapies to quite widespread regions of the forebrain which may be assisted via CSF delivery systems using gene suppression agents that cross the CSF/brain barrier.”