More study is needed, but isoflurane might provide an alternative to electroconvulsive therapy
Although electroconvulsive therapy (ECT) has long been considered the most effective treatment of medication-resistant depression, millions of people who could benefit don’t take advantage of it because of the treatment’s side effects and public misperception of the procedure.
If the results of a campus-wide collaboration of University of Utah researchers are borne out by larger studies and trials, patients with refractory depression might one day have an alternative that is as effective as ECT but without the side effects: the surgical anesthetic isoflurane.
“We need to expand our research into a larger, multicenter trial, but if the results of our pilot study pan out, it would change the face of treating depression,” says Howard R. Weeks, M.D., assistant professor of psychiatry and first author on a study published July 26, 2013, in PLOS ONE online.
Also known as shock therapy, ECT is effective in 55 percent to 90 percent of depression cases, with significant reductions in symptoms typically occurring within two to four weeks. When medications work, they can take six to eight weeks to become effective. But ECT is associated with side effects including amnesia, concentration and attention problems, and other cognitive issues. Many people also mistakenly believe ECT is painful and causes brain damage, which has given the treatment a social stigma that makes millions of patients reluctant to have it. Isoflurane potentially offers an alternative to ECT that could help many of those people, according to Weeks and his colleagues from eight University of Utah departments and programs.
In a pilot study comparing 20 patients who received ECT with eight patients who received isoflurane, the researchers found that both therapies provided significant reductions in symptoms of depression. Immediately following the treatments, ECT patients showed declines in areas of memory, verbal fluency, and processing speed. Most of these ECT-related deficits resolved by four weeks. However, autobiographical memory, or recall of personal life events, remained below pretreatment levels for ECT patients four weeks after the treatment. In contrast, the patients treated with isoflurane showed no such impairment; instead, they had greater improvements in cognitive testing than ECT patients both immediately and four weeks after the treatments.
In the mid-1980s, researchers in Europe studied isoflurane as a potential depression therapy. Later studies by other scientists failed to confirm the results of the original work, and isoflurane research fell out of favor. But these later studies didn’t adhere to the first study’s protocol regarding the type of anesthetic, dosing, and number of treatments, according to Weeks, and he believes that’s why isoflurane’s antidepressant effects weren’t confirmed in subsequent trials. For their research, Weeks and his University of Utah colleagues followed the original study’s protocol.
“Our data reconfirm that isoflurane had an antidepressant effect approaching ECT with less adverse neurocognitive effects, and reinforce the need for a larger clinical trial,” the researchers wrote.
Researchers don’t know what produces the relief of depression symptoms from ECT or isoflurane. Weeks believes further research might identify a molecular pathway that both therapies target and is responsible for the improvement in depression. One common effect of both ECT and isoflurane treatments is a brief state of low electrical activity in which the brain becomes unusually quiet. ECT induces a seizure to reach that state, but isoflurane does not. After inhaling the anesthesia, patients are “under” for about 45 minutes, with 15 minutes of that time being a deep state of unconsciousness, according to Weeks. This period of electrical rest for the brain may be a potential explanation for why ECT and isoflurane improve depression.
If isoflurane proves to be a viable alternative to ECT, a device invented by three University of Utah anesthesiology faculty members could make the anesthetic an even more attractive therapy. The Aneclear™ device (Anecare, Salt Lake City, UT), invented by Dwayne R. Westenskow, Ph.D., Derek J. Sakata, M.D., and Joseph A. Orr, Ph.D., of the University of Utah Department of Anesthesiology, uses hyperventilation while allowing patients to rebreathe their own carbon dioxide (CO2). Hyperventilation flushes anesthetic from the lungs, and the rebreathed CO2 maintains blood flow to the brain, speeding removal of the anesthetic. The Aneclear™ also minimizes or even eliminates the vomiting, nausea, and extreme fatigue that some patients experience from anesthesia.
“With the Aneclear™, we can wake people up from the anesthesia much quicker,” Weeks says. “This makes the treatment a potentially viable clinical treatment by reducing the time required in an operating room.”
Weeks and his co-researchers now are looking for grants to fund a larger study that will include several U.S. centers.
Statins, a class of cholesterol-lowering drugs found in millions of medicine cabinets, may help treat Rett Syndrome, according to a study published today in Nature Genetics. The Rett Syndrome Research Trust (RSRT) funded this work with generous support from the Rett Syndrome Research Trust UK and Rett Syndrome Research & Treatment Foundation.
Rett Syndrome is a neurological disorder that affects girls. A seemingly typical toddler begins to miss developmental milestones. A regression follows as young girls lose speech, mobility, and hand use. Many girls have seizures, orthopedic and severe digestive problems, as well as breathing and other autonomic impairments. Most live into adulthood and require total, round-the-clock care. Rett Syndrome affects about 1 in 10,000 girls born in the U.S. each year.
The new study screened for randomly induced mutations in genes that modify the effect of the Rett gene, MECP2 (methyl-CpG-binding protein 2), in a mouse model. MECP2 turns other genes on or off by acting on chromatin, the DNA-protein mix that makes up chromosomes.
The challenge of treating Rett Syndrome is what drove senior author Monica Justice, Ph.D., Professor in the Departments of Molecular and Human Genetics and Molecular Physiology and Biophysics at the Baylor College of Medicine, to look beyond MECP2, hoping to find new drug targets that might improve symptoms or even reverse the course of the disease. In 2007, Adrian Bird, Ph.D., Buchanan Professor of Genetics at the Wellcome Trust Centre for Cell Biology at the University of Edinburgh, showed that symptoms in mice are reversible regardless of the age of the animal.
Exploring cholesterol metabolism in neurological diseases is an emerging area, with statin drugs being tested in fragile X syndrome, neurofibromatosis, amyotrophic lateral sclerosis, and other conditions. But it hadn’t been on the radar for Rett Syndrome. “Our screen was to see if we could suppress the symptoms to reveal alternative pathways to treatment. The cholesterol hit was a big one,” Dr. Justice said. The screen was unbiased – the researchers were looking for any gene that would interact with MECP2 in a useful way, rather than employing a candidate gene approach based on hypotheses.
Dr. Justice and her team injected healthy male mice with a chemical called ENU (a form of nitrosourea) that mutates sperm stem cells randomly, then mated the males to Rett females. The researchers then looked for offspring that should have developed the syndrome (according to their genes), but didn’t (according to their good health).
Key to the investigation was being able to tell sick mice from healthy ones. Fortunately this turned out to be easy. The rescued mice didn’t develop the characteristic tremor, trouble breathing, poor limb-clasping, and general scruffiness of their affected cage-mates. They moved around more, performed better on mobility tests and lived longer.
Once the rescued mice had been identified, the random mutations had to be pinpointed among the roughly 24,000 genes that make up the mouse genome. “With next generation DNA sequencing, we are finding mutations so easily and quickly. It’s amazing,” said Dr. Justice, contrasting this with the old days of setting up many more generations of crosses to narrow down the part of the genome harboring a gene of interest.
“We are only 15% of the way through the screen, and so far we have identified 5 modifiers. The most drug-targetable is a gene called squalene epoxidase (Sqle), which encodes a rate-limiting enzyme in the cholesterol biosynthetic pathway. Frankly, this discovery was a surprise,” Dr. Justice said. It’s important to note that this enzyme is different from the rate-limiting enzyme (HMG-CoA reductase) influenced by statin drugs.
Cholesterol is of course best known for its negative effects on the cardiovascular system, but the lipid has multiple roles in the brain: it helps to form the myelin insulation on neurons and takes part in membrane trafficking, dendrite remodeling, synapse formation, signal transduction, and neuropeptide synthesis.
The next step was to test several statins (fluvastatin and lovastatin) on Rett mice. Like the Sqle mutation, the drugs improved symptoms. Treated mice performed well on mobility and gross motor tests, had better overall health scores and lived longer. The drugs didn’t, however, improve breathing.
“When we saw the mutation in a cholesterol pathway enzyme, we immediately thought of statin drugs. Now that our eyes have opened to what is going on, we have a multitude of drugs that modulate lipid metabolism that we can try in addition to statins,” said first author Christie Buchovecky, graduate student in the Justice lab.
With additional RSRT funding, Dr. Sasha Djukic, pediatric neurologist and Director of the Tri-State Rett Syndrome Center in the Bronx, undertook a detailed review of lipid data in girls with Rett Syndrome. She found that a subset of girls have elevated cholesterol levels that normalize as they age. These data are not included in the Nature Genetics publication but will be part of a subsequent paper. Dr. Djukic is now planning a clinical trial.
Drs. Justice and Djukic caution that carefully designed and rigorously executed clinical trials are essential to test whether what works in mice will also work in girls with Rett Syndrome. Clinical trials should also determine the most effective timeframe for treatment, ways to identify which girls are most likely to respond (for example, will statins help girls with Rett who do not have elevated cholesterol?), which drugs to trial, and what dosages are effective but not toxic.
“Although statins are blockbuster drugs taken by a large percentage of the population, they are not without risks and side effects, and data on statins in the general pediatric population are quite limited. One of the key objectives of the clinical trial will be to determine correct dosages for Rett symptoms. It’s important to note that the mice in Dr. Justice’s study received very low doses of statins. I urge parents to resist any temptation to medicate their children with off-label statins,” cautions Dr. Djukic. “The only way to know if this class of drugs will be efficacious in Rett is through controlled trials. Working with Dr. Justice and RSRT we will be bringing families additional information as soon as possible.”
“The biggest finding is the discovery that this pathway is so important to the pathology of the disorder; it suggests new directions for trying to learn more about Rett Syndrome,” Dr. Justice explains. “Emerging evidence from both mice and humans suggest that Rett Syndrome may have a component of disease that is metabolic. Certainly, this study will further clarify our data, and may suggest avenues for treatment that were previously unexplored.”
Neuroscientist Sarah Laszlo wants to understand what’s going on in children’s brains when they’re reading. Her research may untangle some of the mysteries surrounding dyslexia and lead to new methods of treating America’s most common learning disorder.

“The brain can reveal things that aren’t necessarily visible on the surface,” she says. “It can tell you things about what’s going wrong that you can’t find out by giving a kid a test or asking him to read out loud.”
Laszlo, who joined Binghamton’s psychology department in 2011, recently received a five-year, $400,763 grant from the National Science Foundation’s Early Career Development (CAREER) Program, the agency’s most prestigious award for young researchers. The funding will enable her to conduct a five-year brain activity study of 150 children with and without dyslexia.
Rather than lumping all children with dyslexia into one group, as many previous brain-imaging studies have done, Laszlo’s project will help to establish types and degrees of the disorder.
Her lab uses electroencephalography, or EEG, as a non-invasive way to measure the electrical signals sent between brain cells when they’re communicating with each other. Study participants — kids in kindergarten through fourth grade — wear a cap outfitted with special sensors while playing a computerized reading game.
These scans produce massive amounts of data: the cap’s 10 sensors collect readings 500 times per second for 45 minutes. That’s one reason brain activity studies are expensive and time-consuming. It’s also why a study of just 150 children ranks as the largest of its kind.
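The scale of the data is easy to check with quick arithmetic, using only the sampling figures quoted above:

```python
# Rough data volume for one 45-minute EEG session, using the figures
# quoted in the article: 10 sensors, 500 readings per second.
sensors = 10
samples_per_second = 500
duration_seconds = 45 * 60  # 45 minutes

total_readings = sensors * samples_per_second * duration_seconds
print(total_readings)  # 13500000 raw readings per child per session
```

Multiplied across 150 children and repeated sessions over five years, that is billions of data points to store and analyze.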
Kara Federmeier, a professor of psychology at the University of Illinois, says it’s not just the scale of the study that’s impressive; it’s also the project’s duration. “Sarah will be able to assess how the brain transitions from immature reading processes to mature reading processes,” Federmeier says. “Her project promises to provide important, novel data that may be critical for informing educational practices about teaching reading and clinical practices for assessing reading-related difficulties.”
Why study this disorder in particular? Laszlo notes that there are significant, sometimes lifelong consequences of growing up with dyslexia. Many dyslexic children don’t do as well in school as they might otherwise, which limits their career opportunities. Some also encounter social problems. “This has the potential to help a lot of people,” she says.
Laszlo hopes to identify the brain signatures of people with dyslexia and have a clear idea of how to help them. “Once you understand what’s going on in the brain,” she says, “you can do a better job of designing treatments.”
Today, the best-case scenario is that children with dyslexia receive interventions that enable them to get up to speed on reading aloud. But they may continue to lag behind their peers when it comes to comprehension, fluency and speed. “The treatments we have now don’t always fix the underlying problem,” Laszlo says. “They just put a Band-Aid on it. And when you go to do more complicated things, like reading larger passages, the Band-Aid doesn’t help.”
How to Participate
Participants in Sarah Laszlo’s Reading Brain Project play a computerized reading game while researchers measure their brain activity. Children in kindergarten through fourth grade are eligible for the Binghamton University study and will receive $50 or an equivalent gift for their time. To sign up your child, call 607-269-7271 or e-mail readingbrain@binghamton.edu. For more details, visit www.binghamton.edu/reading-brain.
Researchers at McMaster University have discovered a solution to a long-standing medical mystery in Huntington’s disease (HD).
HD is a brain disease that affects about 1 in 7,000 people, typically striking in mid-life and causing a progressive loss of brain cells at the centre of the brain. Researchers have known the exact DNA change that causes Huntington’s disease since 1993, but the mutation typically seen in patients does not produce disease in animal models. This has made drug discovery difficult.
In this week’s issue of the science journal the Proceedings of the National Academy of Sciences, professor Ray Truant’s laboratory at McMaster University’s Department of Biochemistry and Biomedical Sciences of the Michael G. DeGroote School of Medicine reveals how it developed a way to measure the shape of the huntingtin protein inside living cells. The researchers discovered that the mutant huntingtin protein that causes the disease changes shape. This is the first time anyone has been able to see differences between normal and disease huntingtin carrying the DNA defects typical of HD patients.
They went on to show that they can measure this shape change in cells derived from the skin cells of living Huntington’s disease patients.
“With mouse models, we know that some drugs can stop, and even reverse Huntington’s disease, but now we know exactly why,” said Truant. “The huntingtin protein has to take on a precise shape, in order to do its job in the cell. In Huntington’s disease, the right parts of the protein can’t line up to work properly. It’s like trying to use a paperclip after someone has bent it out of shape.”
The research also shows that the shape of disease huntingtin protein can be changed back to normal with chemicals that are in development as drugs for HD. “We can refold the paper clip,” said Truant.
The methods they developed have been scaled up for large-scale robotic drug screening, which is now ongoing with a pharmaceutical company looking for drugs that can enter the brain more easily. Furthermore, the team can tell whether the shape of huntingtin has been corrected in patients undergoing drug trials, without waiting years to learn whether the disease course has been affected.
This research was a concerted effort drawing on many sources: funding from the Canada Foundation for Innovation and the Ontario Innovation Trust for an $11M microscopy centre at McMaster in 2006, ongoing support from the Canadian Institutes of Health Research, and important funding from the Toronto-based Krembil Foundation. The project was initiated with charitable grant support from the Huntington Society of Canada, which allowed the team to show the method was promising enough to merit further support.
The last piece of the puzzle was from the Huntington’s disease patient community, with skin cell donations from living patients and unaffected spouses that allowed the team to look at real human disease.
More information about Huntington’s Disease can be found at HDBuzz.net, a global website in eleven languages that takes primary published research articles and explains them in plain language to more than 300,000 non-scientists per month.
Eight other diseases have a DNA defect similar to Huntington’s disease. Truant’s group is now using similar tools to develop assays that measure shape changes in those diseases, to see whether this shapeshifting is common to them as well.
It happens to all of us at least once each winter in Montreal. You’re walking on the sidewalk and before you know it you are slipping on a patch of ice hidden under a dusting of snow. Sometimes you fall. Surprisingly often you manage to recover your balance and walk away unscathed. McGill researchers now understand what’s going on in the brain when you manage to recover your balance in these situations. And it is not just a matter of good luck.
Prof. Kathleen Cullen and her PhD student Jess Brooks of the Department of Physiology have identified a distinct and surprisingly small cluster of cells deep within the brain that reacts within milliseconds to readjust our movements when something unexpected happens, whether it is slipping on ice or hitting a rock when skiing. What is astounding is that each individual neuron in this tiny region, smaller than the head of a pin, displays the ability to predict and selectively respond to unexpected motion.
This finding both overturns current theories about how we learn to maintain our balance as we move through the world, and also has significant implications for understanding the neural basis of motion sickness.
Scientists have theorized for some time that we fine-tune our movements and maintain our balance, thanks to a neural library of expected motions that we gain through “sensory conflicts” and errors. “Sensory conflicts” occur when there is a mismatch between what we think will happen as we move through the world and the sometimes contradictory information that our senses provide to us about our movements.
This kind of “sensory conflict” may occur when our bodies detect motion that our eyes cannot see (such as during plane, ocean or car travel), or when our eyes perceive motion that our bodies cannot detect (such as during an IMAX film, when the camera swoops at high speed over the edge of steep cliffs and deep into gorges and valleys while our bodies remain sitting still). These “sensory conflicts” are also responsible for the feelings of vertigo and nausea that are associated with motion sickness.
But while the areas of the brain involved in estimating spatial orientation have been identified for some time, until now no one had been able either to show that distinct neurons signaling “sensory conflicts” exist or to demonstrate exactly how they work. “We’ve known for some time that the cerebellum is the part of the brain that takes in sensory information and then causes us to move or react in appropriate ways,” says Prof. Cullen. “But what’s really exciting is that for the first time we show very clearly how the cerebellum selectively encodes unexpected motion, to then send our body messages that help us maintain our balance. That it is such a very exact neural calculation is exciting and unexpected.”
By demonstrating that these “sensory conflict” neurons both exist and function by making choices “on the fly” about which sensory information to respond to, Cullen and her team have made a significant advance in our understanding of how the brain works to keep our bodies in balance as we move about.
The research was done by recording brain activity in macaque monkeys who were engaged in performing specific tasks while at the same time being unexpectedly moved around by flight-simulator style equipment.
A small percentage of people diagnosed with a mysterious neurological condition may only experience psychiatric changes - such as delusional thinking, hallucinations, and aggressive behavior - according to a new study by researchers in the Perelman School of Medicine at the University of Pennsylvania. In addition, some people who had previously been diagnosed with this disease, called anti-NMDA receptor (anti-NMDAR) encephalitis, had relapses that involved only psychiatric symptoms. In an article published Online First in JAMA Neurology, researchers suggest that, while isolated psychiatric episodes are rare in anti-NMDAR encephalitis cases, abnormal test findings or subtle neurological symptoms should prompt screening for the condition, as it is treatable with immunotherapies.
Within a large group of 571 patients with confirmed Anti-NMDAR Encephalitis, only 23 patients (4 percent) had isolated psychiatric episodes. Of the 23, 5 patients experienced the onset of behavior changes as their only symptoms, without neurological changes, while 18 patients had psychiatric symptoms emerge at the outset of a relapse of Anti-NMDAR Encephalitis in which no neurological changes were identified. After being treated for the condition, 83 percent of these patients recovered substantially or completely.
"While many patients with Anti-NMDAR Encephalitis present with isolated psychiatric symptoms, most of these patients subsequently develop, in a matter of days, additional neurological symptoms which help to make the diagnosis of the disease. In the current study, we found that a small percentage of patients do not develop neurological symptoms, or sometimes these are very subtle and transitory. Studies using brain MRI and analysis of the cerebrospinal fluid may help to demonstrate signs of inflammation," said Josep Dalmau, MD, PhD, adjunct professor of Neurology. "For patients who have been previously diagnosed with Anti-NMDAR Encephalitis and are in remission, any behavior change may represent a relapse and should be tested quickly and treated aggressively."
Anti-NMDAR Encephalitis is one of the most common forms of autoimmune encephalitis, and its symptoms can include psychiatric symptoms, memory issues, speech disorders, seizures, involuntary movements, and loss of consciousness. In an earlier Penn Medicine study, 38 percent of all patients (and 46 percent of females with the condition) were found to have a tumor, most commonly an ovarian tumor. When correctly diagnosed early, Anti-NMDAR Encephalitis can be treated effectively.
“For patients with new psychotic symptoms that are evaluated in centers where an MRI, EEG or spinal fluid test may not have been administered, there is a chance that Anti-NMDAR Encephalitis may be missed,” said lead author Matthew Kayser, MD, PhD, postdoctoral fellow and attending physician in Psychiatry at Penn. “However, the likelihood of pure or isolated new-onset psychosis to be anti-NMDAR encephalitis gradually decreases if no other symptoms emerge during the first 4 weeks of psychosis.”
Anti-NMDAR Encephalitis was first characterized by Penn’s Josep Dalmau, MD, PhD, adjunct professor of Neurology, and David R. Lynch, MD, PhD, associate professor of Neurology and Pediatrics, in 2007. One year later, in December 2008, the same investigators, in collaboration with Rita Balice-Gordon, PhD, professor of Neuroscience, characterized the main syndrome and provided preliminary evidence in The Lancet Neurology that the antibodies have a pathogenic effect on the NR1 subunit of the NMDA receptor. The disease can be diagnosed using a test developed at the University of Pennsylvania and currently available worldwide. With appropriate treatment, approximately 81 percent of patients improve significantly and, over a recovery process that takes an average of two years, can recover fully.
Princeton University researchers have created “souped up” versions of the calcium-sensitive proteins that for the past decade or so have given scientists an unparalleled view and understanding of brain-cell communication.

Reported July 18 in the journal Nature Communications, the enhanced proteins developed at Princeton respond more quickly to changes in neuron activity, and can be customized to react to different, faster rates of neuron activity. Together, these characteristics would give scientists a more precise and comprehensive view of neuron activity.
The researchers sought to improve the function of proteins known as green fluorescent protein/calmodulin protein (GCaMP) sensors, an amalgam of various natural proteins that are a popular form of sensor proteins known as genetically encoded calcium indicators, or GECIs. Once introduced into the brain via the bloodstream, GCaMPs react to the various calcium ions involved in cell activity by glowing fluorescent green. Scientists use this fluorescence to trace the path of neural signals throughout the brain as they happen.
GCaMPs and other GECIs have been invaluable to neuroscience, said corresponding author Samuel Wang, a Princeton associate professor of molecular biology and the Princeton Neuroscience Institute. Scientists have used the sensors to observe brain signals in real time, and to delve into previously obscure neural networks such as those in the cerebellum. GECIs are necessary for the BRAIN Initiative President Barack Obama announced in April, Wang said. The estimated $3 billion project to map the activity of every neuron in the human brain cannot be done with traditional methods, such as probes that attach to the surface of the brain. “There is no possible way to complete that project with electrodes, so you have to do it with other tools — GECIs are those tools,” he said.
Despite their value, however, the proteins are still limited when it comes to keeping up with the fast-paced, high-voltage ways of brain cells, and various research groups have attempted to address these limitations over the years, Wang said.
“GCaMPs have made significant contributions to neuroscience so far, but there have been some limits and researchers are running up against those limits,” Wang said.
One shortcoming is that GCaMPs are about one-tenth of a second slower than neurons, which can fire hundreds of times per second, Wang said. The proteins activate after neural signals begin, and mark the end of a signal when brain cells have (by neuronal terms) long since moved on to something else, Wang said. A second current limitation is that GCaMPs can only bind to four calcium ions at a time. Higher rates of cell activity cannot be fully explored because GCaMPs fill up quickly on the accompanying rush of calcium.
The Princeton GCaMPs respond more quickly to changes in calcium so that changes in neural activity are seen more immediately, Wang said. By making the sensors a bit more sensitive and fragile — the proteins bond more quickly with calcium and come apart more readily to stop glowing when calcium is removed — the researchers whittled down the roughly 20 millisecond response time of existing GCaMPs to about 10 milliseconds, Wang said.
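As an illustration only, not the authors' published model, a sensor's fluorescence can be sketched as an exponentially decaying kernel triggered by each neural event; shrinking the time constant from roughly 20 ms toward 10 ms makes the reported signal fade faster between events, so closely spaced activity is easier to separate. All function names and numbers here are hypothetical:

```python
import numpy as np

# Toy sketch, not the published method: model a calcium sensor's
# fluorescence as a sum of exponentially decaying kernels, one per
# neural event. A smaller time constant tau means a faster sensor.
def fluorescence(event_times_ms, tau_ms, t_ms):
    f = np.zeros_like(t_ms, dtype=float)
    for event in event_times_ms:
        after = t_ms >= event
        f[after] += np.exp(-(t_ms[after] - event) / tau_ms)
    return f

t = np.arange(0.0, 100.0, 1.0)       # 100 ms at 1 ms resolution
events = [20.0]                       # one hypothetical neural event

slow = fluorescence(events, 20.0, t)  # roughly the older ~20 ms sensor
fast = fluorescence(events, 10.0, t)  # roughly the faster ~10 ms sensor

# 30 ms after the event, the fast sensor has decayed much further,
# leaving more headroom to report the next event cleanly.
print(round(slow[50], 3), round(fast[50], 3))
```

In this toy picture, the faster kernel returns to baseline sooner, which is one intuition for why a quicker sensor gives a more immediate, less smeared view of neural activity.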
The researchers also tweaked certain GCaMPs to be sensitive to different types of calcium ion concentrations, meaning that high rates of neural activity can be better explored. “Each probe is sensitive to one range or another, but when we put them together they make a nice choir,” Wang said.
The researchers’ work also revealed the location of a “bottleneck” in GCaMPs that occurs when calcium concentration is high, which poses a third limitation of the existing sensors, Wang said. “Now that we know where that bottleneck is, we think we can design the next generation of proteins to get around it,” Wang said. “We think if we open up that bottleneck, we can get a probe that responds to neuronal signals in one millisecond.”
The faster protein that the Princeton researchers developed could pair with work in other laboratories to improve other areas of GCaMP function, Wang said. For instance, a research group out of the Howard Hughes Medical Institute reported in Nature July 17 that it developed a GCaMP with a brighter fluorescence. Such improvements on existing sensors gradually open up more of the brain to exploration and understanding, said Wang, adding that the Princeton researchers will soon introduce their sensor into fly and mammalian brains.
“At some level, what we’ve done is like taking apart an engine, lubing up the parts and putting it back together. We took what was the best version of the protein at the time and made changes to the letter code of the protein,” Wang said. “We want to watch the whole symphony of thousands of neurons do their thing, and we think this variant of GCaMPs will help us do that better than anyone else has.”
ACE inhibitors, a class of blood pressure-lowering drugs found in millions of medicine cabinets, may slow the rate of cognitive decline typical of dementia, suggests research published in the online journal BMJ Open.
Furthermore, these drugs may even boost brain power, the research indicates.
The researchers compared the rates of cognitive decline in 361 patients who had been diagnosed with Alzheimer’s disease, vascular dementia, or a mix of both.
Eighty-five of the patients were already taking ACE inhibitors; the rest were not.
The researchers also assessed the impact of ACE inhibitors on the brain power of 30 patients newly prescribed these drugs, during their first six months of treatment. The average age of all the participants was 77.
Between 1999 and 2010, the cognitive decline of each patient was assessed using either the Standardised Mini Mental State Examination (SMMSE) or the Quick Mild Cognitive Impairment (Qmci) screen on two separate occasions, six months apart.
Compared with those not taking ACE inhibitors, those on these drugs experienced marginally slower rates of cognitive decline.
In those whose brain power had been assessed by Qmci, which is a more sensitive screen than the SMMSE, the difference was small, but significant.
And the brain power of those patients newly prescribed ACE inhibitors actually improved over the six month period, compared with those already taking them, and those not taking them at all.
This might be because these patients stuck to their medication regimen better, or it might be a by-product of better blood pressure control, or improved blood flow to the brain, suggest the authors.
But this is the first time there has been any evidence to suggest that blood pressure lowering drugs may not only halt cognitive decline, but actually improve brain power.
“This [study] supports the growing body of evidence for the use of ACE inhibitors and other [blood pressure lowering] agents in the management of dementia,” write the authors.
“Although the differences were small and of uncertain clinical significance, if sustained over years, the compounding effects may well have significant clinical benefits,” they add.
They caution, however, that recent evidence indicates that ACE inhibitors may be harmful in some cases, so if larger studies confirm that they work well in dementia, it may be only certain groups of patients with the condition who stand to benefit.
The results of a new study by neurological researchers at Rush University Medical Center show that a sudden decrease in testosterone, the male sex hormone, may cause Parkinson’s-like symptoms in male mice. The findings were recently published in the Journal of Biological Chemistry.

One of the major roadblocks for discovering drugs against Parkinson’s disease is the unavailability of a reliable animal model for this disease.
“While scientists use different toxins and a number of complex genetic approaches to model Parkinson’s disease in mice, we have found that the sudden drop in the levels of testosterone following castration is sufficient to cause persistent Parkinson’s-like pathology and symptoms in male mice,” said Dr. Kalipada Pahan, lead author of the study and the Floyd A. Davis endowed professor of neurology at Rush. “We found that the supplementation of testosterone in the form of 5-alpha dihydrotestosterone (DHT) pellets reverses Parkinson’s pathology in male mice.”
“In men, testosterone levels are intimately coupled to many disease processes,” said Pahan. Typically, in healthy males, testosterone levels peak in the mid-30s and then drop about one percent each year. However, testosterone levels may dip drastically because of stress or other sudden life events, which may make somebody more vulnerable to Parkinson’s disease.
“Therefore, preservation of testosterone in males may be an important step to become resistant to Parkinson’s disease,” said Pahan.
Understanding how the disease works is important to developing effective drugs that protect the brain and stop the progression of Parkinson’s disease. Nitric oxide is an important molecule for our brain and the body.
“However, when nitric oxide is produced within the brain in excess by a protein called inducible nitric oxide synthase, neurons start dying,” said Pahan.
“This study has become more fascinating than we thought,” said Pahan. “After castration, levels of inducible nitric oxide synthase (iNOS) and nitric oxide go up in the brain dramatically. Interestingly, castration does not cause Parkinson’s-like symptoms in male mice deficient in the iNOS gene, indicating that loss of testosterone causes symptoms via increased nitric oxide production.”
“Further research must be conducted to see how we could potentially target testosterone levels in human males in order to find a viable treatment,” said Pahan.
Other researchers at Rush involved in this study were Saurabh Khasnavis, PhD student; Anamitra Ghosh, PhD student; and Avik Roy, PhD, research assistant professor.
This research was supported by a grant from the National Institutes of Health that received the highest score for scientific merit in the cycle in which it was reviewed.
Parkinson’s is a slowly progressive disease that affects a small area of cells within the mid-brain known as the substantia nigra. Gradual degeneration of these cells causes a reduction in a vital neurotransmitter, dopamine. The decrease in dopamine results in one or more of the classic signs of Parkinson’s disease, which include resting tremor on one side of the body; generalized slowness of movement; stiffness of the limbs; and gait or balance problems. The cause of the disease is unknown. Both environmental and genetic causes of the disease have been postulated.
Parkinson’s disease affects about 1.2 million patients in the United States and Canada. Although 15 percent of patients are diagnosed before age 50, it is generally considered a disease that targets older adults, affecting one of every 100 persons over the age of 60. This disease appears to be slightly more common in men than women.
Researchers studying a type of cell found in the trillions in our brain have made an important discovery as to how it responds to brain injury and disease such as stroke. A University of Bristol team has identified proteins which trigger the processes that underlie how astrocyte cells respond to neurological trauma.
The star-shaped astrocytes, which outnumber neurons in humans, are a type of glial cell that comprise one of two main categories of cell found in the brain along with neurons. The cells, which have branched extensions that reach synapses (the connections between neurons), blood vessels, and neighbouring astrocytes, play a pivotal role in almost all aspects of brain function by supplying physical and nutritional support for neurons. They also contribute to the communication between neurons and the response to injury.
However, the cells are also known to trigger both beneficial and detrimental effects in response to neurological trauma. When the brain is subjected to injury or disease, the cells react in a number of ways, including a change in shape. In severe cases, the altered cells form a scar, which is thought to have both beneficial and detrimental effects: it allows prompt repair of the blood-brain barrier and limits cell death, but it also impairs the regeneration of nerve fibres and the effective incorporation of neuronal grafts - where additional neuronal cells are added to the injured site.
The cells change shape via the regulation of a structural component of the cell called the actin cytoskeleton, which is made up of filaments that shrink and grow to physically manoeuvre parts of the cell. In the lab, the team cultured astrocytes in a dish and were able to make them change shape by chemically or genetically manipulating proteins that control actin, and also by mimicking the environment that the cells would be exposed to during a stroke.
By doing so the team found that very dramatic changes in cell shape were caused by controlling the actin cytoskeleton in the in vitro stroke model. The team also identified additional protein molecules that control this process, suggesting that a complex mechanism is involved.
Dr Jonathan Hanley from the University’s School of Biochemistry said: “Our findings are crucial to our understanding of how the brain responds to many disorders that affect millions of people every year. Until now, the details of the actin-based mechanisms that control astrocyte morphology were unknown, so we anticipate that our work will lead to future discoveries about this important process.”
Biologists at The Scripps Research Institute (TSRI) have made a significant discovery that could lead to a new therapeutic strategy for Parkinson’s disease.
The findings, recently published online ahead of print in the journal Molecular and Cellular Biology, focus on an enzyme known as parkin, whose absence causes an early-onset form of Parkinson’s disease. Precisely how the loss of this enzyme leads to the deaths of neurons has been unclear. But the TSRI researchers showed that parkin’s loss sharply reduces the level of another protein that normally helps protect neurons from stress.
“We now have a good model for how parkin loss can lead to the deaths of neurons under stress,” said TSRI Professor Steven I. Reed, who was senior author of the new study. “This also suggests a therapeutic strategy that might work against Parkinson’s and other neurodegenerative diseases.”
Genetic Clues
Parkinson’s is the world’s second-most common neurodegenerative disease, affecting about one million people in the United States alone. The disease is usually diagnosed after the appearance of the characteristic motor symptoms, which include tremor, muscle rigidity and slowness of movements. These symptoms are caused by the loss of neurons in the substantia nigra, a brain region that normally supplies the neurotransmitter dopamine to other regions that regulate muscle movements.
Most cases of Parkinson’s are considered “sporadic” and are thought to be caused by a variable mix of factors including advanced age, subtle genetic influences, chronic neuroinflammation and exposure to pesticides and other toxins. But between 5 and 15 percent of cases arise specifically from inherited gene mutations. Among these, mutations to the parkin gene are relatively common. Patients who have no functional parkin gene typically develop Parkinson’s-like symptoms before age 40.
Parkin belongs to a family of enzymes called ubiquitin ligases, whose main function is to regulate the levels of other proteins. They do so principally by “tagging” their protein targets with ubiquitin molecules, thus marking them for disposal by roving protein-breakers in cells known as proteasomes. Because parkin is a ubiquitin ligase, researchers have assumed that its absence allows some other protein or proteins to evade proteasomal destruction and thus accumulate abnormally and harm neurons. But since 1998, when parkin mutations were first identified as a cause of early-onset Parkinson’s, consensus about the identity of this protein culprit has been elusive.
“There have been a lot of theories, but no one has come up with a truly satisfactory answer,” Reed said.
Oxidative Stress
In 2005, Reed and his postdoctoral research associate (and wife) Susanna Ekholm-Reed decided to investigate a report that parkin associates with another ubiquitin ligase known as Fbw7. “We soon discovered that parkin regulates Fbw7 levels by tagging it with ubiquitin and thus targeting it for degradation by the proteasome,” said Ekholm-Reed.
Loss of parkin, they found, leads to rises in Fbw7 levels, specifically for a form of the protein known as Fbw7β. The scientists observed these elevated levels of Fbw7β in embryonic mouse neurons from which parkin had been deleted, in transgenic mice that were born without the parkin gene, and even in autopsied brain tissue from Parkinson’s patients who had parkin mutations.
Subsequent experiments showed that when neurons are exposed to harmful molecules known as reactive oxygen species, parkin appears to work harder at tagging Fbw7β for destruction, so that Fbw7β levels fall. Without the parkin-driven decrease in Fbw7β levels, the neurons become more sensitive to this “oxidative stress”—so that more of them undergo a programmed self-destruction called apoptosis. Oxidative stress, to which dopamine-producing substantia nigra neurons may be particularly vulnerable, has long been considered a likely contributor to Parkinson’s.
“We realized that there must be a downstream target of Fbw7β that’s important for neuronal survival during oxidative stress,” said Ekholm-Reed.
A New Neuroprotective Strategy
The research slowed for a period due to a lack of funding. But then, in 2011, came a breakthrough. Other researchers who were investigating Fbw7’s role in cancer reported that it normally tags a cell-survival protein called Mcl-1 for destruction. The loss of Fbw7 leads to rises in Mcl-1, which in turn makes cells more resistant to apoptosis. “We were very excited about that finding,” said Ekholm-Reed. The TSRI lab’s experiments quickly confirmed the chain of events in neurons: parkin keeps levels of Fbw7β under control, and Fbw7β keeps levels of Mcl-1 under control. Full silencing of Mcl-1 leaves neurons extremely sensitive to oxidative stress.
Members of the team suspect that this is the principal explanation for how parkin mutations lead to Parkinson’s disease. But perhaps more importantly, they believe that their discovery points to a broad new “neuroprotective” strategy: reducing the Fbw7β-mediated destruction of Mcl-1 in neurons, which should make neurons more resistant to oxidative and other stresses.
“If we can find a way to inhibit Fbw7β in a way that specifically raises Mcl-1 levels, we might be able to prevent the progressive neuronal loss that’s seen not only in Parkinson’s but also in other major neurological diseases, such as Huntington’s disease and ALS [amyotrophic lateral sclerosis],” said Reed.
Finding such an Mcl-1-boosting compound, he added, is now a major focus of his laboratory’s work.
The potential impact of exposure to low levels of mercury on the developing brain – specifically by women consuming fish during pregnancy – has long been the source of concern and some have argued that the chemical may be responsible for behavioral disorders such as autism. However, a new study that draws upon more than 30 years of research in the Republic of Seychelles reports that there is no association between pre-natal mercury exposure and autism-like behaviors.

“This study shows no evidence of a correlation between low level mercury exposure and autism spectrum-like behaviors among children whose mothers ate, on average, up to 12 meals of fish each week during pregnancy,” said Edwin van Wijngaarden, Ph.D., an associate professor in the University of Rochester Medical Center’s (URMC) Department of Public Health Sciences and lead author of the study, which appears online today in the journal Epidemiology. “These findings contribute to the growing body of literature suggesting that exposure to the chemical does not play an important role in the onset of these behaviors.”
The debate over fish consumption has long created a dilemma for expecting mothers and physicians. Fish are high in beneficial nutrients such as selenium, vitamin E, lean protein, and omega-3 fatty acids; the latter are essential to brain development. At the same time, exposure to high levels of mercury has been shown to lead to developmental problems, leading to the claim that mothers are exposing their unborn children to serious neurological impairment by eating fish during pregnancy. Despite the fact that the developmental consequences of low level exposure remain unknown, some organizations, including the U.S. Food and Drug Administration, have recommended that pregnant women limit their consumption of fish.
The presence of mercury in the environment is widespread and originates from both natural sources such as volcanoes and as a byproduct of coal-fired plants that emit the chemical. Much of this mercury ends up being deposited in the world’s oceans where it makes its way into the food chain and eventually into fish. While the levels of mercury found in individual fish are generally low, concerns have been raised about the cumulative effects of a frequent diet of fish.
The Republic of Seychelles has proven to be the ideal location to examine the potential health impact of persistent low level mercury exposure. With a population of 87,000 people spread across an archipelago of islands in the Indian Ocean, fishing is both an important industry and a primary source of nutrition – the nation’s residents consume fish at a rate 10 times greater than the populations of the U.S. and Europe.
The Seychelles Child Development Study – a partnership between URMC, the Seychelles Ministries of Health and Education, and the University of Ulster in Ireland – was created in the mid-1980s to specifically study the impact of fish consumption and mercury exposure on childhood development. The program is one of the largest ongoing epidemiologic studies of its kind.
“The Seychelles study was designed to follow a population over a very long period of time and focus on relevant mercury exposure,” said Philip Davidson, Ph.D., principal investigator of the Seychelles Child Development Study and professor emeritus in Pediatrics at URMC. “While the amount of fish consumed in the Seychelles is significantly higher than other countries in the industrialized world, it is still considered low level exposure.”
The autism study involved 1,784 children, adolescents, and young adults and their mothers. The researchers were first able to determine the level of prenatal mercury exposure by analyzing hair samples that had been collected from the mothers around the time of birth, a test which can approximate mercury levels found in the rest of the body including the growing fetus.
The researchers then used two questionnaires to determine whether or not the study participants were exhibiting autism spectrum-like behaviors. The Social Communication Questionnaire was completed by the children’s parents and the Social Responsiveness Scale was completed by their teachers. These tests – which include questions on language skills, social communication, and repetitive behaviors – do not provide a definitive diagnosis, but they are widely used in the U.S. as an initial screening tool and may suggest the need for additional evaluation.
The mercury levels of the mothers were then matched with the test scores of their children, and the researchers found that there was no correlation between prenatal exposure and evidence of autism spectrum-like behaviors. This is similar to the results of previous studies of the nation’s children, which have measured language skills and intelligence, amongst other outcomes, and have not observed any adverse developmental effects.
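At its core, matching exposure measurements against screening scores is a correlation analysis. The sketch below illustrates the idea with a Pearson correlation coefficient computed on invented numbers; it is not the study's actual statistical method, and the values bear no relation to the Seychelles data:

```python
# Pearson correlation between maternal hair mercury and a child's screening
# score. All values here are invented for illustration only.
import statistics

def pearson_r(xs, ys):
    # r = cov(x, y) / (sd(x) * sd(y)), computed from raw sums of deviations
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

hair_mercury = [3.2, 5.8, 7.1, 4.4, 9.0, 6.3]  # hypothetical ppm values
scq_scores = [4, 7, 3, 6, 5, 4]                # hypothetical questionnaire scores
print(round(pearson_r(hair_mercury, scq_scores), 3))
```

A coefficient near zero, as a study like this reports, indicates no linear association between exposure and score; real analyses would additionally adjust for covariates such as maternal age and education.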
The study lends further evidence to an emerging belief that the “good” may outweigh the possible “bad” when it comes to fish consumption during pregnancy. Specifically, if mercury does adversely influence child development at these levels of exposure then the benefits of the nutrients found in the fish may counteract or perhaps even supersede the potential negative effects of the mercury.
“This study shows no consistent association in children whose mothers had mercury levels six to ten times higher than those found in the U.S. and Europe,” said Davidson. “This is a sentinel population, and if an effect does not exist here, then it probably does not exist.”
“NIEHS has been a major supporter of research looking into the human health risks associated with mercury exposure,” said Cindy Lawler, Ph.D., acting branch chief at the National Institute of Environmental Health Sciences, part of the National Institutes of Health. “The studies conducted in the Seychelles Islands have provided a unique opportunity to better understand the relationship between environmental factors, such as mercury, and the role they may play in the development of diseases like autism. Although more research is needed, this study does present some good news for parents.”
New technique can rapidly turn genes on and off, helping scientists better understand their function.
Although human cells have an estimated 20,000 genes, only a fraction of those are turned on at any given time, depending on the cell’s needs — which can change by the minute or hour. To find out what those genes are doing, researchers need tools that can manipulate their status on similarly short timescales.
That is now possible, thanks to a new technology developed at MIT and the Broad Institute that can rapidly start or halt the expression of any gene of interest simply by shining light on the cells.
The work is based on a technique known as optogenetics, which uses proteins that change their function in response to light. In this case, the researchers adapted the light-sensitive proteins to either stimulate or suppress the expression of a specific target gene almost immediately after the light comes on.
“Cells have very dynamic gene expression happening on a fairly short timescale, but so far the methods that are used to perturb gene expression don’t even get close to those dynamics. To understand the functional impact of those gene-expression changes better, we have to be able to match the naturally occurring dynamics as closely as possible,” says Silvana Konermann, an MIT graduate student in brain and cognitive sciences.
The ability to precisely control the timing and duration of gene expression should make it much easier to figure out the roles of particular genes, especially those involved in learning and memory. The new system can also be used to study epigenetic modifications — chemical alterations of the proteins that surround DNA — which are also believed to play an important role in learning and memory.
Konermann and Mark Brigham, a graduate student at Harvard University, are the lead authors of a paper describing the technique in the July 22 online edition of Nature. The paper’s senior author is Feng Zhang, the W.M. Keck Assistant Professor in Biomedical Engineering at MIT and a core member of the Broad Institute and MIT’s McGovern Institute for Brain Research.
Shining light on genes
The new system consists of several components that interact with each other to control the copying of DNA into messenger RNA (mRNA), which carries genetic instructions to the rest of the cell. The first is a DNA-binding protein known as a transcription activator-like effector (TALE). TALEs are modular proteins that can be strung together in a customized way to bind any DNA sequence.
Fused to the TALE protein is a light-sensitive protein called CRY2 that is naturally found in Arabidopsis thaliana, a small flowering plant. When light hits CRY2, it changes shape and binds to its natural partner protein, known as CIB1. To take advantage of this, the researchers engineered a form of CIB1 that is fused to another protein that can either activate or suppress gene copying.
After the genes for these components are delivered to a cell, the TALE protein finds its target DNA and wraps around it. When light shines on the cells, the CRY2 protein binds to CIB1, which is floating in the cell. CIB1 brings along a gene activator, which initiates transcription, or the copying of DNA into mRNA. Alternatively, CIB1 could carry a repressor, which shuts off the process.
A single pulse of light is enough to stimulate the protein binding and initiate DNA copying. The researchers found that pulses of light delivered every minute or so are the most effective way to achieve continuous transcription for the desired period of time. Within 30 minutes of light delivery, the researchers detected an uptick in the amount of mRNA being produced from the target gene. Once the pulses stop, the mRNA starts to degrade within about 30 minutes.
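The reported timescales are consistent with simple first-order kinetics: mRNA is produced while the light drives transcription and degrades at a constant rate throughout. The sketch below is an illustrative model only, not the paper's analysis; the rate constants are assumptions chosen to reproduce the roughly 30-minute rise and decay described above:

```python
# First-order model of light-induced transcription: production while the
# light is on, constant degradation throughout. The rates below are
# assumptions for illustration; the study reports only the ~30-minute
# timescales of rise and decay.

def simulate_mrna(light_on_minutes, total_minutes, k_prod=1.0, k_deg=0.023, dt=1.0):
    """Euler integration of dm/dt = k_prod * light(t) - k_deg * m."""
    m, trace = 0.0, []
    for step in range(int(total_minutes / dt)):
        light = 1.0 if step * dt < light_on_minutes else 0.0
        m += (k_prod * light - k_deg * m) * dt
        trace.append(m)
    return trace

# k_deg = 0.023 per minute corresponds to a half-life of about 30 minutes
# (ln 2 / 0.023), matching the observed decay once the pulses stop.
trace = simulate_mrna(light_on_minutes=60, total_minutes=120)
print(max(trace), trace[-1])
```

With these assumed rates, the modelled mRNA peaks at light-off and falls to roughly a quarter of that peak over the following 60 minutes, i.e. about two half-lives.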
In this study, the researchers tried targeting nearly 30 different genes, both in neurons grown in the lab and in living animals. Depending on the gene targeted and how much it is normally expressed, the researchers were able to boost transcription by a factor of two to 200.
Karl Deisseroth, a professor of bioengineering at Stanford University and one of the inventors of optogenetics, says the most important innovation of the technique is that it allows control of genes that naturally occur in the cell, as opposed to engineered genes delivered by scientists.
“You could control, at precise times, a particular genetic locus and see how everything responds to that, with high temporal precision,” says Deisseroth, who was not part of the research team.
Epigenetic modifications
Another important element of gene-expression control is epigenetic modification. One major class of epigenetic effectors is chemical modification of the proteins, known as histones, that anchor chromosomal DNA and control access to the underlying genes. The researchers showed that they can also alter these epigenetic modifications by fusing TALE proteins with histone modifiers.
Epigenetic modifications are thought to play a key role in learning and forming memories, but this has not been very well explored because there are no good ways to disrupt the modifications, short of blocking histone modification of the entire genome. The new technique offers a much more precise way to interfere with modifications of individual genes.
“We want to allow people to prove the causal role of specific epigenetic modifications in the genome,” Zhang says.
So far, the researchers have demonstrated that some of the histone effector domains can be tethered to light-sensitive proteins; they are now trying to expand the types of histone modifiers they can incorporate into the system.
“It would be really useful to expand the number of epigenetic marks that we can control. At the moment we have a successful set of histone modifications, but there are a good deal more of them that we and others are going to want to be able to use this technology for,” Brigham says.
Novel microchips imitate the brain’s information processing in real time. Neuroinformatics researchers from the University of Zurich and ETH Zurich together with colleagues from the EU and US demonstrate how complex cognitive abilities can be incorporated into electronic systems made with so-called neuromorphic chips: They show how to assemble and configure these electronic systems to function in a way similar to an actual brain.

No computer works as efficiently as the human brain – so much so that building an artificial brain is the goal of many scientists. Neuroinformatics researchers from the University of Zurich and ETH Zurich have now made a breakthrough in this direction by understanding how to configure so-called neuromorphic chips to imitate the brain’s information processing abilities in real-time. They demonstrated this by building an artificial sensory processing system that exhibits cognitive abilities.
New approach: simulating biological neurons
Most approaches in neuroinformatics are limited to the development of neural network models on conventional computers or aim to simulate complex nerve networks on supercomputers. Few pursue the Zurich researchers’ approach of developing electronic circuits that are comparable to a real brain in terms of size, speed, and energy consumption. “Our goal is to emulate the properties of biological neurons and synapses directly on microchips,” explains Giacomo Indiveri, a professor at the Institute of Neuroinformatics (INI) of the University of Zurich and ETH Zurich.
The major challenge was to configure networks of artificial, i.e. neuromorphic, neurons in such a way that they can perform particular tasks, which the researchers have now succeeded in doing: They developed a neuromorphic system that can carry out complex sensorimotor tasks in real time. They demonstrated a task that requires short-term memory and context-dependent decision-making – typical traits that are necessary for cognitive tests. In doing so, the INI team combined neuromorphic neurons into networks that implement neural processing modules equivalent to so-called “finite-state machines” – a mathematical concept for describing logical processes or computer programs. A desired behavior can be formulated as a finite-state machine and thus transferred to the neuromorphic hardware in an automated manner. “The network connectivity patterns closely resemble structures that are also found in mammalian brains,” says Indiveri.
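The idea of expressing a behavior as a finite-state machine can be illustrated in conventional software. The sketch below is a minimal illustration of the concept only; the states, inputs, and transition rules are hypothetical and are not taken from the study:

```python
# Illustrative finite-state machine: context-dependent decision-making with
# short-term memory, the kind of logic the neuromorphic networks implement.
# States and inputs here are invented purely to demonstrate the concept.

class FiniteStateMachine:
    def __init__(self, transitions, start):
        self.transitions = transitions  # (state, input) -> next state
        self.state = start              # current state acts as short-term memory

    def step(self, symbol):
        # The same input can produce different outcomes depending on the
        # stored context (the current state).
        self.state = self.transitions[(self.state, symbol)]
        return self.state

# Hypothetical task: respond to "go" only after a "cue" has been seen.
transitions = {
    ("idle", "cue"): "armed",
    ("idle", "go"): "idle",        # no cue stored yet -> ignore
    ("armed", "cue"): "armed",
    ("armed", "go"): "respond",    # decision depends on remembered cue
    ("respond", "cue"): "armed",
    ("respond", "go"): "idle",
}

fsm = FiniteStateMachine(transitions, start="idle")
print(fsm.step("go"))   # ignored without context
print(fsm.step("cue"))  # cue stored in short-term memory
print(fsm.step("go"))   # context-dependent decision
```

In the researchers' approach, a state table like this one would be compiled automatically into connectivity patterns among neuromorphic neurons rather than executed as software.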
Chips can be configured for a large class of behavior modes
The scientists have thus demonstrated for the first time how to construct a real-time hardware neural-processing system whose behavior is dictated by the user. “Thanks to our method, neuromorphic chips can be configured for a large class of behavior modes. Our results are pivotal for the development of new brain-inspired technologies,” Indiveri sums up. One application, for instance, might be to combine the chips with sensory neuromorphic components, such as an artificial cochlea or retina, to create complex cognitive systems that interact with their surroundings in real time.
Literature:
E. Neftci, J. Binas, U. Rutishauser, E. Chicca, G. Indiveri, R. J. Douglas. Synthesizing cognition in neuromorphic electronic systems. PNAS. July 22, 2013.
TAU research finds that breastfed children are less likely to develop ADHD later in life

We know that breastfeeding has a positive impact on child development and health — including protection against illness. Now researchers from Tel Aviv University have shown that breastfeeding could also help protect against Attention Deficit/Hyperactivity Disorder (ADHD), the most commonly diagnosed neurobehavioral disorder in children and adolescents.
Seeking to determine if the development of ADHD was associated with lower rates of breastfeeding, Dr. Aviva Mimouni-Bloch, of Tel Aviv University’s Sackler Faculty of Medicine and Head of the Child Neurodevelopmental Center in Loewenstein Hospital, and her fellow researchers completed a retrospective study on the breastfeeding habits of parents of three groups of children: a group that had been diagnosed with ADHD; siblings of those diagnosed with ADHD; and a control group of children without ADHD and lacking any genetic ties to the disorder.
The researchers found a clear link between rates of breastfeeding and the likelihood of developing ADHD, even when typical risk factors were taken into consideration. Children who were bottle-fed at three months of age were found to be three times more likely to have ADHD than those who were breastfed during the same period. These results have been published in Breastfeeding Medicine.
Understanding genetics and environment
In their study, the researchers compared breastfeeding histories of children from six to 12 years of age at Schneider Children’s Medical Center in Israel. The ADHD group comprised children who had been diagnosed at the hospital, the second group included the siblings of the ADHD patients, and the control group included children without neurobehavioral issues who had been treated at the clinics for unrelated complaints.
In addition to describing their breastfeeding habits during the first year of their child’s life, parents answered a detailed questionnaire on medical and demographic data that might also have an impact on the development of ADHD, including marital status and education of the parents, problems during pregnancy such as hypertension or diabetes, birth weight of the child, and genetic links to ADHD.
Taking all risk factors into account, researchers found that children with ADHD were far less likely to be breastfed in their first year of life than the children in the other groups. At three months, only 43 percent of children in the ADHD group were breastfed compared to 69 percent of the sibling group and 73 percent of the control group. At six months, 29 percent of the ADHD group was breastfed, compared to 50 percent of the sibling group and 57 percent of the control group.
One of the unique elements of the study was the inclusion of the sibling group, says Dr. Mimouni-Bloch. Although a mother will often make the same breastfeeding choices for all her children, this is not always the case. Some children’s temperaments might be more difficult than their siblings’, making it hard for the mother to breastfeed, she suggests.
Added protection
While researchers do not yet know why breastfeeding has an impact on the future development of ADHD — it could be due to the breast milk itself, or the special bond formed between mother and baby during breastfeeding, for example — they believe this research shows that breastfeeding can have a protective effect against the development of the disorder, and can be counted as an additional biological advantage for breastfeeding.
Dr. Mimouni-Bloch hopes to conduct a further study on breastfeeding and ADHD, examining children who are at high risk for ADHD from birth and following up in six-month intervals until six years of age, to obtain more data on the phenomenon.
Finding shows oxytocin strengthens bad memories and can increase fear and anxiety
It turns out the love hormone oxytocin is two-faced. Oxytocin has long been known as the warm, fuzzy hormone that promotes feelings of love, social bonding and well-being. It’s even being tested as an anti-anxiety drug. But new Northwestern Medicine® research shows oxytocin also can cause emotional pain, an entirely new, darker identity for the hormone.
Oxytocin appears to be the reason stressful social situations, perhaps being bullied at school or tormented by a boss, reverberate long past the event and can trigger fear and anxiety in the future.
That’s because the hormone actually strengthens social memory in one specific region of the brain, Northwestern scientists discovered.
If a social experience is negative or stressful, the hormone activates a part of the brain that intensifies the memory. Oxytocin also increases the susceptibility to feeling fearful and anxious during stressful events going forward.
(Presumably, oxytocin also intensifies positive social memories and, thereby, increases feelings of well being, but that research is ongoing.)
The findings are important because chronic social stress is one of the leading causes of anxiety and depression, while positive social interactions enhance emotional health. The research, which was done in mice, is particularly relevant because oxytocin currently is being tested as an anti-anxiety drug in several clinical trials.
“By understanding the oxytocin system’s dual role in triggering or reducing anxiety, depending on the social context, we can optimize oxytocin treatments that improve well-being instead of triggering negative reactions,” said Jelena Radulovic, the senior author of the study and the Dunbar Professor of Bipolar Disease at Northwestern University Feinberg School of Medicine. The paper was published July 21 in Nature Neuroscience.
This is the first study to link oxytocin to social stress and its ability to increase anxiety and fear in response to future stress. Northwestern scientists also discovered the brain region responsible for these effects — the lateral septum — and the pathway or route oxytocin uses in this area to amplify fear and anxiety.
The scientists discovered that oxytocin strengthens negative social memory and future anxiety by triggering an important signaling molecule — ERK (extracellular signal regulated kinases) — that becomes activated for six hours after a negative social experience. ERK causes enhanced fear, Radulovic believes, by stimulating the brain’s fear pathways, many of which pass through the lateral septum. The region is involved in emotional and stress responses.
The findings surprised the researchers, who were expecting oxytocin to modulate positive emotions in memory, based on its long association with love and social bonding.
“Oxytocin is usually considered a stress-reducing agent based on decades of research,” said Yomayra Guzman, a doctoral student in Radulovic’s lab and the study’s lead author. “With this novel animal model, we showed how it enhances fear rather than reducing it and where the molecular changes are occurring in our central nervous system.”
The new research follows three recent human studies with oxytocin, all of which are beginning to offer a more complicated view of the hormone’s role in emotions.
All the new experiments were done in the lateral septum. This region has the highest oxytocin levels in the brain and has high levels of oxytocin receptors across all species from mice to humans.
“This is important because the variability of oxytocin receptors in different species is huge,” Radulovic said. “We wanted the research to be relevant for humans, too.”
Experiments with mice in the study established that 1) oxytocin is essential for strengthening the memory of negative social interactions and 2) oxytocin increases fear and anxiety in future stressful situations.
Experiment 1: Oxytocin Strengthens Bad Memories
Three groups of mice were individually placed in cages with aggressive mice and experienced social defeat, a stressful experience for them. One group was missing its oxytocin receptors, essentially the plug by which the hormone accesses brain cells; without receptors, oxytocin couldn’t act on the mice’s brain cells. The second group had an increased number of receptors, so their brain cells received an amplified oxytocin signal. The third, control group had a normal number of receptors.
Six hours later, the mice were returned to cages with the aggressive mice. The mice missing their oxytocin receptors didn’t appear to remember the aggressive mice and showed no fear. Conversely, when mice with increased numbers of oxytocin receptors were reintroduced to the aggressive mice, they showed an intense fear reaction and avoided them.
Experiment 2: Oxytocin Increases Fear and Anxiety in Future Stress
Again, the three groups of mice were exposed to the stressful experience of social defeat in the cages of other more aggressive mice. This time, six hours after the social stress, the mice were put in a box in which they received a brief electric shock, which startles them but is not painful. Then 24 hours later, the mice were returned to the same box but did not receive a shock.
The mice missing their oxytocin receptors did not show any enhanced fear when they re-entered the box in which they had received the shock. The second group, which had extra oxytocin receptors, showed much greater fear in the box. The third, control group exhibited an average fear response.
“This experiment shows that after a negative social experience, oxytocin triggers anxiety and fear in a new stressful situation,” Radulovic said.
For the first time, scientists have identified how a pathway in the brain that is unique to humans allows us to learn new words.

The average adult’s vocabulary consists of about 30,000 words. This ability seems unique to humans: even the species closest to us, chimps, manage to learn no more than 100.
It has long been believed that language learning depends on the integration of hearing and repeating words but the neural mechanisms behind learning new words remained unclear. Previous studies have shown that this may be related to a pathway in the brain only found in humans and that humans can learn only words that they can articulate.
Now researchers from King’s College London Institute of Psychiatry, in collaboration with Bellvitge Biomedical Research Institute (IDIBELL) and the University of Barcelona, have mapped the neural pathways involved in word learning among humans. They found that the arcuate fasciculus, a collection of nerve fibres connecting auditory regions at the temporal lobe with the motor area located at the frontal lobe in the left hemisphere of the brain, allows the ‘sound’ of a word to be connected to the regions responsible for its articulation. Differences in the development of these auditory-motor connections may explain differences in people’s ability to learn words.
The results of the study are published in the journal Proceedings of the National Academy of Sciences (PNAS).
Dr Marco Catani, co-author from the NatBrainLab at King’s College London Institute of Psychiatry, said: “Often humans take their ability to learn words for granted. This research sheds new light on the unique ability of humans to learn a language, as this pathway is not present in other species. The implications of our findings could be wide-ranging – from how language is taught in schools and rehabilitation from injury, to early detection of language disorders such as dyslexia. In addition, these findings could have implications for other disorders where language is affected, such as autism and schizophrenia.”
The study involved 27 healthy volunteers. Researchers used diffusion tensor imaging to image the structure of the brain before a word-learning task, and functional MRI to detect the regions of the brain that were most active during the task. They found a strong relationship between the ability to remember words and the structure of the arcuate fasciculus, which connects two brain areas: Wernicke’s area, related to auditory language decoding, and Broca’s area, which coordinates the movements associated with speech and language processing.
In participants who learned words more successfully, the arcuate fasciculus was more myelinated, i.e., its nervous tissue conducted electrical signals faster. In addition, the activity between the two regions was more coordinated in these participants.
Dr Catani concludes, “Now that we understand this is how we learn new words, our concern is that children will have less vocabulary, as much of their interaction is via screen, text and email, relying on an external prosthetic memory, rather than speech. This research reinforces the need for us to maintain the oral tradition of talking to our children.”
Multiple sclerosis treatments that repair damage to the brain could be developed thanks to new research.
A study has shed light on how cells are able to regenerate protective sheaths around nerve fibres in the brain.
These sheaths, made up of a substance called myelin, are critical for the quick transmission of nerve signals, enabling vision, sensation and movement, but break down in patients with multiple sclerosis (MS).
In multiple sclerosis patients, the protective layer surrounding nerve fibres is stripped away and the nerves are exposed and damaged.
-Dr Veronique Miron (MRC Centre for Regenerative Medicine at the University of Edinburgh)
Macrophages
The study, by the Universities of Edinburgh and Cambridge, found that immune cells, known as macrophages, help trigger the regeneration of myelin.
Researchers found that following loss of or damage to myelin, macrophages can release a compound called activin-A, which activates production of more myelin.
Approved therapies for multiple sclerosis work by reducing the initial myelin injury – they do not promote myelin regeneration. This study could help find new drug targets to enhance myelin regeneration and help to restore lost function in patients with multiple sclerosis.
-Dr Veronique Miron (MRC Centre for Regenerative Medicine at the University of Edinburgh)
Study
The study, which looked at myelin regeneration in human tissue samples and in mice, is published in Nature Neuroscience.
It was funded by the MS Society, the Wellcome Trust and the Multiple Sclerosis Society of Canada.
Scientists now plan to start further research to look at how activin-A works and whether its effects can be enhanced.
We urgently need therapies that can help slow the progression of MS and so we’re delighted researchers have identified a new, potential way to repair damage to myelin. We look forward to seeing this research develop further.
-Dr Susan Kohlhaas (Head of Biomedical Research at the MS Society)
We are pleased to fund MS research that may lead to treatment benefits for people living with MS. We look forward to advances in treatments that address repair specifically, so that people with MS may be able to manage the unpredictable symptoms of the disease.
-Dr Karen Lee (Vice-President, Research at the MS Society of Canada)
Recycling is not only good for the environment, it’s good for the brain. A study using rat cells indicates that quickly clearing out defective proteins in the brain may prevent loss of brain cells.

Results of a study in Nature Chemical Biology suggest that the speed at which damaged proteins are cleared from neurons may affect cell survival and may explain why some cells are targeted for death in neurodegenerative disorders. The research was supported by the National Institute of Neurological Disorders and Stroke (NINDS), part of the National Institutes of Health.
One of the mysteries surrounding neurodegenerative diseases is why some nerve cells are marked for destruction whereas their neighbors are spared. It is especially puzzling because the protein thought to be responsible for cell death is found throughout the brain in many of these diseases, yet only certain brain areas or cell types are affected.
In Huntington’s disease and many other neurodegenerative disorders, proteins that are misfolded (have abnormal shapes) accumulate inside and around neurons and are thought to damage and kill nearby brain cells. Normally, cells sense the presence of malformed proteins and clear them away before they do any damage. This clearance is regulated by a process called proteostasis, which the cell uses to control protein levels and quality.
In the study, Andrey S. Tsvetkov and his colleagues from the University of California, San Francisco (UCSF) and Duke University, Durham, N.C., showed that differences in the rate of proteostasis may be the clue to understanding why certain nerve cells die in Huntington’s, a genetic brain disorder that leads to uncontrolled movements and death.
To measure how quickly proteins are cleared from cells, the researchers developed a new technique called optical pulse-labeling, which allowed them to follow specific proteins in individual living cells. To test the technique, they grew brain cells in a dish and turned on Dendra2, a photoswitchable protein that changes its glow from green to red after being hit by a specific type of light. In this way, the researchers could track both newly produced Dendra2 (which glows green) and older, photoswitched Dendra2 (which glows red) until the protein was cleared from the cell.
"Before this new technique, there was no way to look at individual neurons and their capacity to handle proteins. This method provides a real-time readout of how fast proteins are turned over in neurons and gives us a look at some of the mechanisms involved," said Margaret Sutherland, Ph.D., program director at NINDS.
The researchers followed Dendra2 in a set of striatal neurons, which they obtained from rats. The striatum (where striatal neurons are located) is a brain region involved in a number of brain functions including planning movements and is most heavily affected in Huntington’s disease. They discovered that the mean lifetime of the protein (how long it remained in the cell) varied three- to fourfold, suggesting that rates of proteostasis were different among individual neurons. In other words, some cells may process an identical protein much slower than others.
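The mean-lifetime measurement described above can be sketched numerically. In a pulse-chase design like this, the photoswitched (red) signal decays roughly exponentially as the protein is cleared, and the mean lifetime is the reciprocal of the decay rate. The following is a minimal illustration with invented numbers, not data from the study.

```python
# A minimal sketch of estimating a mean protein lifetime from
# pulse-labeling data: after photoswitching, the red Dendra2 signal
# decays roughly exponentially, intensity(t) = I0 * exp(-k*t), and the
# mean lifetime is 1/k. All numbers below are invented for illustration.
import math

def mean_lifetime(times, intensities):
    """Fit log(intensity) = log(I0) - k*t by least squares; return 1/k."""
    logs = [math.log(i) for i in intensities]
    n = len(times)
    t_bar = sum(times) / n
    y_bar = sum(logs) / n
    slope = (sum((t - t_bar) * (y - y_bar) for t, y in zip(times, logs))
             / sum((t - t_bar) ** 2 for t in times))
    return -1.0 / slope  # decay rate k = -slope, mean lifetime = 1/k

# Simulated red-channel readings (hours after photoswitching) for a
# hypothetical neuron whose Dendra2 has a true mean lifetime of 20 h.
times = [0, 6, 12, 18, 24, 30]
red = [1000 * math.exp(-t / 20.0) for t in times]

print(f"Estimated mean lifetime: {mean_lifetime(times, red):.1f} h")
```

Comparing such per-cell estimates is what lets the researchers say that lifetimes varied three- to fourfold between individual neurons.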
Then, the researchers investigated how cells deal with different forms of huntingtin, the protein involved in Huntington’s. They fused Dendra2 to the end of a normal or mutant version of huntingtin to track how long the protein remained in cells. The mutant version of huntingtin is longer: its gene carries a sequence of three DNA building blocks repeated an abnormal number of times, producing an expanded stretch in the protein. This expansion causes huntingtin to misfold, eventually leading to neuron death and the symptoms of the disease. As predicted, in their experiments the mutant form of huntingtin killed more rat cells than did the normal form of the protein.
The researchers found that the amount of time the mutant protein remained in the cell predicted neuronal survival: shorter mean lifetimes of mutant huntingtin were associated with longer neuronal survival. A shorter mean lifetime indicates that a protein does not remain in the cell for a long time, and that proteostasis is working effectively to clear it away. This suggests that improving proteostasis in Huntington’s brains may improve neuronal survival.
To test this idea, the researchers activated Nrf2, a protein known to regulate protein processing. When Nrf2 was turned on, the mean lifetime of mutant huntingtin was shortened and the neurons lived longer.
"Nrf2 seems like a potentially exciting therapeutic target. It is profoundly neuroprotective in our Huntington’s model and it accelerates the clearance of mutant huntingtin," said Dr. Steven Finkbeiner, senior author of the paper.
Although both striatal and cortical neurons are affected by mutant huntingtin, striatal neurons are more susceptible to cell death. The investigators found that striatal neurons were not as effective as cortical neurons in recognizing and clearing away the mutant protein.
"One surprising finding from these experiments was the significance of single cells’ ability to clear mutant huntingtin. It turned out that this ability largely predicted their susceptibility, whether that neuron came from the most vulnerable region of the brain, the striatum, or from the less vulnerable cortex," said Dr. Finkbeiner. The findings indicate that the toxicity of the damaged proteins may cause neurodegeneration by interfering with the proteostasis system, affecting how quickly they are cleared from neurons.
"The results should remind us that focusing on the disease-causing proteins is only one side of the coin. To understand why some cells die and others are spared, we may need to recognize that there are major, largely unrecognized cell-specific differences in the ways that various types of neurons recognize and dispose of disease-causing proteins," continued Dr. Finkbeiner.
The researchers explored potential mechanisms behind differences in proteostasis. One way that cells normally get rid of proteins is through autophagy — a process in which proteins are packed up into spheres and then broken down. Results in this paper suggested that neurons increased the rate of autophagy when they sensed that the mutant form of huntingtin was accumulating, indicating the autophagy system may be a drug target.
"These findings provide evidence that our brains have powerful coping mechanisms to deal with disease-causing proteins. The fact that some of these diseases don’t cause symptoms we can detect until the fourth or fifth decade of life, even when the gene has been present since birth, suggests that those mechanisms are pretty good," said Dr. Finkbeiner.
Future research is needed to determine why coping mechanisms fail as brain cells age and how neurons in the healthy brain keep the proteostasis system functioning.
"New research methods that help us understand how individual neurons function will increase our understanding of central nervous system disorders and help identify new treatments. It is critical to continue working on the methods such as those described in this paper," said Dr. Sutherland.
The development of new drugs for improving treatment of Alzheimer’s and Parkinson’s disease is a step closer after recent research into how stem cells migrate and form circuits in the brain.
The results from a study by researchers at The University of Auckland’s Centre for Brain Research may hold important clues into why there is less plasticity in brains affected by Parkinson’s and Alzheimer’s disease, and links to insulin resistance and diabetes.
The major five-year project to understand how stem cells start and stop migrating in the brain has also helped to unlock the secrets of how stem cells migrate during development and in adulthood.
The study revealed new information on how connectivity between brain cells is improved or worsened, says senior study author Dr Maurice Curtis, who conceived and directed the research. The experiments were carried out at the Centre for Brain Research laboratories by Dr Hector Monzo. Collaborators included a director of the CBR, Distinguished Professor Richard Faull, Dr Thomas Park, Dr Birger Dieriks, Deidre Jansson and Professor Mike Dragunow.
“We have begun testing novel drug compounds that target how polysialic acid is removed from the cell in the hope of improving neuron connectivity,” says Dr Curtis.
He explains that stem cells in the brain are immature brain cells that must migrate from their birthplace to a position in the brain where they will connect with other brain cells, turn into adult brain cells (neurons) and become part of the brain’s circuitry.
“Even once the neuron has found its location, the neuron’s tentacles (or dendrites) need to forage to find other neurons to connect with to form circuits. This would be easy except that in the adult brain the cells are surrounded by a fairly rigid matrix (extracellular matrix) and so migration or foraging becomes almost impossible in this high friction environment.”
“The way the cell overcomes this ‘friction’ is by placing large amounts of a special slippery molecule called ‘polysialic acid-neural cell adhesion molecule’ onto the cell surface,” says Dr Curtis. “This allows the cell to migrate or forage with only a fraction of the friction it once had and this also reduces the energy requirements of the cell.”
Once the cell has migrated to its destination, the slippery coating is removed and the cell becomes locked in place ready to connect with other cells. In the case of the dendritic foraging, the polysialic acid must be removed in order for the dendrite to connect with another cell (synapse formation).
“We have known for at least 20 years that this process occurs but despite extensive studies by a number of groups internationally we have been in the dark about what controls this process,” he says. “Studies in my laboratory have demonstrated what happens to the slippery molecules once the cell no longer needs them.”
There were three possibilities for how this process might be controlled.
“For the past five years, we have systematically studied how this process is controlled,” says Dr Curtis. “Our findings have demonstrated that cells internalise the slippery molecule after receiving two specific cues.”
One of these cues is from collagen which makes up part of the rigid structure outside of the cell and the other is from a gaseous molecule called nitric oxide which triggers the outer membrane of the cell to internalise the slippery molecules.
“What we also discovered is that when there is an increased amount of insulin and insulin-like growth factor 1 (which has some similar functions to insulin) present in the culture, the cell cannot internalise the slippery molecules and instead they remain on the cell surface.”
“The key to the breakthrough was in determining that the process by which the polysialic acid is added to the cell surface was so persistent that it needed to be stopped in order to study how the polysialic acid was removed,” says Dr Curtis. “This required extensive trialling of many different cell growth conditions, enzyme concentrations and growing the cells in many different extracellular matrices.”
This is interesting because it is well known that in Parkinson’s disease and Alzheimer’s disease the brain is less sensitive to insulin, he says.
“In our studies in cells the insulin blocks the removal of polysialic acid and therefore the cell cannot connect properly and form synapses with other nearby cells.”
“This may hold major clues to why there is less plasticity in brains affected by Parkinson’s and Alzheimer’s disease in adults as well as helping to unlock the secrets of how stem cells migrate during development of the brain”, says Dr Curtis.
The Gus Fisher Postdoctoral Fellowship, the Auckland Medical Research Foundation and the Manchester Trust were the main sponsors of this research work.
The study results were published online this month in an ‘ahead of print’ version of The Journal of Neurochemistry.