This week over 150 neuroscientists were invited to meet in Arlington, Virginia to discuss the finer points of President Obama’s recently announced BRAIN Initiative. Rather than discuss funding particulars, each participant was given the chance to broadly declare what they thought needed to be done in neuroscience. At least 75 of the participants initially responded to a request for a short white paper outlining the major obstacles currently impeding neuroscience research. A live webcast of some of the key talks was available, although many of the smaller workshops were held in private. Fortunately, updates on the content discussed at these workshops were posted live to Twitter under the handle @openconnectome. This precipitated lively discussion, primarily under the hashtags #nsfBRAINmtg and #braini, and provided a way for a larger audience to be involved.
The working title of this inaugural NSF meeting was Physical and Mathematical Principles of Brain Structure and Function. In actuality, there was little discussion of either, and for good reason—no such principles have been shown to exist. Even more concerning, only a few principles have ever even been proposed. Simplistic scaling laws dealing with connectivity, particularly within sensory systems or the cortex, have been suggested in the past. Generally they seek to account for only one or two structural parameters at a time, such as axon diameter and branching order. Typically, the chosen parameters are considered only in the context of optimizing a single physical variable, such as electrotonic function. While these efforts are a start, they usually do not garner much attention from the larger neuroscience community.
The early days of neuroscience were marked by the assertion of many principles and laws. They served well to focus ideas, but over time, they lost much of their original perceived generality. For example, concepts like one transmitter type per neuron, and no new neurons in adult brains, later proved to have significant exceptions. The early breakthrough days in neuroscience have now given way to a grant system that stifles imagination, and by its competitiveness, encourages fraud. Many of the speakers at the BRAIN Initiative meeting called for new tools and theories, but in most cases they offered little. Instead of expanding the range of acceptable pursuits, their vision appears to have imploded inward, with calls for increased rigor, statistical power, diversity of animal models, experimental falsifiability, and most of all, data, on an increasingly limited range of ideas.
Much discussion was given to the resolution at which connectivity and activity maps should be detailed. Similar points were made about the need to develop electrode arrays of higher density and durability to record function more accurately. The ample discussion of an ideal animal model was punctuated by the notable advances made this year in whole-brain recordings from zebrafish, and by the large-scale connectivity mapping now possible in small mammals with the new CLARITY transparent-brain techniques. The general lack of agreement on a clear path forward as to which of many organisms are ideal was noted by representatives from several funding bodies who spoke at the meeting. Highlighting points made earlier in a talk by George Whitesides, they stressed the need to come forward with a concrete plan that is comprehensible not only to the funding organizations but to the larger public as well.
Many discussions focused on brain mechanisms, such as how many neurons might contribute to a particular function. One participant, David Kleinfeld, called for a study of how many neurons are involved in communication at different scales. He also stressed the importance of looking at basic systems involving feedback, such as the brain stem and spinal cord, and their dynamic interaction with muscle. Michael Stryker observed that the goal should not be recording from the most neurons and storing the most data, but rather finding the right neurons.
While it was not explicitly stated, much of the talk pointed to the conclusion that the questions we have will not be answered with animal studies. Knowing what a neuron does is itself an ill-posed question. In worms and flies, where the inputs and outputs of single neurons can be mapped to static sensory and motor functions in the real world, we might know what that neuron does. In larger, human brains, however, we can ask an even better question—what does the neuron feel like? In most cases that answer will likely be: nothing.
If, however, in a given human brain, a single neuron critically poised within that brain’s structural hierarchy can be stimulated to observable effect, some measure of its function has been gained. That effect might be a simple itch or twitch. Less plausibly, perhaps, it could be seeing a picture of a face undergo a change, sensing fear, or even imagining your grandmother. If that turns out not to be possible for most single neurons, we already know that we can find some minimal group of neurons whose stimulation has uniquely perceivable effects.
While understanding the brain on different scales is important, the most rewarding endeavors likely exist where functionality can be correlated across those scales. Behavior at the scale of the organism within a given environment is readily observable. At the next scale down, the behavior of a neuron, witnessed in its spikes and structural alterations, is only partly observable at present. Below the scale of the neuron, the mitochondria and other organelles move with a purpose and a relation to the activity of the neuron that has so far only been imagined, but is experimentally addressable.
Several speakers also mentioned the idea of a neural code. Spikes are a convenient metric for assessing brain activity, and we should seek to correlate their occurrence with behaviors on the various scales mentioned above. They are a universal and non-local currency, among others in the brain, that inflates rapidly with stimulation and arousal. Unfortunately, the most logical conclusion for us must be that there is no code for spikes. Anyone attempting to observe and record a code for one neuron would probably find that it has, in short order, become unrecognizable, particularly in the context of the next. There are, however, constraints on spikes and on neurons, and while the word received considerable mention at the meeting, none were detailed in depth.
To formulate constraints on a system at a level we don’t understand, we might look at constraints on other systems about which we have some knowledge. Neurons are neither wholly like ants nor like trees, but share some aspects of both. Similarly, brains are neither like ant colonies nor like forests, but share some features of each. The most obvious constraint that comes to mind, and applies to these systems at every level, is energy. A subtle refinement of that is the concept of entropy generation. One key idea is that entropy generation at different scales, while proceeding according to as-yet-undetermined laws, need not maximize entropy at each point in time, but rather along paths through time.
A voice heard throughout the conference was that of Bill Bialek, who diffusely observed that attempts to apply the laws of statistical mechanics to aspects of brain function are not very productive because the brain is not at equilibrium. That would perhaps have been a good sentence to begin the conference with, rather than to end it. Hopefully, the next NSF meeting will be a little more transparent to the public than the first. A more thorough webcast, uploaded to a media channel, would be welcomed by many who would like to participate, as would a path for two-way communication on the issues. Mention should also be made of the efforts of a few neuroscientists peripheral to the BRAIN Initiative who have been maintaining important blog discussions and metablog publication lists to track the progress made over the last few months. This morning, NIH announced that a new website has been set up to provide additional public feedback.
Johns Hopkins researchers believe they may have discovered an explanation for the sleepless nights associated with restless legs syndrome (RLS), a symptom that persists even when the disruptive, overwhelming nocturnal urge to move the legs is treated successfully with medication.

Neurologists have long believed RLS is related to a dysfunction in the way the brain uses the neurotransmitter dopamine, a chemical used by brain cells to communicate and produce smooth, purposeful muscle activity and movement. Disruption of these neurochemical signals, characteristic of Parkinson’s disease, frequently results in involuntary movements. Drugs that increase dopamine levels are mainstay treatments for RLS, but studies have shown they don’t significantly improve sleep. An estimated 5 percent of the U.S. population has RLS.
The small new study, headed by Richard P. Allen, Ph.D., an associate professor of neurology at the Johns Hopkins University School of Medicine, used MRI to image the brain and found glutamate — a neurotransmitter involved in arousal — in abnormally high levels in people with RLS. The more glutamate the researchers found in the brains of those with RLS, the worse their sleep.
The findings are published in the May issue of the journal Neurology.
“We may have solved the mystery of why getting rid of patients’ urge to move their legs doesn’t improve their sleep,” Allen says. “We may have been looking at the wrong thing all along, or we may find that both dopamine and glutamate pathways play a role in RLS.”
For the study, Allen and his colleagues examined MRI images and recorded glutamate activity in the thalamus, the part of the brain involved with the regulation of consciousness, sleep and alertness. They looked at images of 28 people with RLS and 20 people without. The RLS patients included in the study had symptoms six to seven nights a week persisting for at least six months, with an average of 20 involuntary movements a night or more.
The researchers then conducted two-day sleep studies in the same individuals to measure how much rest each person was getting. In those with RLS, they found that the higher the glutamate level in the thalamus, the less sleep the subject got. They found no such association in the control group without RLS.
Previous studies have shown that even though RLS patients average less than 5.5 hours of sleep per night, they rarely report problems with excessive daytime sleepiness. Allen says the lack of daytime sleepiness is likely related to the role of glutamate, too much of which can put the brain in a state of hyperarousal — day or night.
If confirmed, the study’s results may change the way RLS is treated, Allen says, potentially erasing the sleepless nights that are the worst side effect of the condition. Dopamine-related drugs currently used in RLS do work, but many patients eventually lose the drug benefit and require ever higher doses. When the doses get too high, the medication actually can make the symptoms much worse than before treatment. Scientists don’t fully understand why drugs that increase the amount of dopamine in the brain would work to calm the uncontrollable leg movement of RLS.
Allen says there are already drugs on the market, such as the anticonvulsive gabapentin enacarbil, that can reduce glutamate levels in the brain, but they have not been given as a first-line treatment for RLS patients.
RLS wreaks havoc on sleep because lying down and trying to relax activates the symptoms. Most people with RLS have difficulty falling asleep and staying asleep. Only getting up and moving around typically relieves the discomfort. The sensations range in severity from uncomfortable to irritating to painful.
“It’s exciting to see something totally new in the field — something that really makes sense for the biology of arousal and sleep,” Allen says.
As more is understood about this neurobiology, the findings may not only apply to RLS, he says, but also to some forms of insomnia.
When animals are on the hunt for food, they likely use many senses, and scientists have long wondered how the different senses work together.

New research from the laboratory of CSHL neuroscientist and Assistant Professor Adam Kepecs shows that when rats actively use the senses of smell (sniffing) and touch (through their whiskers) those two processes are locked in synchronicity. The team’s paper, published today in the Journal of Neuroscience, shows that sniffing and “whisking” movements are synchronized even when they are running at different frequencies.
Studies in the 1960s suggested these two sensory activities were coordinated: sniffing, a sharp, profound intake of air; and whisking, the back-and-forth movement of the whiskers to sample the near environment, akin to the sensation of touch as felt through the fingers in humans. Such coordination could be important for decisions that depend on multiple types of sensory information, for instance, locating food. “The question is how two very different streams of sensory information, touch and smell, are integrated into a single multisensory ‘snapshot’ of the environment,” says Kepecs.
These snapshots can be taken at high frequency, up to 12 times a second. To determine whether these two sensorimotor rhythms are indeed phase-locked, Kepecs’ team, including postdocs Sachin Ranade and Balázs Hangya, simultaneously monitored sniffing and whisking in rats freely foraging for food pellets.
At frequencies between 4 and 12 cycles per second, they found strong 1:1 phase locking — in other words, every time the rats extended their whiskers to feel their vicinity, they also smelled it. Surprisingly, they found that even when the sniffing and whisking rhythms were operating at different fundamental frequencies, they remained locked in phase. Key to this is that the phases of the sensory input – the start of inhalation and the onset of whisking – are aligned, which facilitates multisensory integration.
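To make the notion of phase locking concrete, here is a minimal, hypothetical sketch (not the authors' analysis code) of how a phase-locking value (PLV) between two rhythms can be computed from their instantaneous phases. A PLV near 1 indicates a fixed phase relationship; two rhythms at unrelated frequencies drift against each other and give a PLV near 0. The signal frequencies and helper functions below are illustrative assumptions.

```python
import numpy as np

def analytic_phase(x):
    """Instantaneous phase via the analytic signal (FFT-based Hilbert transform)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.angle(np.fft.ifft(spectrum * h))

def plv(x, y):
    """Phase-locking value: 1 = perfectly locked, ~0 = no consistent relation."""
    dphi = analytic_phase(x) - analytic_phase(y)
    return np.abs(np.mean(np.exp(1j * dphi)))

fs = 1000.0                                    # sampling rate, Hz
t = np.arange(0, 4.0, 1.0 / fs)                # 4 seconds of "recording"
whisk = np.sin(2 * np.pi * 8.0 * t)            # 8 Hz whisking rhythm
sniff_locked = np.sin(2 * np.pi * 8.0 * t + 0.7)  # same frequency, fixed lag
sniff_drift = np.sin(2 * np.pi * 11.0 * t)     # unrelated 11 Hz rhythm

print(round(plv(whisk, sniff_locked), 2))  # close to 1.0
print(round(plv(whisk, sniff_drift), 2))   # close to 0.0
```

Note that this sketch only captures the simple same-frequency case; the striking result in the rat data is that locking persists across different fundamental frequencies because the cycle onsets (inhalation start and whisking onset) are realigned on every cycle.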
This is similar to how a person’s breathing rhythm settles into place while running and is synchronized to the steps. In both cases, the coordination could be advantageous in terms of energy efficiency. A crucial difference, though, is that in humans, the breathing rate has to catch up to the running rhythm after changes in pace, while for sniffing and whisking in rats they lock into phase immediately.
Even though human behavior doesn’t seem to be overtly tied to rhythms, there are hints that it could be. “Underneath the smoothly executed movements of humans there are rhythm generators, which are sometimes revealed in some diseases, for example the tremors seen in Parkinson’s disease, or in the brain waves that result from the synchronized firing of neurons,” says Kepecs. Studying the rhythms of multisensory inputs in rodents could provide clues to a fundamental principle underlying sensory and brain rhythms that are essential to all animals, including humans.
Ever since its introduction in the 1990s, the “clot-busting” drug tPA has been considered a “double-edged sword” for people experiencing a stroke. It can help restore blood flow to the brain, but it also can increase the likelihood of deadly hemorrhage. In fact, many people experiencing a stroke do not receive tPA because the window for giving the drug is limited to the first few hours after a stroke’s onset.

But Emory neurologist Manuel Yepes may have found a way to open that window. Even when its clot-dissolving powers are removed, tPA can still protect brain cells in animals from the loss of oxygen and glucose induced by a stroke, Yepes’ team reported in the Journal of Neuroscience (July 2012).
"We may have been giving the right medication, for the wrong reason," Yepes says. "tPA is more than a clot-busting drug. It functions naturally as a neuroprotectant."
The finding suggests that a modified version of the drug could provide benefits to patients who have experienced a stroke, without increasing the risk of bleeding.
"This would be a major breakthrough in the care of patients with stroke, if it could be developed," says Michael Frankel, director of the Marcus Stroke and Neuroscience Center at Grady Memorial Hospital.
tPA is a protein produced by the body and has several functions. One is to activate the enzyme plasmin, which breaks down clots. But Yepes’ team has discovered that the protein has additional functions. In cultured neurons, for example, it appears to protect the cells, turning on a set of genes that help them deal with a lack of oxygen and glucose. This result contradicts previous reports that the protein acts as a neurotoxin in the nervous system.
Tweaking tPA so that it is unable to activate plasmin—while keeping intact the rest of its functions—allowed the researchers to preserve its protective effect on neurons in culture. This modified tPA also reduced the size of the damaged area of the brain after simulated stroke in mice, with an effect comparable in strength to regular tPA. The next step is to test the modified version of tPA in a pilot clinical trial.
The possibility that tPA may be working as a neuroprotectant may explain why, in large clinical studies, tPA’s benefits sometimes go unobserved until several weeks after treatment, Yepes says. “If it was just a matter of the clot, getting rid of the clot should make the patient better quickly,” he says. “It’s been difficult to explain why you should have to wait three months to see a benefit.”
Scientists at the Virginia Tech Carilion Research Institute have discovered how the predominant class of Alzheimer’s pharmaceuticals might sharpen the brain’s performance.
One factor even more important than the size of a television screen is the quality of the signal it displays. Having a life-sized projection of Harry Potter dodging a Bludger in a Quidditch match is of little use if the details are lost to pixelation.
The importance of transmitting clear signals, however, is not limited to the airwaves. The same principle applies to the electrical impulses navigating a human brain. Now, new research has shown that one of the few drugs approved for the treatment of Alzheimer’s disease helps patients by clearing up the signals coming in from the outside world.
The discovery was made by a team of researchers led by Rosalyn Moran, an assistant professor at the Virginia Tech Carilion Research Institute. Her study indicates that cholinesterase inhibitors — a class of drugs that stop the breakdown of the neurotransmitter acetylcholine — allow signals to enter the brain with more precision and less background noise.
“Increasing the levels of acetylcholine appears to turn your fuzzy, old analog TV signal into a shiny, new, high-definition one,” said Moran, who holds an appointment as an assistant professor in the Virginia Tech College of Engineering. “And the drug does this in the sensory cortices. These are the workhorses of the brain, the gatekeepers, not the more sophisticated processing regions — such as the prefrontal cortex — where one may have expected the drugs to have their most prominent effect.”
Alzheimer’s disease affects more than 35 million people worldwide — a number expected to double every 20 years, leading to more than 115 million cases by 2050. Of the five pharmaceuticals approved to treat the disease by the U.S. Food and Drug Administration, four are cholinesterase inhibitors. Although it is clear that the drugs increase the amount of acetylcholine in the brain, why this improves Alzheimer’s symptoms has been unknown. If scientists understood the mechanisms and pathways responsible for improvement, they might be able to tailor better drugs to combat the disease, which costs more than $200 billion annually in the United States alone.
In the new study, Moran recruited 13 healthy young adults and gave them doses of galantamine, one of the cholinesterase inhibitors commonly prescribed to Alzheimer’s patients. Two electroencephalograms were recorded — one with the drug and one without — as the participants listened to a series of modulating tones while focusing on a simple concentration task.
The researchers were looking for differences in neural activity between the two drug states in response to surprising changes in the sound patterns that the participants were hearing.
The scientists compared the results with computer models built on a Bayesian brain theory, known as the Free Energy Principle, which is a leading theory that describes the basic rules of neuronal communication and explains the creation of complex networks.
The theory hypothesizes that neurons seek to reduce uncertainty, which can be modeled and calculated as free energy. Connecting tens of thousands of neurons behaving in this manner produces the probability machine that we call a brain.
Moran and her colleagues compiled 10 computer simulations based on the different effects the drugs could have on the brain. The model that best fit the results revealed that the low-level, early stages of the brain’s neural processing were the ones benefitting from the drugs and producing clearer, more precise signals.
“When people take these drugs you can imagine the brain bathed in them,” Moran said. “But what we found is that the drugs don’t have broad-stroke impacts on brain activity. Instead, they are working very specifically at the cortex’s entry points, gating the signals coming into the network in the first place.”
The study appears in Wednesday’s (May 8) issue of The Journal of Neuroscience in the article, “Free Energy, Precision and Learning: The Role of Cholinergic Neuromodulation.”
For years, physicians who treat neurological diseases have been stumped by a natural filter in the body that allows few substances, including life-saving drugs, to enter the brain through the bloodstream. They may soon have a new pathway to the organ via a technique developed by a physicist and an immunologist working together at Florida International University’s Herbert Wertheim College of Medicine.

The FIU researchers developed the technique to deliver and fully release the anti-HIV drug AZTTP into the brain, but their finding has the potential to also help patients who suffer from neurological diseases such as Alzheimer’s, Parkinson’s and epilepsy, as well as cancer.
“Anything where you have trouble getting drugs to the brain and releasing it, this opens so many opportunities,’’ said Madhavan Nair, an FIU professor and chair of the medical school’s immunology department.
In an in vitro laboratory test with HIV-infected cells, Nair and a colleague, Sakhrat Khizroev, a professor of immunology and electrical engineering, attached the antiretroviral drug AZTTP to tiny, magneto-electric nanoparticles. Then, using magnetic energy, they guided the drug across a cell membrane created in the lab to mimic the blood-brain barrier found in the human body.
Once the drug reached its target, researchers triggered its release from the nanoparticle by zapping it with a low-energy electrical current. The drug remained functional and structurally sound after the release, according to the experiment findings.
“We learned to control electrical forces in the brain using magnetics,’’ said Khizroev, who designed, oversaw and supervised the entire project. “We pretty much opened a pathway to the brain.’’
The test findings were published in April in the online peer-reviewed journal, Nature Communications. Researchers believe that using this method will allow physicians to send a higher level of AZTTP — up to 97 percent more — to HIV-infected cells in the brain.
Currently, more than 99 percent of the antiretroviral therapies used to treat HIV, such as AZTTP, are deposited in the liver, lungs and other organs before they reach the brain.
While anti-viral drugs have helped HIV patients live longer by reducing their viral loads, the drugs cannot pass the blood-brain barrier in significant amounts, which allows the virus to lurk unchecked in the brain and can lead to neurological damage, said Dr. Cheryl Holder, a practicing physician and FIU professor who specializes in treating patients with HIV.
“We know that even though the viral load is undetectable in the blood, we don’t know what’s going on in the brain fully,’’ Holder said.
HIV causes constant inflammation, she said, and the virus can pool in areas of the brain where medicine cannot reach, potentially causing damage.
“It’s important to get the drug to the brain,’’ she said, “to help prevent dementia in older patients, and inflammation.’’
But the ability to target drug delivery and trigger its release on demand in the brain has been impossible without opening the skull, Nair and Khizroev said.
Nair, an immunologist who specializes in HIV research, and Khizroev, an electrical engineer and physicist, began collaborating on the project about 18 months ago after winning a National Institutes of Health grant to study the use of magnetic particles.
One of the keys to success was controlling the release of the drug without adversely affecting the brain.
The researchers found their solution in the magneto-electric nanoparticles, which are uniquely suited to deliver and release drugs in the brain, Khizroev said. These nanoparticles can convert magnetic energy into the electrical energy needed to release the drugs without creating heat, which could potentially harm the brain.
The development of a new, less invasive pathway to the brain would open the door to many new medical uses.
Khizroev said he recently returned from a trip to the University of Southern California, where he briefed physicians at the medical school on the technique and its potential for cancer treatment. And Nair said he received a letter recently on behalf of a 91-year-old man suffering from Parkinson’s, asking when the technique might become available for use in people.
That may take a while. With the first phase of testing successfully completed using in vitro experiments, the second will take place at Emory University in Georgia, where researchers will test the technique on monkeys infected with HIV.
If researchers complete the second phase successfully, clinical trials on humans could follow, Nair said. Approval from the Food and Drug Administration would be required before the technique becomes commercially available, he said.
FIU researchers have applied for a patent and would receive royalties, they said, though the university would benefit the most, in part because a successful research project could open opportunities for more grant funding on other topics.
For Khizroev, who had previously done research on quantum computing and information processing, the project has offered a way to put his scientific knowledge to use in a way that could have a direct effect on people’s health.
“I wanted to apply my knowledge of nanoparticles to something important,’’ he said.
Researchers at the Monell Center and collaborators have identified a protein that is critical to the ability of mammals to smell. Mice engineered to lack the Ggamma13 protein in their olfactory receptors were functionally anosmic – unable to smell. The findings may lend insight into the underlying causes of certain smell disorders in humans.
“Without Ggamma13, the mice cannot smell,” said senior author Liquan Huang, PhD, a molecular biologist at Monell. “This raises the possibility that mutations in the Ggamma13 gene may contribute to certain forms of human anosmia and that gene sequencing may be able to predict some instances of smell loss.”
Odor molecules entering the nose are sensed by a family of olfactory receptors. Inside the receptor cells, a complex cascade of molecular interactions converts information to ultimately generate an electrical signal. This signal, called an action potential, is what tells the brain that an odor has been detected.
To date, the identities of some of the intracellular molecules that convert odor information into an action potential remain a mystery. Suspecting that a protein called Ggamma13 might be involved, the research team engineered mice lacking this protein and then tested how the ‘knockout’ mice responded to odors.
Importantly, because the Ggamma13 protein plays critical roles in other parts of the body, the Ggamma13 ‘knockout’ was confined exclusively to smell receptor cells. This specificity allowed the researchers to characterize the effect of Ggamma13 deletion on the olfactory system without interference from changes in other tissues.
Both behavioral and physiological experiments revealed that the Ggamma13 knockout mice did not respond to odors. The findings were published in The Journal of Neuroscience.
In behavioral tests, control mice with an intact sense of smell were able to detect and retrieve a piece of buried food in less than 30 seconds. However, mice lacking Ggamma13 in their olfactory cells required more than 8 minutes to perform the same task. Both sets of mice were able to quickly locate the food when it was placed in plain sight.
A second set of experiments measured olfactory function on a physiological level. Using olfactory tissue from knockout and control mice, the researchers recorded electrical responses to 15 different odors. Responses from the Ggamma13 knockout mice were greatly reduced, suggesting that the olfactory receptors of these mice were unable to translate odor signals into an electrical response.
Together, the findings demonstrate that Ggamma13 is essential for mammals to smell odors and extend the current understanding of how olfactory receptor cells communicate information about odors to the brain. Future studies will seek to identify how Ggamma13 interacts with other molecules within the olfactory receptor.
“Loss of olfactory function can greatly reduce quality of life,” said Huang. “Our findings demonstrate the significant consequences when just one molecular component of this complex system does not function properly.”
Scientists from the Luxembourg Centre for Systems Biomedicine (LCSB) of the University of Luxembourg have discovered that immune cells in the brain can produce a substance that prevents bacterial growth: namely itaconic acid.
Until now, biologists had assumed that only certain fungi produced itaconic acid. A team working with Dr. Karsten Hiller, head of the Metabolomics Group at LCSB and funded by the ATTRACT program of Luxembourg’s National Research Fund, and Dr. Alessandro Michelucci has now shown that so-called microglial cells in mammals are also capable of producing this acid. “This is a groundbreaking result,” says Prof. Dr. Rudi Balling, director of LCSB: “It is the first proof of an endogenous antibiotic in the brain.” The researchers have now published their results in the prestigious scientific journal PNAS.
Alessandro Michelucci is a cell biologist with a focus on neuroscience. This is an ideal combination for LCSB, with its focus on neurodegenerative diseases, and Parkinson’s disease especially – i.e. changes in the cells of the human nervous system. “Little is still known about the immune responses of the brain,” says Michelucci. “However, because we suspect there are connections between the immune system and Parkinson’s disease, we want to find out what happens in the brain when we trigger an immune response there.” For this purpose, Michelucci brought cell cultures of microglial cells, the immune cells of the brain, into contact with specific constituents of bacterial membranes. The microglial cells responded by producing a cocktail of metabolic products.
This cocktail was subsequently analysed by Karsten Hiller’s metabolomics group. Upon closer examination, the scientists discovered that production of one substance in particular – itaconic acid – was upregulated. “Itaconic acid plays a central role in plastics production. Industrial bioreactors use fungi to mass-produce it,” says Hiller. “The realisation that mammalian cells synthesise itaconic acid came as a major surprise.”
However, it was not known how mammalian cells can synthesise this compound. By comparing the fungal enzyme sequence to human protein sequences, Karsten Hiller then identified a human gene that encodes a protein similar to the one in fungi: immunoresponsive gene 1, or IRG1 for short – a most exciting discovery, as the function of this gene was not known. Says Hiller: “When it comes to IRG1, there is a lot of uncharted territory. What we did know is that it seems to play some role in the big picture of the immune response, but what exactly that role was, we were not sure.”
To change this situation, the team turned off IRG1 in cell cultures and, conversely, added the gene to cells that normally do not express it. The experiments confirmed that in mammals, IRG1 codes for an itaconic acid-producing enzyme. But why? When immune cells like macrophages and microglial cells take up bacteria in order to inactivate them, the intruders are actually able to survive by using a special metabolic pathway called the glyoxylate shunt. According to Hiller, “macrophages produce itaconic acid in an effort to foil this bacterial survival strategy. The acid blocks the first enzyme in the glyoxylate pathway, which is how macrophages partially inhibit bacterial growth in order to support the innate immune response and digest the bacteria they have taken up.”
LCSB director Prof. Dr. Rudi Balling describes the possibilities that these insights offer: “Parkinson’s disease is highly complex and has many causes. We now intend to study the importance of infections of the nervous system in this respect – and whether itaconic acid can play a role in diagnosing and treating Parkinson’s disease.”
Research from King’s College London reveals the detailed mechanism behind how stress hormones reduce the number of new brain cells - a process considered to be linked to depression.

The researchers identified a key protein responsible for the long-term detrimental effect of stress on cells, and importantly, successfully used a drug compound to block this effect, offering a potential new avenue for drug discovery.
The study, published in Proceedings of the National Academy of Sciences (PNAS) was co-funded by the National Institute for Health Research Biomedical Research Centre (NIHR BRC) for Mental Health at the South London and Maudsley NHS Foundation Trust and King’s College London.
Depression affects approximately 1 in 5 people in the UK at some point in their lives. The World Health Organisation estimates that by 2030, depression will be the leading cause of the global burden of disease. Treatment for depression involves medication, talking therapy, or, most commonly, a combination of both. Current antidepressant medication is successful in treating depression in about 50-65% of cases, highlighting the need for new, more effective treatments.
Depression and successful antidepressant treatment are associated with changes in a process called “neurogenesis” – the ability of the adult brain to continue to produce new brain cells. At a molecular level, stress is known to increase levels of cortisol (a stress hormone), which in turn acts on a receptor called the glucocorticoid receptor (GR). However, the exact mechanism explaining how the GR decreases neurogenesis in the brain has remained unclear.
Professor Carmine Pariante, from King’s College London’s Institute of Psychiatry and lead author of the paper, says: “With as much as half of all depressed patients failing to improve with currently available medications, developing new, more effective antidepressants is an important priority. In order to do this, we need to understand the abnormal mechanisms that we can target. Our study shows the importance of conducting research on cellular models, animal models and clinical samples, all under one roof in order to better facilitate the translation of laboratory findings to patient benefit.”
In this study, the multidisciplinary team of researchers studied cellular and animal models before confirming their findings in human blood samples. First, the researchers studied human hippocampal stem cells, which are the source of new cells in the human brain. They gave the cells cortisol to measure the effect on neurogenesis and found that a protein called SGK1 was important in mediating the effects of stress hormones on neurogenesis and on the activity of the GR.
By measuring the effect of cortisol over time, they found that increased levels of SGK1 prolong the detrimental effects of stress hormones on neurogenesis. Specifically, SGK1 enhances and maintains the long-term effect of stress hormones, by keeping the GR active even after cortisol had been washed out of the cells.
Next, the researchers used a pharmacological compound (GSK650394) known to inhibit SGK1, and found they were able to block the detrimental effects of stress hormones and ultimately increase the number of new brain cells.
Finally, the research team were able to confirm these findings by studying levels of SGK1 in animal models and human blood samples of 25 drug-free depressed patients.
Dr Christoph Anacker, from King’s College London’s Institute of Psychiatry and first author of the paper, says: “Because a reduction of neurogenesis is considered part of the process leading to depression, targeting the molecular pathways that regulate this process may be a promising therapeutic strategy. This novel mechanism may be particularly important for the effects of chronic stress on mood, and ultimately depressive symptoms. Pharmacological interventions aimed at reducing the levels of SGK1 in depressed patients may therefore be a potential strategy for future antidepressant treatments.”
A study by Stephanie Cosentino, Ph.D., of Columbia University, New York, and colleagues examines the relationship between families with exceptional longevity and cognitive impairment consistent with Alzheimer disease.
The cross-sectional study included a total of 1,870 individuals (1,510 family members and 360 spouse controls) recruited through the Long Life Family Study. The main outcome measure was the prevalence of cognitive impairment based on a diagnostic algorithm validated using the National Alzheimer’s Coordinating Center data set.
According to study results, the cognitive algorithm classified 546 individuals (38.5 percent) as having cognitive impairment consistent with Alzheimer disease. Long Life Family Study probands had a slight but not statistically significant reduction in the risk of cognitive impairment compared with spouse controls (121 of 232 for probands versus 45 of 103 for spouse controls), whereas Long Life Family Study sons and daughters had a reduced risk of cognitive impairment (11 of 213 for sons and daughters versus 28 of 216 for spouse controls). Restriction to nieces and nephews in the offspring generation attenuated this association (37 of 328 for nieces and nephews versus 28 of 216 for spouse controls).
"Overall, our results appear to be consistent with a delayed onset of disease in long-lived families, such that individuals who are part of exceptionally long-lived families are protected from cognitive impairment until later in life," the study concludes.
A new study from investigators at the Benson-Henry Institute for Mind/Body Medicine at Massachusetts General Hospital and Beth Israel Deaconess Medical Center finds that eliciting the relaxation response—a physiologic state of deep rest induced by practices such as meditation, yoga, deep breathing and prayer—produces immediate changes in the expression of genes involved in immune function, energy metabolism and insulin secretion.

“Many studies have shown that mind/body interventions like the relaxation response can reduce stress and enhance wellness in healthy individuals and counteract the adverse clinical effects of stress in conditions like hypertension, anxiety, diabetes and aging,” said Herbert Benson, HMS professor of medicine at Mass General and co-senior author of the report.
Benson is director emeritus of the Benson-Henry Institute.
“Now for the first time we’ve identified the key physiological hubs through which these benefits might be induced,” he said.
Published in the open-access journal PLOS ONE, the study combined advanced expression profiling and systems biology analysis to both identify genes affected by relaxation response practice and to determine the potential biological relevance of those changes.
“Some of the biological pathways we identify as being regulated by relaxation response practice are already known to play specific roles in stress, inflammation and human disease. For others, the connections are still speculative, but this study is generating new hypotheses for further investigation,” said Towia Libermann, HMS associate professor of medicine at Beth Israel Deaconess and co-senior author of the study.
Benson first described the relaxation response—the physiologic opposite of the fight-or-flight response—almost 40 years ago, and his team has pioneered the application of mind/body techniques to a wide range of health problems. Studies in many peer-reviewed journals have documented how the relaxation response both alleviates symptoms of anxiety and many other disorders and also affects factors such as heart rate, blood pressure, oxygen consumption and brain activity.
In 2008, Benson and Libermann led a study finding that long-term practice of the relaxation response changed the expression of genes involved with the body’s response to stress. The current study examined changes produced during a single session of relaxation response practice, as well as those taking place over longer periods of time.
The study enrolled a group of 26 healthy adults with no experience in relaxation response practice, who then completed an 8-week relaxation-response training course.
Before they started their training, they went through what was essentially a control group session: Blood samples were taken before and immediately after the participants listened to a 20-minute health education CD and again 15 minutes later. After completing the training course, a similar set of blood tests was taken before and after participants listened to a 20-minute CD used to elicit the relaxation response as part of daily practice.
The sets of blood tests taken before the training program were designated “novice,” and those taken after training completion were called “short-term practitioners.” For further comparison, a similar set of blood samples was taken from a group of 25 individuals with 4 to 25 years’ experience regularly eliciting the relaxation response through many different techniques before and after they listened to the same relaxation response CD.
Blood samples from all participants were analyzed to determine the expression of more than 22,000 genes at the different time points.
The results revealed significant changes in the expression of several important groups of genes between the novice samples and those from both the short- and long-term sets. Even more pronounced changes were shown in the long-term practitioners.
A systems biology analysis of known interactions among the proteins produced by the affected genes revealed that pathways involved with energy metabolism, particularly the function of mitochondria, were upregulated during the relaxation response. Pathways controlled by activation of a protein called NF-κB—known to have a prominent role in inflammation, stress, trauma and cancer—were suppressed after relaxation response elicitation. The expression of genes involved in insulin pathways was also significantly altered.
“The combination of genomics and systems biology in this study provided great insight into the key molecules and physiological gene interaction networks that might be involved in relaying beneficial effects of relaxation response in healthy subjects,” said Manoj Bhasin, HMS assistant professor of medicine, co-lead author of the study, and co-director of the Beth Israel Deaconess Genomics, Proteomics, Bioinformatics and Systems Biology Center.
Bhasin noted that these insights should provide a framework for determining, on a genomic basis, whether the relaxation response will help alleviate symptoms of diseases triggered by stress. The work could also lead to developing biomarkers that may suggest how individual patients will respond to interventions.
Benson stressed that the long-term practitioners in this study elicited the relaxation response through many different techniques—various forms of meditation, yoga or prayer—but those differences were not reflected in the gene expression patterns.
“People have been engaging in these practices for thousands of years, and our finding of this unity of function on a basic-science, genomic level gives greater credibility to what some have called ‘new age medicine,’ ” he said.
“While this and our previous studies focused on healthy participants, we currently are studying how the genomic changes induced by mind/body interventions affect pathways involved in hypertension, inflammatory bowel disease and irritable bowel syndrome. We have also started a study—a collaborative undertaking between Dana-Farber Cancer Institute, Mass General and Beth Israel Deaconess—in patients with precursor forms of multiple myeloma, a condition known to involve activation of NF-κB pathways,” said Libermann, who is the director of the Beth Israel Deaconess Medical Center Genomics, Proteomics, Bioinformatics and Systems Biology Center.
Epilepsy that does not respond to drugs can be halted in adult mice by transplanting a specific type of cell into the brain, UC San Francisco researchers have discovered, raising hope that a similar treatment might work in severe forms of human epilepsy.
UCSF scientists controlled seizures in epileptic mice with a one-time transplantation of medial ganglionic eminence (MGE) cells, which inhibit signaling in overactive nerve circuits, into the hippocampus, a brain region associated with seizures, as well as with learning and memory. Other researchers had previously used different cell types in rodent cell transplantation experiments and failed to stop seizures.
Cell therapy has become an active focus of epilepsy research, in part because current medications, even when effective, only control symptoms and not underlying causes of the disease, according to Scott C. Baraban, PhD, who holds the William K. Bowes Jr. Endowed Chair in Neuroscience Research at UCSF and led the new study. In many types of epilepsy, he said, current drugs have no therapeutic value at all.
“Our results are an encouraging step toward using inhibitory neurons for cell transplantation in adults with severe forms of epilepsy,” Baraban said. “This procedure offers the possibility of controlling seizures and rescuing cognitive deficits in these patients.”
The findings, which are the first ever to report stopping seizures in mouse models of adult human epilepsy, will be published online May 5 in the journal Nature Neuroscience.
During epileptic seizures, extreme muscle contractions and, often, a loss of consciousness can cause seizure sufferers to lose control, fall and sometimes be seriously injured. The unseen malfunction behind these effects is the abnormal firing of many excitatory nerve cells in the brain at the same time.
In the UCSF study, the transplanted inhibitory cells quenched this synchronous, nerve-signaling firestorm, eliminating seizures in half of the treated mice and dramatically reducing the number of spontaneous seizures in the rest. Robert Hunt, PhD, a postdoctoral fellow in the Baraban lab, guided many of the key experiments.
In another encouraging step, UCSF researchers reported May 2 that they found a way to reliably generate human MGE-like cells in the laboratory, and that, when transplanted into healthy mice, the cells similarly spun off functional inhibitory nerve cells. That research can be found online in the journal Cell Stem Cell.
In many forms of epilepsy, loss or malfunction of inhibitory nerve cells within the hippocampus plays a critical role. MGE cells are progenitor cells that form early within the embryo and are capable of generating mature inhibitory nerve cells called interneurons. In the Baraban-led UCSF study, the transplanted MGE cells from mouse embryos migrated and generated interneurons, in effect replacing the cells that fail in epilepsy. The new cells integrated into existing neural circuits in the mice, the researchers found.
“These cells migrate widely and integrate into the adult brain as new inhibitory neurons,” Baraban said. “This is the first report in a mouse model of adult epilepsy in which mice that already were having seizures stopped having seizures after treatment.”
The mouse model of disease that Baraban’s lab team worked with is meant to resemble a severe and typically drug-resistant form of human epilepsy called mesial temporal lobe epilepsy, in which seizures are thought to arise in the hippocampus. In contrast to transplants into the hippocampus, transplants into the amygdala, a brain region involved in memory and emotion, failed to halt seizure activity in this same mouse model, the researchers found.
Temporal lobe epilepsy often develops in adolescence, in some cases long after a seizure episode triggered during early childhood by a high fever. A similar condition in mice can be induced with a chemical exposure, and in addition to seizures, this mouse model shares other pathological features with the human condition, such as loss of cells in the hippocampus, behavioral alterations and impaired problem solving.
In the Nature Neuroscience study, in addition to having fewer seizures, treated mice became less abnormally agitated, less hyperactive, and performed better in water-maze tests.
New research from Bristol and Cardiff universities shows that children whose brains process information more slowly than their peers are at greater risk of psychotic experiences.

These can include hearing voices, seeing things that are not present or holding unrealistic beliefs that other people don’t share. These experiences can often be distressing and frightening and interfere with their everyday life.
Children with psychotic experiences are more likely to develop psychotic illnesses like schizophrenia later in life.
Using data gathered from 6,784 participants in Children of the 90s, researchers from the MRC Centre for Neuropsychiatric Genetics and Genomics in Cardiff University and the School of Social and Community Medicine in the University of Bristol examined whether performance in a number of cognitive tests conducted at ages 8, 10 and 11 was related to the risk of having psychotic experiences at age 12.
The tests assessed how quickly the children could process information, as well as their attention, memory, reasoning, and ability to solve problems.
Among those interviewed, 787 (11.6 per cent) had suspected or definite psychotic experiences at age 12. Children who scored less well in the various tests at the ages of 8, 10 and 11 were more likely to have psychotic experiences at age 12.
This was particularly the case for the test that assessed how quickly the children processed information. Furthermore, children whose speed of processing information became slower between ages 8 and 11 had greater risk of having psychotic experiences at age 12.
These findings did not change when other factors, including the parents’ psychiatric history and the children’s own developmental delay, were taken into account. The study’s findings could have important implications for identifying children at risk of psychosis who might benefit from early treatment.
Speaking about the findings, lead author and PhD student, Miss Maria Niarchou from Cardiff University’s School of Medicine, said:
‘Previous research has shown a link between the slowing down of information processing and schizophrenia and this was found to be at least in part the result of anti-psychotic medication.
‘However, this study shows that impaired information processing speed can already be present in childhood and associated with higher risk of psychotic experiences, irrespective of medication.
‘Our findings improve our understanding of the brain processes that are associated with high risk of psychotic experiences in childhood and in turn high risk of psychotic disorder later in life.’
Senior author, Dr Marianne van den Bree of Cardiff University’s School of Medicine, said:
‘Schizophrenia is a complex and relatively rare mental health condition, occurring at a rate of 1 per cent in the general population. Not every child with impaired information processing speed is at risk of psychosis later in life. Further research is needed to determine whether interventions to improve processing speed in at-risk children can lead to decreased transition to psychotic disorders.’
Ruth Coombs, Manager for Influence and Change at Mind Cymru, said:
‘This is a very interesting piece of research, which could help young people at risk of developing mental health problems in later life build resilience and benefit from early intervention. It is important to remember that people can and do recover from mental health problems and we also welcome further research which supports resilience building in young people.’
"I’ve been in a crowded elevator with mirrors all around, and a woman will move and I’ll go to get out the way and then realise: ‘oh that woman is me’."
Heather Sellers has prosopagnosia, more commonly known as face blindness. “I can’t remember any image of the human face. It’s simply not special to me,” she says. “I don’t process them like I do a car or a dog. It’s not a visual problem, it’s a perception problem.”

Heather knew from a young age that something was different about the way she navigated her world, but her condition wasn’t diagnosed until she was in her 30s. “I always knew something was wrong – it was impossible for me to trust my perceptions of the world. I was diagnosed as anxious. My parents thought I was crazy.”
The condition is estimated to affect around 2.5 per cent of the population, and it’s common for those who have it not to realise that anything is wrong. “In many ways it’s a subtle disorder,” says Heather. “It’s easy for your brain to compensate because there are so many other things you can use to identify a person: hair colour, gait or certain clothes. But meet that person out of context and it’s socially devastating.”
As a child, she was once separated from her mum at a grocery store. Store staff reunited the pair, but it was confusing for Heather, since she didn’t initially recognise her mother. “But I didn’t know that I wasn’t recognising her.”
Chaos explained
Heather was 36 when she stumbled across the phrase face blindness in a psychology textbook. “When I saw those two words I knew instantly that was exactly what I had – that explained all the chaos.”
She found her way to Harvard neuroscientist Brad Duchaine who diagnosed her as having one of the three worst cases of the disorder that he had ever seen.
So what’s it like to not recognise anyone you know? Heather says the biggest difficulty with the disorder is recognising people who she is close to – the people that are most important to recognise. In the school where she teaches English she is fine, because she recognises people by their clothes or hair and asks her students to wear name badges.
But it can be harder in social settings. Once she went up to the wrong person at a party and put her arm around him thinking he was her partner. And at college men would phone her angry that she had walked straight past them after they had had a date. “At the time I was thinking ‘I didn’t see you, why is everyone making my life so difficult?’”
It’s not just other people Heather doesn’t recognise – she can’t identify her own face either. “A few times I have been in a crowded elevator with mirrors all around and a woman will move, and I will go to get out the way and then realise ‘oh that woman is me’.” She also finds it unsettling to see photos and not recognise herself in them.
Face processing
To try and understand the condition, Duchaine and his colleagues recorded brain activity while 12 people with prosopagnosia looked at famous and non-famous faces. The team found that part of the brain responsible for stored visual memory was activated in six people when they saw the famous faces.
But another component of brain activity thought to represent a later stage of face processing wasn’t triggered. “Some part of their brain was recognising the face,” says Duchaine, but the brain was failing to pass this information into higher-level consciousness (Brain).
"There may be training where we give people feedback and say ‘look you recognise that face even though you’re not aware of it’," says Duchaine.
Now Zaira Cattaneo at the University of Milano-Bicocca in Italy and colleagues have identified the specific brain areas that allow us to recognise our friends. The team used transcranial magnetic stimulation to block two vital aspects of face processing in people without prosopagnosia. Targeting the left prefrontal cortex blocked the ability to distinguish individual features like the nose and eyes, and blocking the right prefrontal cortex impaired the ability to distinguish the location of those features from one another (NeuroImage).
"We made performance worse," says Cattaneo. "We want to make it better." Now the team are trying to activate these areas of the brain. "The aim is to enhance face recognition abilities by directly modulating excitability in the prefrontal cortices," says Cattaneo.
Would Heather want a cure, should one be found? “I can’t imagine what you see when you see a face, and it’s scary,” she says. “I go back and forth on what I’d do. I’ve done so much work in figuring out how to chart my world, I’d need to do a whole new rewrite. But it would be fascinating.”

Eye See You: Composites of hard and soft materials and circuits make up an electronic version of an insect’s compound eye.
New “insect eye” cameras could someday help flying drones see into every corner of a battlefield or give tiny medical scopes an all-around view inside the human body. A team of researchers from the United States has constructed such a camera, which offers an almost 180-degree field of view using hundreds of tiny lenses.
The centimeter-wide digital camera has 180 microlenses—roughly what fire ants or bark beetles have in their compound eyes—placed on a hemispherical array. Researchers hope their design will eventually lead to insect-eye cameras that exceed even nature’s blueprints, according to a report in the 2 May issue of the journal Nature.
“We think of the insect world as an inspiration for design, but we’re not constrained by it,” says John Rogers, a physical chemist and materials engineer at the University of Illinois at Urbana-Champaign. “It’s not biomimicry; it’s bioinspiration.”
Biological insect eyes consist of hundreds or thousands of the tiny units, each having a lens, pigment, and photoreceptors. Each unit’s lens is mounted on a transparent crystalline cone that pipes light down to the photoreceptors. Black pigment isolates each of the eye units and screens out background light.

Biomimicry: The 160-degree, 180-pixel eye is inspired by an insect’s compound eye.
Nature’s design offers two huge advantages over that of ordinary cameras. First, the hemispherical shape allows for extremely wide-angle fields of view. Second, the hemispherical array of tiny lenses has an almost infinite depth of field, which keeps objects in focus regardless of their distance from the camera.
But camera chips aren’t usually shaped like fly eyes. Researchers faced the tricky task of bending the camera into a hemispherical shape without distorting the image created by each lens or ruining the electronics beneath the tiny lenses. Their solution “relies on composites of hard and soft materials in strategic layouts that allow stretching and bending and flexing to go from planar [flat] to hemispherical form,” Rogers says.
Rogers and his colleagues put the tiny lenses on top of columns connected to a flexible base membrane—all made from elastomeric polydimethylsiloxane material, which is also used in contact lenses. Each supporting cylindrical post protected its lens from any bending or stretching in the base membrane.
The array of tiny lenses sat on a second layer of stretchable silicon photodiodes that converted the focused light from the lenses into current or voltage. Tiny serpentine wires connected the array of photodiodes with the other electronics.
A third, “black matrix” layer sat on top of both the lens layer and the photodiode layer to act as the shield against background light. The black pigment of real insect eyes can adjust in real time to changing light conditions, but the artificial camera version must use software to make such adjustments.
The design allowed researchers to freely inflate the flat layers into the final hemispherical shape—a camera with a 160-degree field of view. (The prototype camera’s array of lenses didn’t quite stretch all the way to the edge of the hemispherical shape.)
A next step could involve figuring out how to dynamically “tune” the inflated shape of the camera, says Rogers. He has also challenged his team to try inflating the camera shape into an almost full spherical shape—he envisions flexible camera designs based on the different compound eyes of other creatures, such as lobsters and shrimp (reflecting superposition eyes), moths and lacewings (refracting superposition eyes), and houseflies (neural superposition eyes).
The insect-eye camera depends on each individual unit to contribute 1 pixel of resolution. A 180-pixel-resolution camera may not do much right now, but the camera design can scale up its resolution by adding more units to the overall array. Rogers anticipates making camera designs with better resolution than the eyes of praying mantises (15 000 eye units) and dragonflies (28 000 eye units).
The technology won’t likely be used in consumer digital cameras any time soon. But the insect-eye cameras could be used in medical devices, such as endoscopes, which give physicians a look inside the human body. Alexander Borst, director of the Max Planck Institute of Neurobiology, in Germany, envisions commercial versions of the cameras within the next year or two.
Such cameras may also prove useful for small drones to explore disaster areas such as those left behind by the Chernobyl and Fukushima nuclear disasters, Borst says. He was not involved in the latest research but hopes to work with Rogers and his colleagues to put the insect-eye camera to use in a robo-fly developed at his institution.
A key type of human brain cell developed in the laboratory grows seamlessly when transplanted into the brains of mice, UC San Francisco researchers have discovered, raising hope that these cells might one day be used to treat people with Parkinson’s disease, epilepsy, and possibly even Alzheimer’s disease, as well as complications of spinal cord injury such as chronic pain and spasticity.

“We think this one type of cell may be useful in treating several types of neurodevelopmental and neurodegenerative disorders in a targeted way,” said Arnold Kriegstein, MD, PhD, director of the Eli and Edythe Broad Center of Regeneration Medicine and Stem Cell Research at UCSF and co-lead author on the paper.
The researchers generated and transplanted a type of human nerve-cell progenitor called the medial ganglionic eminence (MGE) cell, in experiments described in the May 2 edition of Cell Stem Cell. Development of these human MGE cells within the mouse brain mimics what occurs in human development, they said.
Kriegstein sees MGE cells as a potential treatment to better control nerve circuits that become overactive in certain neurological disorders. Unlike other neural stem cells that can form many cell types — and that may potentially be less controllable as a consequence — most MGE cells are restricted to producing a type of cell called an interneuron. Interneurons integrate into the brain and provide controlled inhibition to balance the activity of nerve circuits.
To generate MGE cells in the lab, the researchers reliably directed the differentiation of human pluripotent stem cells — either human embryonic stem cells or induced pluripotent stem cells derived from human skin. These two kinds of stem cells have virtually unlimited potential to become any human cell type. When transplanted into a strain of mice that does not reject human tissue, the human MGE-like cells survived within the rodent forebrain, integrated into the brain by forming connections with rodent nerve cells, and matured into specialized subtypes of interneurons.
These findings may serve as a model to study human diseases in which mature interneurons malfunction, according to Kriegstein. The researchers’ methods may also be used to generate vast numbers of human MGE cells in quantities sufficient to launch potential future clinical trials, he said.
Kriegstein was a co-leader of the research, along with Arturo Alvarez-Buylla, PhD, UCSF professor of neurological surgery; John Rubenstein, MD, PhD, UCSF professor of psychiatry; and UCSF postdoctoral scholars Cory Nicholas, PhD, and Jiadong Chen, PhD.
Nicholas utilized key growth factors and other molecules to direct the derivation and maturation of the human MGE-like interneurons. He timed the delivery of these factors to shape their developmental path and confirmed their progression along this path. Chen used electrical measurements to carefully study the physiological and firing properties of the interneurons, as well as the formation of synapses between neurons.
Previously, UCSF researchers led by Allan Basbaum, PhD, chair of anatomy at UCSF, have used mouse MGE cell transplantation into the mouse spinal cord to reduce neuropathic pain, a surprising application outside the brain. Kriegstein, Nicholas and colleagues now are exploring the use of human MGE cells in mouse models of neuropathic pain and spasticity, Parkinson’s disease and epilepsy.
“The hope is that we can deliver these cells to various places within the nervous system that have been overactive and that they will functionally integrate and provide regulated inhibition,” Nicholas said.
The researchers also plan to develop MGE cells from induced pluripotent stem cells derived from skin cells of individuals with autism, epilepsy, schizophrenia and Alzheimer’s disease, in order to investigate how the development and function of interneurons might become abnormal — creating a lab-dish model of disease.
One mystery and challenge to both the clinical and pre-clinical study of human MGE cells is that they develop at a slower, human pace, reflecting an “intrinsic clock”. In fast-developing mice, the human MGE-like cells still took seven to nine months to form interneuron subtypes that normally are present near birth.
“If we could accelerate the clock in human cells, then that would be very encouraging for various applications,” Kriegstein said.
Scientists at Princeton University used off-the-shelf printing tools to create a functional ear that can “hear” radio frequencies far beyond the range of normal human capability.

The researchers’ primary purpose was to explore an efficient and versatile means to merge electronics with tissue. The scientists used 3D printing of cells and nanoparticles followed by cell culture to combine a small coil antenna with cartilage, creating what they term a bionic ear.
"In general, there are mechanical and thermal challenges with interfacing electronic materials with biological materials," said Michael McAlpine, an assistant professor of mechanical and aerospace engineering at Princeton and the lead researcher. "Previously, researchers have suggested some strategies to tailor the electronics so that this merger is less awkward. That typically happens between a 2D sheet of electronics and a surface of the tissue. However, our work suggests a new approach — to build and grow the biology up with the electronics synergistically and in a 3D interwoven format."
McAlpine’s team has made several advances in recent years involving the use of small-scale medical sensors and antennas. Last year, a research effort led by McAlpine and Naveen Verma, an assistant professor of electrical engineering, and Fio Omenetto of Tufts University, resulted in the development of a “tattoo” made up of a biological sensor and antenna that can be affixed to the surface of a tooth.
This project, however, is the team’s first effort to create a fully functional organ: one that not only replicates a human ability, but extends it using embedded electronics.
"The design and implementation of bionic organs and devices that enhance human capabilities, known as cybernetics, has been an area of increasing scientific interest," the researchers wrote in the article, which appears in the scholarly journal Nano Letters. “This field has the potential to generate customized replacement parts for the human body, or even create organs containing capabilities beyond what human biology ordinarily provides.”
Standard tissue engineering involves seeding types of cells, such as those that form ear cartilage, onto a scaffold of a polymer material called a hydrogel. However, the researchers said that this technique has problems replicating complicated three-dimensional biological structures. Ear reconstruction “remains one of the most difficult problems in the field of plastic and reconstructive surgery,” they wrote.
To solve the problem, the team turned to a manufacturing approach called 3D printing. These printers use computer-assisted design to conceive of objects as arrays of thin slices. The printer then deposits layers of a variety of materials – ranging from plastic to cells – to build up a finished product. Proponents say additive manufacturing promises to revolutionize home industries by allowing small teams or individuals to create work that could previously only be done by factories.
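The slicing idea at the heart of 3D printing is easy to sketch in code. The short Python example below is our own illustration, not the Princeton team’s software: it voxelizes a solid sphere into a boolean occupancy grid and extracts the stack of thin 2D layers a printer would deposit, one on top of another.

```python
import numpy as np

# Toy slicer: represent a solid as a 3D boolean occupancy grid,
# then pull out the thin horizontal layers a 3D printer would build up.
# (Illustrative sketch only -- real slicers work on CAD mesh files.)
n = 16
ax = np.linspace(-1, 1, n)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
solid = X**2 + Y**2 + Z**2 <= 1.0      # unit sphere as voxels

# One 2D slice per height level; material is deposited wherever True.
layers = [solid[:, :, k] for k in range(n)]

# Sanity check: middle slices of a sphere cover more area than slices
# near the poles, so the printed cross-section grows and then shrinks.
print(len(layers), layers[n // 2].sum() > layers[1].sum())
```

Depositing different materials over such slices in successive passes, hydrogel and cells in one, silver nanoparticle “ink” in another, is what lets a printer interleave tissue with an embedded antenna.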
Creating organs using 3D printers is a recent advance; several groups have reported using the technology for this purpose in the past few months. But this is the first time that researchers have demonstrated that 3D printing is a convenient strategy to interweave tissue with electronics.
The technique allowed the researchers to combine the antenna electronics with tissue within the highly complex topology of a human ear. The researchers used an ordinary 3D printer to combine a matrix of hydrogel and calf cells with silver nanoparticles that form an antenna. The calf cells later develop into cartilage.
Manu Mannoor, a graduate student in McAlpine’s lab and the paper’s lead author, said that additive manufacturing opens new ways to think about the integration of electronics with biological tissue and makes possible the creation of true bionic organs in form and function. He said that it may be possible to integrate sensors into a variety of biological tissues, for example, to monitor stress on a patient’s knee meniscus.
David Gracias, an associate professor at Johns Hopkins and co-author on the publication, said that bridging the divide between biology and electronics represents a formidable challenge that needs to be overcome to enable the creation of smart prostheses and implants.
"Biological structures are soft and squishy, composed mostly of water and organic molecules, while conventional electronic devices are hard and dry, composed mainly of metals, semiconductors and inorganic dielectrics," he said. "The differences in physical and chemical properties between these two material classes could not be any more pronounced."
The finished ear consists of a coiled antenna inside a cartilage structure. Two wires lead from the base of the ear and wind around a helical “cochlea” – the part of the ear that senses sound – which can connect to electrodes. Although McAlpine cautions that further work and extensive testing would need to be done before the technology could be used on a patient, he said the ear in principle could be used to restore or enhance human hearing. He said electrical signals produced by the ear could be connected to a patient’s nerve endings, similar to a hearing aid. The current system receives radio waves, but he said the research team plans to incorporate other materials, such as pressure-sensitive electronic sensors, to enable the ear to register acoustic sounds.
In addition to McAlpine, Verma, Mannoor and Gracias, the research team includes: Winston Soboyejo, a professor of mechanical and aerospace engineering at Princeton; Karen Malatesta, a faculty fellow in molecular biology at Princeton; Yong Lin Kong, a graduate student in mechanical and aerospace engineering at Princeton; and Teena James, a graduate student in chemical and biomolecular engineering at Johns Hopkins.
The team also included Ziwen Jiang, a high school student at the Peddie School in Hightstown who participated as part of an outreach program for young researchers in McAlpine’s lab.
"Ziwen Jiang is one of the most spectacular high school students I have ever seen," McAlpine said. "We would not have been able to complete this project without him, particularly in his skill at mastering CAD designs of the bionic ears."
Mathematicians from Queen Mary, University of London have brought researchers one step closer to understanding how the structure of the brain relates to its function in two recently published studies.

Publishing in Physical Review Letters, the researchers from the Complex Networks group at Queen Mary’s School of Mathematical Sciences describe how different areas in the brain can be associated despite a lack of direct interaction.
The team, in collaboration with researchers in Barcelona, Pamplona and Paris, combined two different human brain networks - one that maps all the physical connections among brain areas known as the backbone network, and another that reports the activity of different regions as blood flow changes, known as the functional network. They showed that the presence of symmetrical neurons within the backbone network might be responsible for the synchronised activity of physically distant brain regions.
Lead author Vincenzo Nicosia said, “We don’t fully understand how the human brain works. So far the focus has been more on the analysis of the function of single, localised regions. However, there isn’t a complete model that brings the whole functionality of the brain together. Hopefully, our research will help neuroscientists to develop a more accurate map of the brain and investigate its functioning beyond single areas.”
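The mechanism described above — symmetric positions in the backbone network producing correlated activity without a direct connection — can be illustrated with a toy model. The Python sketch below is our own minimal example, not the model used in the paper: two nodes that share exactly the same neighbors receive identical input under simple diffusive dynamics, so their activity synchronizes even though no edge connects them.

```python
import numpy as np

# Toy "backbone" network: nodes 0 and 4 are NOT directly connected,
# but they are structurally symmetric (same neighbors: {1, 2, 3}).
A = np.zeros((5, 5))
for i in (1, 2, 3):
    A[0, i] = A[i, 0] = 1   # node 0 <-> shared hub nodes
    A[4, i] = A[i, 4] = 1   # node 4 <-> the same hub nodes

rng = np.random.default_rng(0)
x = rng.standard_normal(5)           # random initial activity
deg = A.sum(axis=1)

eps = 0.1
for _ in range(200):                 # discrete-time diffusive coupling:
    x = x + eps * (A @ x / deg - x)  # relax toward the neighbor average

# Because nodes 0 and 4 always receive identical neighbor input, their
# difference shrinks geometrically: they end up "functionally" coupled
# even though A[0, 4] == 0 (no structural link).
print(abs(x[0] - x[4]) < 1e-6, A[0, 4] == 0.0)
```

In network terms, nodes 0 and 4 occupy symmetric positions in the graph, and that alone is enough to synchronize them — the kind of structure-to-function link the study examines in real brain networks.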
The research adds to the recent findings published in Proceedings of the National Academy of Sciences in which the QM researchers along with the Department of Psychiatry at University of Cambridge analysed the development of the brain of a small worm called Caenorhabditis elegans. In this paper, the team examined the number of links formed in the brain during the worm’s lifespan, and observed an unexpected abrupt change in the pattern of growth, corresponding with the time of egg hatching.
“The research is important as it’s the first time that a sharp transition in the growth of a neural network has ever been observed,” added Dr Nicosia.
“Although we don’t know which biological factors are responsible for the change in the growth pattern, we were able to reproduce the pattern using a simple economical model of synaptic formation. This result can pave the way to a deeper understanding of how neural networks grow in more complex organisms.”
Using a kid-friendly robot during behavioral therapy sessions may help some children with autism gain better social skills, a preliminary study suggests.

The study, of 19 children with autism spectrum disorders (ASDs), found that kids tended to do better when their visit with a therapist included a robot “co-therapist.” On average, they made bigger gains in social skills such as asking “appropriate” questions, answering questions and making conversational comments.
So-called humanoid robots are already being marketed for this purpose, but there has been little research to back up their effectiveness.
"Going into this study, we were skeptical," said lead researcher Joshua Diehl, an assistant professor of psychology at the University of Notre Dame in Indiana, who said he has no financial interest in the technology.
"We found that, to our surprise, the kids did better when the robot was added," he said.
There are still plenty of caveats, however, said Diehl, who is presenting his team’s findings Saturday at the International Meeting for Autism Research (IMFAR) in San Sebastian, Spain.
For one, the study was small. And it’s not clear that the results seen in a controlled research setting would be the same in the real world of therapists’ offices, according to Diehl.
"I’d say this is not yet ready for prime time," he said.
ASDs are a group of developmental disorders that affect a person’s ability to communicate and interact socially. The severity of those effects ranges widely: Some people have mild problems socializing, but have normal to above-normal intelligence; some people have profound difficulties relating to others, and may have intellectual impairment as well.
Experts have become interested in using technology — from robots to iPads — along with standard ASD therapies because it may help bridge some of the communication issues kids have.
Human communication is complex and unpredictable, with body language, facial expressions and other subtle cues coming into the mix, explained Geraldine Dawson, chief science officer for the advocacy group Autism Speaks.
A robot or a computer game, on the other hand, can be programmed to be simple and predictable, and that may help kids with ASDs better process the information they are being given, Dawson said.
"Broadly speaking," she said, "we are very excited about the potential role for technology in diagnosing and treating ASDs." But she also agreed with Diehl that the findings are "very preliminary," and that researchers have a lot more to learn about how technology — robots or otherwise — fits into ASD therapies.
For the study, Diehl’s team used a humanoid robot manufactured by Aldebaran Robotics, which markets the NAO robot for use in education, including special education for kids with ASDs. The robot, which stands about 2 feet tall, looks like a toy, but it’s priced more like a small car, Diehl noted.
The NAO H25 “Academic Edition” rings up at about $16,000. (Diehl said the study was funded by government and private grants, not the manufacturer.)
The researchers had 19 kids aged 6 to 13 complete 12 behavioral therapy sessions, where a therapist worked with the child on social skills. Half of the sessions involved the robot, named Kelly, which was wheeled out so the child could practice conversing with her, while the therapist stood by.
"So the child might say, ‘Hi Kelly, how are you?’" Diehl explained. "Then Kelly would say, ‘Fine. What did you do today?’" During the non-Kelly sessions, another person entered the room and carried on the same conversation with the child that the robot would have.
On average, Diehl’s team found, kids made bigger gains from the sessions that included Kelly — based on both their interactions with their therapists, and their parents’ reports.
"There was one child who, when his dad came home from work, asked him how his day was," Diehl said. "He’d never done that before."
Still, he stressed that while the robot sessions seemed more successful on average, the children varied widely in their responses to Kelly. Going forward, Diehl said, it will be important to figure out whether there are certain kids with ASDs more likely to benefit from a robot co-therapist.
Dawson agreed that there is no one-size-fits-all ASD therapy. “Any therapy for a person with an ASD has to be individualized,” she said. The idea with any technology, she added, is to give therapists and doctors extra “tools” to work with.
A separate study presented at the same meeting looked at another type of tool. Researchers had 60 “minimally verbal” children with ASDs attend two “play-based” sessions per week, aimed at boosting their ability to speak and gesture. Half of the kids were also given a “speech-generating device,” like an iPad.
Three and six months later, children who worked with the devices were able to say more words and were quicker to take up conversational skills.
Dawson said the robot and iPad studies are just part of the growing body of research into how technology can not only aid in ASD therapies, but also help doctors diagnose the disorders or help parents manage at home.
But both Diehl and Dawson stressed that no robot or iPad is intended to stand in for human connection. The idea, after all, is to enhance kids’ ability to communicate and have relationships, Dawson noted. “Technology will never take the place of people,” she said.
The data and conclusions of research presented at meetings should be viewed as preliminary until published in a peer-reviewed journal.
When children with conduct problems see images of others in pain, key parts of their brains don’t react in the way they do in most people. This pattern of reduced brain activity upon witnessing pain may serve as a neurobiological risk factor for later adult psychopathy, say researchers who report their findings in the Cell Press journal Current Biology on May 2.

That’s not to say that all children with conduct problems are the same, or that all children showing this brain pattern in young life will become psychopaths. The researchers emphasize that many children with conduct problems do not persist with their antisocial behavior.
"Our findings indicate that children with conduct problems have an atypical brain response to seeing other people in pain," says Essi Viding of University College London. "It is important to view these findings as an indicator of early vulnerability, rather than biological destiny. We know that children can be very responsive to interventions, and the challenge is to make those interventions even better, so that we can really help the children, their families, and their wider social environment."
Conduct problems represent a major societal problem and include physical aggression, cruelty to others, and a lack of empathy, or “callousness.” In the United Kingdom, where the study was conducted, about five percent of children qualify for a diagnosis of conduct problems. But very little is known about the underlying biology.
In the new study, Viding, Patricia Lockwood, and their colleagues scanned children’s brains by functional magnetic resonance imaging (fMRI) to see how those with conduct problems differ in their response to viewing images of others in pain.
The brain images showed that, relative to controls, children with conduct problems show reduced responses to others’ pain specifically in regions of the brain known to play a role in empathy. The researchers also saw variation among those with conduct problems, with those deemed to be more callous showing lower brain activation than less callous individuals.
"Our findings very clearly point to the fact that not all children with conduct problems share the same vulnerabilities; some may have neurobiological vulnerability to psychopathy, while others do not," Viding says. "This raises the possibility of tailoring existing interventions to suit the specific profile of atypical processing that characterizes a child with conduct problems."
To obtain very-high-resolution 3D images of the cerebral vascular system, a dye is used that fluoresces in the near infrared and can pass through the skin. The Lem-PHEA chromophore, a new product outclassing the best dyes, has been synthesized by a team from the Laboratoire de Chimie (CNRS/ENS de Lyon/Université Claude Bernard Lyon 1). Conducted in collaboration with researchers from the Institut des Neurosciences (Université Joseph Fourier - Grenoble/CEA/Inserm/CHU) and the Laboratoire Chimie et Interdisciplinarité: Synthèse, Analyse, Modélisation (CNRS/Université de Nantes), this work has been published online in the journal Chemical Science. It opens up significant prospects for better observing the brain and understanding how it works.
Different cerebral imaging techniques, such as two-photon microscopy or magnetic resonance imaging (MRI), contribute to our understanding of how the healthy or diseased brain works. One of their essential characteristics is their spatial resolution, in other words the dimension of the smallest details observable by each technique. Typically, for MRI, this resolution is limited to several millimeters, which does not make it possible to obtain images whose resolution is of the order of a micrometer.

To obtain such images of the vascular system of a mouse brain, it is necessary to use a fluorescent dye that combines several properties: luminescence in the near infrared, solubility in biological media, low cost, non-toxicity and suitability for 3D imaging (two-photon absorption). The researchers have developed a new product, Lem-PHEA, which combines these properties and is easy to synthesize. When injected into the blood vessels of a mouse, it has revealed details of the rodent’s vascular system with previously unattained precision, thanks to a considerably enhanced fluorescence compared to “conventional” dyes (such as Rhodamine-B and cyanine derivatives). With Lem-PHEA, the researchers have obtained brighter, higher-contrast images than with these standard dyes. Finally, the product is easily eliminated by the kidneys and no toxic residues have been found in the liver. These results pave the way for a better understanding of how the brain works.
Medical researchers have manipulated human stem cells into producing types of brain cells known to play important roles in neurodevelopmental disorders such as epilepsy, schizophrenia and autism. The new model cell system allows neuroscientists to investigate normal brain development, as well as to identify specific disruptions in biological signals that may contribute to neuropsychiatric diseases.
Scientists from The Children’s Hospital of Philadelphia and the Sloan-Kettering Institute for Cancer Research led a study team that described their research in the journal Cell Stem Cell, published online today.
The research harnesses human embryonic stem cells (hESCs), which differentiate into a broad range of different cell types. In the current study, the scientists directed the stem cells into becoming cortical interneurons—a class of brain cells that, by releasing the neurotransmitter GABA, controls electrical firing in brain circuits.
"Interneurons act like an orchestra conductor, directing other excitatory brain cells to fire in synchrony," said study co-leader Stewart A. Anderson, M.D., a research psychiatrist at The Children’s Hospital of Philadelphia. "However, when interneurons malfunction, the synchrony is disrupted, and seizures or mental disorders can result."
Anderson and study co-leader Lorenz Studer, M.D., of the Center for Stem Cell Biology at Sloan-Kettering, derived interneurons in a laboratory model that simulates how neurons normally develop in the human forebrain.
"Unlike, say, liver diseases, in which researchers can biopsy a section of a patient’s liver, neuroscientists cannot biopsy a living patient’s brain tissue," said Anderson. Hence it is important to produce a cell culture model of brain tissue for studying neurological diseases. Significantly, the human-derived cells in the current study also "wire up" in circuits with other types of brain cells taken from mice, when cultured together. Those interactions, Anderson added, allowed the study team to observe cell-to-cell signaling that occurs during forebrain development.
In ongoing studies, Anderson explained, he and colleagues are using their cell model to better define molecular events that occur during brain development. By selectively manipulating genes in the interneurons, the researchers seek to better understand how gene abnormalities may disrupt brain circuitry and give rise to particular diseases. Ultimately, those studies could help inform drug development by identifying molecules that could offer therapeutic targets for more effective treatments of neuropsychiatric diseases.
In addition, Anderson’s laboratory is studying interneurons derived from stem cells made from skin samples of patients with chromosome 22q.11.2 deletion syndrome, a genetic disease which has long been studied at The Children’s Hospital of Philadelphia. In this multisystem disorder, about one third of patients have autistic spectrum disorders, and a partially overlapping third of patients develop schizophrenia. Investigating the roles of genes and signaling pathways in their model cells may reveal specific genes that are crucial in those patients with this syndrome who have neurodevelopmental problems.
National Institutes of Health researchers used the popular anti-wrinkle agent Botox to discover a new and important role for a group of molecules that nerve cells use to quickly send messages. This novel role for the molecules, called SNAREs, may be a missing piece that scientists have been searching for to fully understand how brain cells communicate under normal and disease conditions.
"The results were very surprising," said Ling-Gang Wu, Ph.D., a scientist at NIH’s National Institute of Neurological Disorders and Stroke. "Like many scientists we thought SNAREs were only involved in fusion."

Every day almost 100 billion nerve cells throughout the body send thousands of messages through nearly 100 trillion communication points called synapses. Cell-to-cell communication at synapses controls thoughts, movements, and senses and could provide therapeutic targets for a number of neurological disorders, including epilepsy.
Nerve cells use chemicals, called neurotransmitters, to rapidly send messages at synapses. Like pellets inside shotgun shells, neurotransmitters are stored inside spherical membranes, called synaptic vesicles. Messages are sent when a carrier shell fuses with the nerve cell’s own shell, called the plasma membrane, and releases the neurotransmitter “pellets” into the synapse.
SNAREs (soluble N-ethylmaleimide-sensitive factor attachment protein receptor) are three proteins known to be critical for fusion between carrier shells and nerve cell membranes during neurotransmitter release.
"Without SNAREs there is no synaptic transmission," said Dr. Wu.
Botulinum toxin, or Botox, disrupts SNAREs. In a study published in Cell Reports, Dr. Wu and his colleagues describe how they used Botox and similar toxins as tools to show that SNAREs may also be involved in retrieving message carrier shells from nerve cell membranes immediately after release.
To study this, the researchers used advanced electrical recording techniques to directly monitor in real time carrier shells being fused with and retrieved from nerve cell membranes while the cells sent messages at synapses. The experiments were performed on a unique synapse involved with hearing called the calyx of Held. As expected, treating the synapses with toxins reduced fusion. However Dr. Wu and his colleagues also noticed that the toxins reduced retrieval.
For at least a decade scientists have known that carrier shells have to be retrieved before more messages can be sent. Retrieval occurs in two modes: fast and slow. A different group of molecules is known to control the slow mode.
"Until now most scientists thought fusion and retrieval were two separate processes controlled by different sets of molecules," said Dr. Wu.
Nevertheless several studies suggested that one of the SNARE molecules could be involved with both modes.
In this study, Dr. Wu and his colleagues systematically tested this idea to fully understand retrieval. The results showed that all three SNARE proteins may be involved in both fast and slow retrieval.
"Our results suggest that SNAREs link fusion and retrieval," said Dr. Wu.
The results may have broad implications. SNAREs are commonly used by other cells throughout the body to release chemicals. For example, SNAREs help control the release of insulin from pancreas cells, making them a potential target for diabetes treatments. Recent studies suggest that SNAREs may be involved in neurological and psychiatric disorders, such as schizophrenia and spastic ataxia.
"We think SNAREs work like this in most nerve cell synapses. This new role could change the way scientists think about how SNAREs are involved in neuronal communication and diseases," said Dr. Wu.
A new study led by University of North Carolina School of Medicine researchers is the first to identify a genetic risk factor for persistent pain after traumatic events such as motor vehicle collision and sexual assault.
In addition, the study contributes further evidence that persistent pain after stressful events has a specific biological basis. A manuscript of the study was published online ahead of print by the journal Pain on April 29.
“Our study findings indicate that mechanisms influencing chronic pain development may be related to the stress response, rather than any specific injury caused by the traumatic event,” said Samuel McLean, MD, MPH, senior author of the study and assistant professor of anesthesiology. “In other words, our results suggest that in some individuals something goes wrong with the body’s ‘fight or flight’ response or the body’s recovery from this response, and persistent pain results.”
The study assessed the role of the hypothalamic-pituitary adrenal (HPA) axis, a physiologic system of central importance to the body’s response to stressful events. The study evaluated whether the HPA axis influences musculoskeletal pain severity six weeks after motor vehicle collision (MVC) and sexual assault. Its findings revealed that variation in the gene encoding for the protein FKBP5, which plays an important role in regulating the HPA axis response to stress, was associated with a 20 percent higher risk of moderate to severe neck pain six weeks after a motor vehicle collision, as well as a greater extent of body pain. The same variant also predicted increased pain six weeks after sexual assault.
"Right now, if someone comes to the emergency department after a car accident, we don’t have any interventions to prevent chronic pain from developing," McLean said. "Similarly, if a woman comes to the emergency department after sexual assault, we have medications to prevent pregnancy or sexually transmitted disease, but no treatments to prevent chronic pain. This is because we understand what causes pregnancy or infection, but we have no idea what the biologic mechanisms are that cause chronic pain. Chronic pain after these events is common and can cause great suffering, and there is an urgent need to understand what causes chronic pain so that we can start to develop interventions. This study is an important first step in developing this understanding."
"In addition, because we don’t understand what causes these outcomes, individuals with chronic pain after traumatic events are often viewed with suspicion, as if they are making up their symptoms for financial gain or having a psychological reaction," McLean said. "An improved understanding of the biology helps with this stigma."
A team of researchers at the University of Calgary’s Hotchkiss Brain Institute (HBI) has discovered that adult brain cell production might be determined, in part, by the early parental environment. The study suggests that dual parenting may be more beneficial than single parenting.

Scientists studied mouse pups that were raised by either dual or single parents and found that adult cell production in the brain might be triggered by early life experiences. The scientists also found that the increase in adult brain cell production varied by sex. Specifically, female pups raised by two parents had enhanced white matter production as adults, improving motor coordination and sociability. Male pups raised by dual parents displayed more grey matter production as adults, which improves learning and memory.
“Our new work adds to a growing body of knowledge, which indicates that early, supportive experiences have long lasting, positive impact on adult brain function,” says Samuel Weiss, PhD, senior author of the study and director of the HBI.
Surprisingly, the advantages of dual parenting were passed along to the next generation when these mice reproduced, even if their own offspring were raised by a single female.
To conduct the study, scientists divided mice into three groups: i) pups raised to adulthood by one female; ii) pups raised to adulthood by one female and one male; and iii) pups raised to adulthood by two females. Researchers then waited for the offspring to reach adulthood to find out if there was any impact on brain cell production.
Scientists say that this research provides evidence that, in the mouse model, parenting and the environment directly impact adult brain cell production. While it’s not known at this point, it is possible that similar effects could be seen in other mammals, such as humans. The study is published in the May 1 edition of PLOS ONE.
Abuse during childhood is different.

A study of adult civilians with PTSD (post-traumatic stress disorder) has shown that individuals with a history of childhood abuse have distinct, profound changes in gene activity patterns, compared to adults with PTSD but without a history of child abuse.
A team of researchers from Atlanta and Munich probed blood samples from 169 participants in the Grady Trauma Project, a study of more than 5000 Atlanta residents with high levels of exposure to violence, physical and sexual abuse and with high risk for civilian PTSD.
The results were published Monday, April 29 in Proceedings of the National Academy of Sciences, Early Edition.
“These are some of the most robust findings to date showing that different biological pathways may describe different subtypes of a psychiatric disorder, which appear similar at the level of symptoms but may be very different at the level of underlying biology,” says Kerry Ressler, MD, PhD, professor of psychiatry and behavioral sciences at Emory University School of Medicine and Yerkes National Primate Research Center.
“As these pathways become better understood, we expect that distinctly different biological treatments would be implicated for therapy and recovery from PTSD based on the presence or absence of past child abuse.”
Ressler, a Howard Hughes Medical Institute Investigator, is co-director of the Grady Trauma Project, along with co-author Bekh Bradley, PhD, assistant professor of psychiatry and behavioral sciences at Emory and director of the Trauma Recovery Program at the Atlanta Veterans Affairs Medical Center.
The first author of the paper is Divya Mehta, PhD, a postdoctoral fellow in Munich. The senior author is Elisabeth Binder, MD, PhD, associate professor of psychiatry and behavioral sciences at Emory and group leader at the Max-Planck Institute of Psychiatry in Munich, Germany.
Mehta and her colleagues examined changes in the patterns of which genes were turned on and off in blood cells from patients. They also looked at patterns of methylation, a DNA modification on top of the four letters of the genetic code that causes genes to be ‘silenced’ or made inactive.
Study participants were divided into three groups: people who experienced trauma without developing PTSD, people with PTSD who were exposed to child abuse, and people with PTSD who were not exposed to child abuse.
The researchers were surprised to find that although hundreds of genes had significant changes in activity in the PTSD with and without child abuse groups, there was very little overlap in patterns between these groups. The two groups shared similar symptoms of PTSD, which include intrusive thoughts such as nightmares and flashbacks, avoidance of trauma reminders, and symptoms of hyperarousal and hypervigilance.
The PTSD with child abuse group displayed more changes in genes linked with development of the nervous system and regulation of the immune system, while the PTSD without child abuse group displayed more changes in genes linked with apoptosis (cell death) and growth rate regulation. In addition, changes in methylation were more frequent in the PTSD with child abuse group. The authors believe that these biological pathways may lead to different mechanisms of PTSD symptom formation within the brain.
The Max Planck/Emory scientists were probing gene activity in blood cells, rather than brain tissue. Similar results have been obtained by researchers studying the influence of child abuse on the brains of people who had committed suicide.
“Traumatic events that happen in childhood are embedded in the cells for a long time,” Binder says. “Not only the disease itself, but the individual’s life experience is important in the biology of PTSD, and this should be reflected in the way we treat these disorders.”
When a pedestrian hears the screech of a car’s brakes, she has to decide whether, and if so how, to move in response. Is the action taking place blocks away, or 20 feet to the left?
One of the truly primal mechanisms that we depend on every day of our lives — acting on the basis of information gathered by our sense of hearing — is yielding its secrets to modern neuroscience. A team of researchers from Cold Spring Harbor Laboratory (CSHL) today publishes in the journal Nature experimental results they describe as surprising. The results fill in a key piece of the puzzle about how mammals act on the basis of sound cues.
It’s well known that sounds detected by the ears wind up in a part of the brain called the auditory cortex, where they are translated – transduced – into information that scientists call representations. These representations, in turn, form the informational basis upon which other parts of the brain can make decisions and issue commands for specific actions. What scientists have not understood is what happens between the auditory cortex and portions of the brain that ultimately issue commands, say, for muscles to move in response to the sound of that car’s screeching brakes.
To find out, CSHL Professor Anthony Zador and Dr. Petr Znamenskiy trained rats to listen to sounds and to make decisions based on those sounds. When a high-frequency sound is played, the animals are rewarded if they move to the left. When the sound is low-pitched, the reward is given if the animal moves right.

To the striatum
On the simplest level, says Zador, “we know that sound is coming into the ear; and we know what’s coming out in the end – a decision,” in the form of a muscle movement. The surprise, he says, is the destination of the information used by the animal to perform this task of discriminating between sounds of high and low frequency, as revealed in his team’s experiments.
“It turns out the information passes through a particular subset of neurons in the auditory cortex whose axons wind up in another part of the brain, called the striatum,” says Zador. The classic series of experiments that provided inspiration and a model for this work, performed at Stanford University by William Newsome and colleagues, involved the visual system of primates, and had led Zador to expect by analogy that representations formed in the auditory cortex would lead to other locations within the cortex.
These experiments in rats have implications for how neural circuits make decisions, according to Zador. Even though many neurons in auditory cortex are “tuned” to low or high frequencies, most do not transmit their information directly to the striatum. Rather, their information is transmitted by a much smaller number of neurons in their vicinity, which convey their “votes” directly to the striatum.
“This is like the difference between a direct democracy and a representative democracy, of the type we have in the United States,” Zador explains. “In a direct democracy model of how the auditory cortex conveys information to the rest of the brain, every neuron activated by a low- or high-pitched sound would have a ‘vote.’ Since there is noise in every perception, some minority of neurons will indicate ‘low’ when the sound is in fact ‘high,’ and vice-versa. In the direct democracy model, the information sent to the striatum for further action would be the equivalent of a simple sum of all these votes.
“In contrast – and this is what we found to be the case – the neurons registering ‘high’ and ‘low’ are represented by a specialized subset of neurons in their local area, which we might liken to members of Congress or the Electoral College: these in turn transmit the votes of the larger population to the place — in this case the auditory striatum — in which decisions are made and actions are taken.”
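The two pooling schemes Zador contrasts can be illustrated with a toy simulation. This sketch is purely illustrative and not the paper’s model: the population sizes, the fixed per-neuron accuracy, and the simple majority rule are all assumptions chosen to show why a large “direct democracy” pool averages out noise while a small “representative” subset is noisier trial to trial.

```python
import random

random.seed(0)

def decide(votes):
    """Majority rule: True means the circuit reports 'high'."""
    return sum(votes) > len(votes) / 2

def simulate(n_neurons, p_correct, trials=10000):
    """Fraction of trials in which a noisy population correctly
    decodes a 'high' tone, given each neuron votes correctly
    with probability p_correct."""
    correct = 0
    for _ in range(trials):
        votes = [random.random() < p_correct for _ in range(n_neurons)]
        if decide(votes):
            correct += 1
    return correct / trials

# "Direct democracy": every tuned neuron in the population votes.
print(simulate(1000, 0.6))
# "Representative": only a small subset relays votes to the striatum.
print(simulate(20, 0.6))
```

Run with these illustrative numbers, the 1000-neuron pool decodes the tone almost perfectly, while the 20-neuron subset makes noticeably more errors; the finding described above is that the brain nonetheless routes the decision through such a small projecting subset.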
While the search continues for the Fountain of Youth, researchers may have found the body’s “fountain of aging”: the brain region known as the hypothalamus. For the first time, scientists at Albert Einstein College of Medicine of Yeshiva University report that the hypothalamus of mice controls aging throughout the body. Their discovery of a specific age-related signaling pathway opens up new strategies for combating diseases of old age and extending lifespan. The paper was published today in the online edition of Nature.

“Scientists have long wondered whether aging occurs independently in the body’s various tissues or if it could be actively regulated by an organ in the body,” said senior author Dongsheng Cai, M.D., Ph.D., professor of molecular pharmacology at Einstein. “It’s clear from our study that many aspects of aging are controlled by the hypothalamus. What’s exciting is that it’s possible — at least in mice — to alter signaling within the hypothalamus to slow down the aging process and increase longevity.”
The hypothalamus, an almond-sized structure located deep within the brain, is known to have fundamental roles in growth, development, reproduction, and metabolism. Dr. Cai suspected that the hypothalamus might also play a key role in aging through the influence it exerts throughout the body.
“As people age,” he said, “you can detect inflammatory changes in various tissues. Inflammation is also involved in various age-related diseases, such as metabolic syndrome, cardiovascular disease, neurological disease and many types of cancer.” Over the past several years, Dr. Cai and his research colleagues showed that inflammatory changes in the hypothalamus can give rise to various components of metabolic syndrome (a combination of health problems that can lead to heart disease and diabetes).
To find out how the hypothalamus might affect aging, Dr. Cai decided to study hypothalamic inflammation by focusing on a protein complex called NF-κB (nuclear factor kappa-light-chain-enhancer of activated B cells). “Inflammation involves hundreds of molecules, and NF-κB sits right at the center of that regulatory map,” he said.
In the current study, Dr. Cai and his team demonstrated that activating the NF-κB pathway in the hypothalamus of mice significantly accelerated the development of aging, as shown by various physiological, cognitive, and behavioral tests. “The mice showed a decrease in muscle strength and size, in skin thickness, and in their ability to learn — all indicators of aging. Activating this pathway promoted systemic aging that shortened the lifespan,” he said.
Conversely, Dr. Cai and his group found that blocking the NF-κB pathway in the hypothalamus of mouse brains slowed aging and increased median longevity by about 20 percent, compared to controls.
The researchers also found that activating the NF-κB pathway in the hypothalamus caused declines in levels of gonadotropin-releasing hormone (GnRH), which is synthesized in the hypothalamus. Release of GnRH into the blood is usually associated with reproduction. Suspecting that reduced release of GnRH from the brain might contribute to whole-body aging, the researchers injected the hormone into a hypothalamic ventricle (chamber) of aged mice and made the striking observation that the hormone injections protected them from the impaired neurogenesis (the creation of new neurons in the brain) associated with aging. When aged mice received daily GnRH injections for a prolonged period, this therapy exerted benefits that included the slowing of age-related cognitive decline, probably the result of neurogenesis.
According to Dr. Cai, preventing the hypothalamus from causing inflammation and increasing neurogenesis via GnRH therapy are two potential strategies for increasing lifespan and treating age-related diseases. This technology is available for licensing.