In a study led by Duke researchers, monkeys have learned to control the movement of both arms on an avatar using just their brain activity.
The findings, published Nov. 6, 2013, in the journal Science Translational Medicine, advance efforts to develop bilateral movement in brain-controlled prosthetic devices for severely paralyzed patients.
To enable the monkeys to control two virtual arms, researchers recorded nearly 500 neurons from multiple areas in both cerebral hemispheres of the animals’ brains, the largest number of neurons recorded and reported to date.
Millions of people worldwide suffer from sensory and motor deficits caused by spinal cord injuries. Researchers are working to develop tools to help restore their mobility and sense of touch by connecting their brains with assistive devices. The brain-machine interface approach, pioneered at the Duke University Center for Neuroengineering in the early 2000s, holds promise for reaching this goal. However, until now brain-machine interfaces could only control a single prosthetic limb.
“Bimanual movements in our daily activities — from typing on a keyboard to opening a can — are critically important,” said senior author Miguel Nicolelis, M.D., Ph.D., professor of neurobiology at Duke University School of Medicine. “Future brain-machine interfaces aimed at restoring mobility in humans will have to incorporate multiple limbs to greatly benefit severely paralyzed patients.”
Nicolelis and his colleagues studied large-scale cortical recordings to see if they could provide sufficient signals to brain-machine interfaces to accurately control bimanual movements.
The monkeys were trained in a virtual environment within which they viewed realistic avatar arms on a screen and were encouraged to place their virtual hands on specific targets in a bimanual motor task. The monkeys first learned to control the avatar arms using a pair of joysticks, but were able to learn to use just their brain activity to move both avatar arms without moving their own arms.
As the animals’ performance in controlling both virtual arms improved over time, the researchers observed widespread plasticity in cortical areas of their brains. These results suggest that the monkeys’ brains may incorporate the avatar arms into their internal image of their bodies, a finding recently reported by the same researchers in the journal Proceedings of the National Academy of Sciences.
The researchers also found that cortical regions showed specific patterns of neuronal electrical activity during bimanual movements that differed from the neuronal activity produced for moving each arm separately.
The study suggests that very large neuronal ensembles — not single neurons — define the underlying physiological unit of normal motor functions. Small neuronal samples of the cortex may be insufficient to control complex motor behaviors using a brain-machine interface.
“When we looked at the properties of individual neurons, or of whole populations of cortical cells, we noticed that simply summing up the neuronal activity correlated to movements of the right and left arms did not allow us to predict what the same individual neurons or neuronal populations would do when both arms were engaged together in a bimanual task,” Nicolelis said. “This finding points to an emergent brain property — a non-linear summation — for when both hands are engaged at once.”
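The non-linear summation Nicolelis describes can be illustrated with a toy tuning model. This sketch is purely illustrative (the weights, velocities, and interaction term are invented, not the study’s data or analysis): an interaction term makes the neuron’s bimanual response exceed the sum of its two unimanual responses, so a linear decoder fit on single-arm movements mispredicts two-arm movements.

```python
# Toy firing-rate model of one neuron tuned to left/right hand velocity.
# The hypothetical interaction term models the "emergent" bimanual response;
# all numbers are illustrative.
w_left, w_right, w_interact = 0.8, 0.5, 0.6

def rate(v_left, v_right):
    """Firing rate as a function of left and right hand velocities."""
    return w_left * v_left + w_right * v_right + w_interact * v_left * v_right

v_l, v_r = 2.0, 3.0
unimanual_sum = rate(v_l, 0.0) + rate(0.0, v_r)  # linear prediction from single-arm trials
bimanual = rate(v_l, v_r)                        # actual toy response with both arms engaged

print(unimanual_sum, bimanual)  # the bimanual response exceeds the linear sum
```

Under this toy model, summing the single-arm responses underestimates the two-arm response, which is the kind of failure of linear prediction the quotation describes.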
Nicolelis is incorporating the study’s findings into the Walk Again Project, an international collaboration working to build a brain-controlled neuroprosthetic device. The Walk Again Project plans to demonstrate its first brain-controlled exoskeleton, which is currently being developed, during the opening ceremony of the 2014 FIFA World Cup.
TAU researchers study the long-term effects of torture on the human pain system

Israeli soldiers captured during the 1973 Yom Kippur War were subjected to brutal torture in Egypt and Syria. Held alone in tiny, filthy spaces for weeks or months, sometimes handcuffed and blindfolded, they suffered severe beatings, burns, electric shocks, starvation, and worse. And rather than receiving treatment, additional torture was inflicted on existing wounds.
Forty years later, research by Prof. Ruth Defrin of the Department of Physical Therapy in the Sackler Faculty of Medicine at Tel Aviv University shows that the ex-prisoners of war (POWs) continue to suffer from dysfunctional pain perception and regulation, likely as a result of their torture. The study — conducted in collaboration with Prof. Zahava Solomon and Prof. Karni Ginzburg of TAU’s Bob Shapell School of Social Work and Prof. Mario Mikulincer of the School of Psychology at the Interdisciplinary Center, Herzliya — was published in the European Journal of Pain.
"The human body’s pain system can either inhibit or excite pain. It’s two sides of the same coin," says Prof. Defrin. "Usually, when it does more of one, it does less of the other. But in Israeli ex-POWs, torture appears to have caused dysfunction in both directions. Our findings emphasize that tissue damage can have long-term systemic effects and needs to be treated immediately."
A painful legacy
The study focused on 104 combat veterans of the Yom Kippur War. Sixty of the men had been taken prisoner during the war; the other 44 had not. In the study, all were put through a battery of psychophysical pain tests — applying a heating device to one arm, submerging the other arm in a hot water bath, and pressing a nylon fiber into a middle finger. They also filled out psychological questionnaires.
The ex-POWs exhibited diminished pain inhibition (the degree to which the body eases one pain in response to another) and heightened pain excitation (the degree to which repeated exposure to the same sensation heightens the resulting pain). Based on these novel findings, the researchers conclude that the torture survivors’ bodies now regulate pain in a dysfunctional way.
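The two measures described above can be expressed as simple indices. The functions and ratings below are a hypothetical illustration of the definitions in parentheses, not the study’s actual psychophysical protocol or scoring:

```python
# Hypothetical indices for the two pain-regulation measures; all ratings
# are invented values on a 0-10 pain scale.

def inhibition_index(rating_alone, rating_with_conditioning):
    """Pain inhibition: how much a test pain eases when a second,
    conditioning pain is applied elsewhere. Positive = intact inhibition."""
    return rating_alone - rating_with_conditioning

def excitation_index(ratings):
    """Pain excitation: how much repeated identical stimuli heighten
    the reported pain. Positive = summation across repetitions."""
    return ratings[-1] - ratings[0]

print(inhibition_index(6.0, 4.5))              # 1.5 -> intact inhibition
print(excitation_index([3.0, 3.5, 4.2, 5.0]))  # 2.0 -> pain builds with repetition
```

On this kind of scoring, the ex-POWs’ pattern would correspond to a lower inhibition index and a higher excitation index than controls.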
It is not entirely clear whether the dysfunction is the result of years of chronic pain or of the original torture itself. But the ex-POWs exhibited worse pain regulation than the non-POW chronic pain sufferers in the study. And a statistical analysis of the test data also suggested that being tortured had a direct effect on their ability to regulate pain.
Head games
The researchers say non-physical torture may have also contributed to the ex-POWs’ chronic pain. Among other forms of oppression and humiliation, the ex-POWs were not allowed to use the toilet, cursed at and threatened, told demoralizing misinformation about their loved ones, and exposed to mock executions. In the later stages of captivity, most of the POWs were transferred to a group cell, where social isolation was replaced by intense friction, crowding, and loss of privacy.
"We think psychological torture also affects the physiological pain system," says Prof. Defrin. "We still have to fully analyze the data, but preliminary analysis suggests there is a connection."
Brain development and maturation has long been thought to be a one-way process, in which plasticity diminishes with age. The possibility that the adult brain can revert to a younger, more plastic state has rarely been considered. In a paper appearing on November 4 in the online open-access journal Molecular Brain, Dr. Tsuyoshi Miyakawa and his colleagues from Fujita Health University show that chronic administration of fluoxetine (FLX), one of the most widely used antidepressants, a selective serotonin reuptake inhibitor also known by trade names such as Prozac, Sarafem, and Fontex, can induce a juvenile-like state in specific types of neurons in the prefrontal cortex of adult mice.
In their study, the prefrontal cortex of FLX-treated adult mice showed reduced expression of parvalbumin and perineuronal nets, molecular markers of maturation expressed in a certain group of mature neurons in adults, and increased expression of an immature marker that typically appears in developing juvenile brains. These findings suggest that certain types of adult neurons in the prefrontal cortex can partially regain a youth-like state; the authors termed this phenomenon “induced youth,” or iYouth. These researchers, as well as other groups, had previously reported similar effects of FLX in the hippocampal dentate gyrus, basolateral amygdala, and visual cortex, where they were associated with increased neural plasticity in certain types of neurons. This study is the first to report iYouth in the prefrontal cortex, the brain region critically involved in functions such as working memory, decision-making, personality expression, and social behavior, as well as in psychiatric disorders related to deficits in these functions.
Network dysfunction in the prefrontal cortex and limbic system, including the hippocampus and amygdala, is known to be involved in the pathophysiology of depressive disorders. Reversion to a youth-like state may mediate some of the therapeutic effects of FLX by restoring neural plasticity in these regions. On the other hand, some undesirable aspects of FLX-induced pseudo-youth may play a role in certain behavioral effects associated with FLX treatment, such as aggression, violence, and psychosis, which have recently received attention as adverse effects of FLX. Interestingly, expression of the same molecular markers of maturation discussed in this study has been reported to be decreased in the prefrontal cortex of postmortem brains of patients with schizophrenia. This raises the possibility that some of FLX’s adverse effects may be attributable to iYouth in the same type of neurons in this region. Basic knowledge here is still lacking, and several questions remain unanswered: What are the molecular and cellular mechanisms underlying iYouth? What are the differences between actual youth and iYouth? Is iYouth good or bad? Future studies to answer these questions could potentially revolutionize the prevention and/or treatment of various neuropsychiatric disorders and aid in improving the quality of life for an aging population.
More than two decades ago, Ryan Vincent had open brain surgery to remove a malignant brain tumor, resulting in a lengthy hospital stay and weeks of recovery at home. Recently, neurosurgeons at Houston Methodist Hospital removed a different lesion from Vincent’s brain through a tube inserted into a hole smaller than a dime and he went home the next day.

Gavin Britz, MBBCh, MPH, FAANS, chairman of neurosurgery at the Houston Methodist Neurological Institute, used a minimally invasive technique to remove a vascular lesion from deep within the 44-year-old patient’s brain, becoming the first in the region to use this technique. Traditionally, removing vascular lesions or brain tumors located deep within the brain can cause damage through the surgery itself.
“With this new approach, we can navigate through millions of important brain fibers and tracts to access deep areas of the brain where these benign tumors or hemorrhages are located with minimal injury to normal brain,” said Britz. “Ryan’s surgery took less than an hour.”
Houston Methodist neurosurgeons Britz and David Baskin, M.D., director of the Kenneth R. Peak Brain & Pituitary Tumor Center, are using this “six-pillar approach” that encompasses the latest technology in minimally-invasive surgeries — mapping of the brain; navigating the brain like a GPS system; safely accessing the brain and tumor/lesion; using high-end optics for visualization; successfully removing the tumor without disrupting tissues around it; and directed therapy using tissue collected for evaluation that can then be used for personalized treatments.
The new surgical technique is used to remove cancerous and non-cancerous tumors, lesions and cysts deep inside the brain. This approach reduces the risk of damage to speech, memory, muscle strength, balance, vision, coordination and other functional areas of the brain.
A stem cell therapy previously shown to reduce inflammation in the critical time window after traumatic brain injury also promotes lasting cognitive improvement, according to preclinical research led by Charles Cox, M.D., at The University of Texas Health Science Center at Houston (UTHealth) Medical School.
The research was published in today’s issue of STEM CELLS Translational Medicine.
Cellular damage in the brain after traumatic injury can cause severe, ongoing neurological impairment and inflammation. Few pharmaceutical options exist to treat the problem. About half of patients with severe head injuries need surgery to remove or repair ruptured blood vessels or bruised brain tissue.
A stem cell treatment known as multipotent adult progenitor cell (MAPC) therapy has been found to reduce inflammation in mice immediately after traumatic brain injury, but no one had been able to gauge its usefulness over time.
The research team led by Cox, the Children’s Fund, Inc. Distinguished Professor of Pediatric Surgery at the UTHealth Medical School, injected two groups of brain-injured mice with MAPCs two hours after the mice were injured and again 24 hours later. One group received a dose of 2 million cells per kilogram and the other a dose five times larger.
After four months, the mice receiving the stronger dose not only continued to have less inflammation—they also made significant gains in cognitive function. A laboratory examination of the rodents’ brains confirmed that those receiving the higher dose of MAPCs had better brain function than those receiving the lower dose.
“Based on our data, we saw improved spatial learning, improved motor deficits and fewer active antibodies in the mice that were given the stronger concentration of MAPCs,” Cox said.
The study indicates that intravenous injection of MAPCs may in the future become a viable treatment for people with traumatic brain injury, he said.
A paper published in a special edition of the journal Science proposes a novel understanding of brain architecture using a network representation of connections within the primate cortex. Zoltán Toroczkai, professor of physics at the University of Notre Dame and co-director of the Interdisciplinary Center for Network Science and Applications, is a co-author of the paper “Cortical High-Density Counterstream Architectures.”

Using consistent, brain-wide tracer data, the researchers describe the cortex as a network of connections with a “bow tie” structure characterized by a high-efficiency, dense core connected by “wings” of feed-forward and feedback pathways to the rest of the cortex (the periphery). Local circuits, which reach to within 2.5 millimeters and account for more than 70 percent of all connections in the macaque cortex, are integrated across areas with different functional modalities (somatosensory, motor, cognitive) by medium- to long-range projections.
The authors also report on a simple network model that incorporates the physical principle of entropic cost to long wiring and the spatial positioning of the functional areas in the cortex. They show that this model reproduces the properties of the connectivity data in the experiments, including the structure of the bow tie. The wings of the bow tie emerge from the counterstream organization of the feed-forward and feedback nature of the pathways. They also demonstrate that, contrary to previous beliefs, such high-density cortical graphs can achieve simultaneously strong connectivity (almost direct between any two areas), communication efficiency, and economy of connections (shown via optimizing total wire cost) via weight-distance correlations that are also consequences of this simple network model.
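The model’s key ingredient, a cost that makes long connections exponentially unlikely, can be sketched in a few lines. The positions, decay constant, and network size below are invented for illustration; this is not the authors’ actual model code, only a demonstration of how an exponential distance penalty produces the weight-distance correlation the paragraph mentions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sketch: place cortical "areas" at random 2D positions and let
# connection weight decay exponentially with inter-area distance, mimicking
# an entropic penalty on long wiring. All parameters are illustrative.
n_areas = 30
pos = rng.uniform(0, 10, size=(n_areas, 2))              # area positions (arbitrary mm)
d = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)  # pairwise distances
lam = 0.5                                                # decay constant (1/mm)
w = np.exp(-lam * d)                                     # connection weights
np.fill_diagonal(w, 0.0)                                 # no self-connections

# Weight-distance correlation over all ordered pairs: strong links are short links.
mask = ~np.eye(n_areas, dtype=bool)
corr = np.corrcoef(d[mask], w[mask])[0, 1]
print(corr)  # negative: weights fall off with distance
```

Even this minimal construction yields a dense graph whose strongest connections are local, the qualitative signature the paper attributes to the entropic wiring cost.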
This bow tie arrangement is a typical feature of self-organizing information processing systems. The paper notes that the cortex has some analogies with information-processing networks such as the World Wide Web, as well as metabolism, the immune system and cell signaling. The core-periphery bow tie structure, they say, is “an evolutionarily favored structure for a wide variety of complex networks” because “these systems are not in thermodynamic equilibrium and are required to maintain energy and matter flow through the system.” The brain, however, also shows important differences from such systems. For example, destination addresses are encoded in information packets sent along the Internet, apparently unlike in the brain, and location and timing of activity are critical factors of information processing in the brain, unlike in the Internet.
“Biological data is extremely complex and diverse,” Toroczkai said. “However, as a physicist, I am interested in what is common or invariant in the data, because it may reveal a fundamental organizational principle behind a complex system. A minimal theory that incorporates such a principle should reproduce the observations, if not in great detail, then in extent. I believe that with additional consistent data, such as those obtained by the Kennedy team, the fundamental principles of massive information processing in brain neuronal networks are within reach.”
A study in The Journal of Cell Biology describes how neurons activate the protein PP1, providing key insights into the biology of learning and memory.
PP1 is known to be a key regulator of synaptic plasticity, the phenomenon in which neurons remodel their synaptic connections in order to store and relay information—the foundation of learning and memory. But how PP1 is controlled has been unclear. Now, a team led by researchers from the LSU Health Science Center describes several mechanisms for PP1 regulation that close some major gaps in our understanding of its role in neuronal signaling.
Among the novel findings, the researchers describe how NMDA, an agonist of glutamate receptors at neuronal synapses, leads to activation of PP1. They show that, when NMDA activates neuronal synapses, it switches off an enzyme, Cdk5, that would otherwise inhibit PP1. This allows PP1 to activate itself and promote synaptic remodeling. In addition, the researchers suggest that, despite its name, a regulatory protein called inhibitor-2 helps promote PP1 activity in neurons. Together, these findings significantly extend our understanding of how PP1 is regulated in the context of synaptic plasticity.
For the first time in a large study sample, the decline in brain function in normal aging is conclusively shown to be influenced by genes, say researchers from the Texas Biomedical Research Institute and Yale University.

“Identification of genes associated with brain aging should improve our understanding of the biological processes that govern normal age-related decline,” said John Blangero, Ph.D., a Texas Biomed geneticist and the senior author of the paper. The study, funded by the National Institutes of Health (NIH), is published in the November 4, 2013 issue of the Proceedings of the National Academy of Sciences. David Glahn, Ph.D., an associate professor of psychiatry at the Yale University School of Medicine, is the first author on the paper.
In large pedigrees including 1,129 people aged 18 to 83, the scientists documented profound aging effects, from young adulthood to old age, on neurocognitive ability and brain white matter measures. White matter actively affects how the brain learns and functions. Genetic material shared among biological relatives appears to predict the observed changes in brain function with age.
Participants were enrolled in the Genetics of Brain Structure and Function Study and drawn from large Mexican American families in San Antonio. Brain imaging studies were conducted at the University of Texas Health Science Center at San Antonio Research Imaging Institute, directed by Peter Fox, M.D.
“The use of large human pedigrees provides a powerful resource for measuring how genetic factors change with age,” Blangero said.
By applying a sophisticated statistical analysis, the scientists demonstrated that neurocognitive deterioration with age has a heritable basis. Similarly, decreasing white matter integrity with age was influenced by genes. The investigators further demonstrated that different sets of genes are responsible for these two biological aging processes.
“A key advantage of this study is that we specifically focused on large extended families and so we were able to disentangle genetic from non-genetic influences on the aging process,” said Glahn.
Fifty years after valproate was first discovered, research published today in the journal Neurobiology of Disease reports how the drug works to block seizure progression.

Valproate (variously labelled worldwide as Epilim, Depacon, Depakene, Depakote, Orlept, Episenta, Orfiril, and Convulex) is one of the world’s most highly prescribed treatments for epilepsy. It was first discovered to be an effective treatment for epilepsy, by accident, in 1963 by a group of French scientists. Thousands of subsequent animal experiments investigated how valproate blocks seizures, without success. Scientists from Royal Holloway and University College London have now identified how valproate blocks seizures in the brain by using a simple amoeba.
“The discovery of how valproate blocks seizures, initially using the social amoeba Dictyostelium, and then replicated using accepted seizure models, highlights the successful use of non-animal testing in biomedical research,” said Professor Robin Williams from the School of Biological Sciences at Royal Holloway.
“Sodium valproate is one of the most effective antiepileptic drugs in many people with epilepsy, but its use has been limited by side-effects, in particular its effect in pregnant women on the unborn child,” said Professor Matthew Walker from the Institute of Neurology at University College London. “Understanding valproate’s mechanism of action is a first step to developing even more effective drugs that lack many of valproate’s side-effects.”
“Our study also found that the decrease of a specific chemical in the brain at the start of the seizure causes even more seizure activity. This holds important implications for identifying underlying causes,” added Professor Williams.
Kessler researchers find aerobic exercise benefits memory in persons with multiple sclerosis

A research study headed by Victoria Leavitt, Ph.D. and James Sumowski, Ph.D., of Kessler Foundation, provides the first evidence for beneficial effects of aerobic exercise on brain and memory in individuals with multiple sclerosis (MS). The article, “Aerobic exercise increases hippocampal volume and improves memory in multiple sclerosis: Preliminary findings,” was released as an epub ahead of print on October 4 by Neurocase: The Neural Basis of Cognition. The study was funded by Kessler Foundation.
Hippocampal atrophy seen in MS is linked to the memory deficits that affect approximately 50% of individuals with MS. Despite the prevalence of this disabling symptom, there are no effective pharmacological or behavioral treatments. “Aerobic exercise may be the first effective treatment for MS patients with memory problems,” noted Dr. Leavitt, research scientist in Neuropsychology & Neuroscience Research at Kessler Foundation. “Moreover, aerobic exercise has the advantages of being readily available, low cost, self-administered, and lacking in side effects.” No beneficial effects were seen with non-aerobic exercise. Dr. Leavitt noted that the positive effects of aerobic exercise were specific to memory; other cognitive functions such as executive functioning and processing speed were unaffected.
The study’s participants were two MS patients with memory deficits who were randomized to non-aerobic (stretching) and aerobic (stationary cycling) conditions. Baseline and follow-up measurements were recorded before and after the treatment protocol of 30-minute exercise sessions 3 times per week for 3 months. Data were collected by high-resolution MRI (neuroanatomical volumes), fMRI (functional connectivity), and memory assessment. Aerobic exercise resulted in a 16.5% increase in hippocampal volume, a 53.7% increase in memory, and increased hippocampal resting-state functional connectivity. Non-aerobic exercise resulted in minimal change in hippocampal volume and no changes in memory or functional connectivity.
“These findings clearly warrant large-scale clinical trials of aerobic exercise for the treatment of memory deficits in the MS population,” said James Sumowski, Ph.D., research scientist in Neuropsychology & Neuroscience Research at Kessler Foundation.
It was once thought that each cell in a person’s body possesses the same DNA code and that the particular way the genome is read imparts cell function and defines the individual. For many cell types in our bodies, however, that is an oversimplification. Studies of neuronal genomes published in the past decade have turned up extra or missing chromosomes, or pieces of DNA that can copy and paste themselves throughout the genomes.
The only way to know for sure that neurons from the same person harbor unique DNA is by profiling the genomes of single cells instead of bulk cell populations, the latter of which produce an average. Now, using single-cell sequencing, Salk Institute researchers and their collaborators have shown that the genomic structures of individual neurons differ from each other even more than expected. The findings were published November 1, 2013, in Science.
"Contrary to what we once thought, the genetic makeup of neurons in the brain aren’t identical, but are made up of a patchwork of DNA," says corresponding author Fred Gage, Salk’s Vi and John Adler Chair for Research on Age-Related Neurodegenerative Disease.
In the study, led by Mike McConnell, a former junior fellow in the Crick-Jacobs Center for Theoretical and Computational Biology at the Salk, researchers isolated about 100 neurons from the postmortem brains of three people. The scientists took a high-level view of the entire genome—looking for large deletions and duplications of DNA called copy number variations, or CNVs—and found that as many as 41 percent of neurons had at least one unique, massive CNV that arose spontaneously, meaning it wasn’t passed down from a parent. The CNVs are spread throughout the genome, the team found.
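Large CNVs of the kind described above are commonly detected from sequencing data by counting reads in fixed genomic bins and comparing each bin’s coverage to the diploid expectation. The sketch below is a hypothetical illustration with simulated reads, not the study’s actual pipeline; the bin counts, thresholds, and event positions are all invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative read-depth CNV calling for one cell: simulate Poisson read
# counts per genomic bin, normalize to the cell's median coverage, and flag
# bins whose copy-number estimate departs from the diploid value of 2.
n_bins = 1000
expected_reads = 100                 # reads per bin at copy number 2 (invented)
copy_number = np.full(n_bins, 2)
copy_number[400:450] = 1             # simulate a heterozygous deletion
copy_number[700:760] = 3             # simulate a duplication

reads = rng.poisson(expected_reads * copy_number / 2)
cn_estimate = 2 * reads / np.median(reads)   # median bin assumed diploid

deletions = np.where(cn_estimate < 1.5)[0]       # bins called as lost
duplications = np.where(cn_estimate > 2.5)[0]    # bins called as gained
print(len(deletions), len(duplications))
```

Real single-cell pipelines must additionally correct for the uneven, noisy coverage introduced by whole-genome amplification, which is why, as the next paragraph notes, ruling out amplification artifacts took the team a year of control experiments.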
The minuscule amount of DNA in a single cell has to be chemically amplified many times before it can be sequenced. This process is technically challenging, so the team spent a year ruling out potential sources of error in the process.
"A good bit of our study was doing control experiments to show that this is not an artifact," says Gage. "We had to do that because this was such a surprise—finding out that individual neurons in your brain have different DNA content."
The group found a similar amount of variability in CNVs within individual neurons derived from the skin cells of three healthy people. Scientists routinely use such induced pluripotent stem cells (iPSCs) to study living neurons in a culture dish. Because iPSCs are derived from single skin cells, one might expect their genomes to be the same.
"The surprising thing is that they’re not," says Gage. "There are quite a few unique deletions and amplifications in the genomes of neurons derived from one iPSC line."
Interestingly, the skin cells themselves are genetically different, though not nearly as much as the neurons. This finding, along with the fact that the neurons had unique CNVs, suggests that the genetic changes occur later in development and are not inherited from parents or passed to offspring.
It makes sense that neurons have more diverse genomes than skin cells do, says McConnell, who is now an assistant professor of biochemistry and molecular genetics at the University of Virginia School of Medicine in Charlottesville. “The thing about neurons is that, unlike skin cells, they don’t turn over, and they interact with each other,” he says. “They form these big complex circuits, where one cell that has CNVs that make it different can potentially have network-wide influence in a brain.”
Spontaneously occurring CNVs have also been linked to risk for brain disorders such as schizophrenia and autism, but those studies usually pool many blood cells. As a result, the CNVs uncovered in those studies affect many if not all cells, which suggests that they arise early in development.
The purpose of CNVs in the healthy brain is still unclear, but researchers have some ideas. The modifications might help people adapt to new surroundings encountered over a lifetime, or they might help us survive a massive viral infection. The scientists are working out ways to alter genomic variability in iPSC-derived neurons and challenge them in specific ways in the culture dish.
Cells with different genomes probably produce unique RNA and then proteins. However, for now, only one sequencing technology can be applied to a single cell.
"If and when more than one method can be applied to a cell, we will be able to see whether cells with different genomes have different transcriptomes (the collection of all the RNA in a cell) in predictable ways," says McConnell.
In addition, it will be necessary to sequence many more cells, and in particular, more cell types, notes corresponding author Ira Hall, an associate professor of biochemistry and molecular genetics at the University of Virginia. “There’s a lot more work to do to really understand to what level we think the things we’ve found are neuron-specific or associated with different parameters like age or genotype,” he says.
Excessive fear can develop after a traumatic experience, leading to anxiety disorders such as post-traumatic stress disorder and phobias. During exposure therapy, an effective and common treatment for anxiety disorders, the patient confronts a fear or memory of a traumatic event in a safe environment, which leads to a gradual loss of fear. A new study in mice, published online today in Neuron, reports that exposure therapy remodels an inhibitory junction in the amygdala, a brain region important for fear in mice and humans. The findings improve our understanding of how exposure therapy suppresses fear responses and may aid in developing more effective treatments. The study, led by researchers at Tufts University School of Medicine and the Sackler School of Graduate Biomedical Sciences at Tufts, was partially funded by a New Innovator Award from the Office of the Director at the National Institutes of Health.

A fear-inducing situation activates a small group of neurons in the amygdala. Exposure therapy silences these fear neurons, reducing their activity and thereby alleviating fear responses. The research team sought to understand exactly how exposure therapy silences fear neurons.
The researchers found that exposure therapy not only silences fear neurons but also induces remodeling of a specific type of inhibitory junction, called the perisomatic synapse. Perisomatic inhibitory synapses are connections between neurons that enable one group of neurons to silence another group of neurons. Exposure therapy increases the number of perisomatic inhibitory synapses around fear neurons in the amygdala. This increase provides an explanation for how exposure therapy silences fear neurons.
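The silencing effect of added perisomatic synapses can be caricatured with a toy rate model. This sketch is purely illustrative (the weights, inputs, and synapse counts are invented, not drawn from the paper): as the number of inhibitory synapses onto a fear neuron grows, total inhibition eventually exceeds excitatory drive and the neuron falls silent.

```python
# Toy rate model of a "fear neuron" receiving fixed excitatory drive and
# inhibition proportional to its number of perisomatic inhibitory synapses.
# All numbers are illustrative.
def fear_neuron_rate(excitatory_input, n_inhibitory_synapses, w_inh=0.5):
    drive = excitatory_input - w_inh * n_inhibitory_synapses
    return max(0.0, drive)   # firing rates cannot go below zero

before = fear_neuron_rate(10.0, n_inhibitory_synapses=4)    # pre-therapy
after = fear_neuron_rate(10.0, n_inhibitory_synapses=24)    # after synapse growth
print(before, after)  # 8.0 0.0 -> the neuron is silenced
```

Note that in this picture the excitatory input, and hence the stored fear memory, remains intact; only its expression is suppressed, consistent with the remodeling-not-erasure interpretation in the quotation that follows.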
“The increase in number of perisomatic inhibitory synapses is a form of remodeling in the brain. Interestingly, this form of remodeling does not seem to erase the memory of the fear-inducing event, but suppresses it,” said senior author, Leon Reijmers, Ph.D., assistant professor of neuroscience at Tufts University School of Medicine and member of the neuroscience program faculty at the Sackler School of Graduate Biomedical Sciences at Tufts.
Reijmers and his team discovered the increase in perisomatic inhibitory synapses by imaging neurons activated by fear in genetically manipulated mice. Connections in the human brain responsible for suppressing fear and storing fear memories are similar to those found in the mouse brain, making the mouse an appropriate model organism for studying fear circuits.
Mice were placed in a box and experienced a fear-inducing situation to create a fear response to the box. One group of mice, the control group, did not receive exposure therapy. Another group of mice, the comparison group, received exposure therapy to alleviate the fear response. For exposure therapy, the comparison group was repeatedly placed in the box without experiencing the fear-inducing situation, which led to a decreased fear response in these mice. This is also referred to as fear extinction.
The researchers found that mice subjected to exposure therapy had more perisomatic inhibitory synapses in the amygdala than mice that did not receive exposure therapy. Interestingly, this increase was found around fear neurons that became silent after exposure therapy.
“We showed that the remodeling of perisomatic inhibitory synapses is closely linked to the activity state of fear neurons. Our findings shed new light on the precise location where mechanisms of fear regulation might act. We hope that this will lead to new drug targets for improving exposure therapy,” said first author, Stéphanie Trouche, Ph.D., a former postdoctoral fellow in Reijmers’ lab at Tufts and now a medical research council investigator scientist at the University of Oxford in the United Kingdom.
“Exposure therapy in humans does not work for every patient, and in patients that do respond to the treatment, it rarely leads to a complete and permanent suppression of fear. For this reason, there is a need for treatments that can make exposure therapy more effective,” Reijmers added.
It doesn’t take a Watson to realize that even the world’s best supercomputers are staggeringly inefficient and energy-intensive machines.
Our brains have upwards of 86 billion neurons, connected by synapses that not only complete myriad logic circuits but also continuously adapt to stimuli, strengthening some connections while weakening others. We call that process learning, and it enables the kind of rapid, highly efficient computational processes that put Siri and Blue Gene to shame.
Materials scientists at the Harvard School of Engineering and Applied Sciences (SEAS) have now created a new type of transistor that mimics the behavior of a synapse. The novel device simultaneously modulates the flow of information in a circuit and physically adapts to changing signals.
Exploiting unusual properties in modern materials, the synaptic transistor could mark the beginning of a new kind of artificial intelligence: one embedded not in smart algorithms but in the very architecture of a computer. The findings appear in Nature Communications.
“There’s extraordinary interest in building energy-efficient electronics these days,” says principal investigator Shriram Ramanathan, associate professor of materials science at Harvard SEAS. “Historically, people have been focused on speed, but with speed comes the penalty of power dissipation. With electronics becoming more and more powerful and ubiquitous, you could have a huge impact by cutting down the amount of energy they consume.”
The human mind, for all its phenomenal computing power, runs on roughly 20 watts of energy (less than a household light bulb), so it offers a natural model for engineers.
“The transistor we’ve demonstrated is really an analog to the synapse in our brains,” says co-lead author Jian Shi, a postdoctoral fellow at SEAS. “Each time a neuron initiates an action and another neuron reacts, the synapse between them increases the strength of its connection. And the faster the neurons spike each time, the stronger the synaptic connection. Essentially, it memorizes the action between the neurons.”

In principle, a system integrating millions of tiny synaptic transistors and neuron terminals could take parallel computing into a new era of ultra-efficient high performance.
While calcium ions and receptors effect a change in a biological synapse, the artificial version achieves the same plasticity with oxygen ions. When a voltage is applied, these ions slip in and out of the crystal lattice of a very thin (80-nanometer) film of samarium nickelate, which acts as the synapse channel between two platinum “axon” and “dendrite” terminals. The varying concentration of ions in the nickelate raises or lowers its conductance—that is, its ability to carry information on an electrical current—and, just as in a natural synapse, the strength of the connection depends on the time delay in the electrical signal.
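The plasticity principle described here can be sketched in a few lines of code. This is an illustration only, not the authors' device physics: the update rule, decay constant, and saturation limit are invented for the example. The idea is a conductance that strengthens more when pre- and post-synaptic spikes arrive close together in time, and that retains its analog state between events.

```python
# Toy model of a synapse-like element whose connection strength grows
# more for tightly timed spike pairs ("the faster the neurons spike,
# the stronger the synaptic connection"). All constants are invented.
import math

class SynapticElement:
    def __init__(self, conductance=0.1, tau=20.0, learning_rate=0.05):
        self.g = conductance   # analog conductance state (retained between events)
        self.tau = tau         # ms; sensitivity to spike-timing delay
        self.lr = learning_rate

    def spike_pair(self, delay_ms):
        """Strengthen the connection; shorter delays give larger updates."""
        self.g += self.lr * math.exp(-delay_ms / self.tau)
        self.g = min(self.g, 1.0)  # saturate at a maximum conductance
        return self.g

# Tightly timed spike pairs strengthen the connection more...
fast = SynapticElement()
for delay in (5, 5, 5):
    fast.spike_pair(delay)
fast_g = fast.g

# ...than widely spaced ones.
slow = SynapticElement()
for delay in (50, 50, 50):
    slow.spike_pair(delay)
slow_g = slow.g

assert fast_g > slow_g
```

Because the state variable is a continuous number rather than a bit, the model also mirrors the analog, non-binary character of the device discussed below.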
Structurally, the device consists of the nickelate semiconductor sandwiched between two platinum electrodes and adjacent to a small pocket of ionic liquid. An external circuit multiplexer converts the time delay into a magnitude of voltage which it applies to the ionic liquid, creating an electric field that either drives ions into the nickelate or removes them. The entire device, just a few hundred microns long, is embedded in a silicon chip.
The synaptic transistor offers several immediate advantages over traditional silicon transistors. For a start, it is not restricted to the binary system of ones and zeros.
“This system changes its conductance in an analog way, continuously, as the composition of the material changes,” explains Shi. “It would be rather challenging to use CMOS, the traditional circuit technology, to imitate a synapse, because real biological synapses have a practically unlimited number of possible states—not just ‘on’ or ‘off.’”
The synaptic transistor offers another advantage: non-volatile memory, which means even when power is interrupted, the device remembers its state.
Additionally, the new transistor is inherently energy efficient. The nickelate belongs to an unusual class of materials, called correlated electron systems, that can undergo an insulator-metal transition. At a certain temperature—or, in this case, when exposed to an external field—the conductance of the material suddenly changes.
“We exploit the extreme sensitivity of this material,” says Ramanathan. “A very small excitation allows you to get a large signal, so the input energy required to drive this switching is potentially very small. That could translate into a large boost for energy efficiency.”
The nickelate system is also well positioned for seamless integration into existing silicon-based systems.
“In this paper, we demonstrate high-temperature operation, but the beauty of this type of device is that the ‘learning’ behavior is more or less temperature insensitive, and that’s a big advantage,” says Ramanathan. “We can operate this anywhere from about room temperature up to at least 160 degrees Celsius.”
For now, the limitations relate to the challenges of synthesizing a relatively unexplored material system, and to the size of the device, which affects its speed.
“In our proof-of-concept device, the time constant is really set by our experimental geometry,” says Ramanathan. “In other words, to really make a super-fast device, all you’d have to do is confine the liquid and position the gate electrode closer to it.”
In fact, Ramanathan and his research team are already planning, with microfluidics experts at SEAS, to investigate the possibilities and limits for this “ultimate fluidic transistor.”
He also has a seed grant from the National Academy of Sciences to explore the integration of synaptic transistors into bioinspired circuits, with L. Mahadevan, Lola England de Valpine Professor of Applied Mathematics, professor of organismic and evolutionary biology, and professor of physics.
“In the SEAS setting it’s very exciting; we’re able to collaborate easily with people from very diverse interests,” Ramanathan says.
For the materials scientist, as much curiosity derives from exploring the capabilities of correlated oxides (like the nickelate used in this study) as from the possible applications.
“You have to build new instrumentation to be able to synthesize these new materials, but once you’re able to do that, you really have a completely new material system whose properties are virtually unexplored,” Ramanathan says. “It’s very exciting to have such materials to work with, where very little is known about them and you have an opportunity to build knowledge from scratch.”
“This kind of proof-of-concept demonstration carries that work into the ‘applied’ world,” he adds, “where you can really translate these exotic electronic properties into compelling, state-of-the-art devices.”
Many animals have highly developed senses, such as vision in carnivores, touch in mice, and hearing in bats. New research from the RIKEN Brain Science Institute has uncovered a brain molecule that can explain the existence of such finely tuned sensory capabilities, revealing how brain cells responsible for specific senses are positioned to receive incoming sensory information.

The study, led by Dr. Tomomi Shimogori and published in the journal Science, sought to uncover the molecule that enables high acuity sensing by examining brain regions that receive information from the senses. They found that areas responsible for touch in mice and vision in ferrets contain a protein called BTBD3 that optimizes neuronal shape to receive sensory input more efficiently.
Neurons have a highly specialized shape, sending signals through one long projection called an axon, while receiving signals from many branch-like projections called dendrites. The final shape and connections to other neurons are typically completed after birth. Some neurons have dendrites distributed equally all around the cell body, like a starfish, while in others they extend only from one side, like a squid, steering towards axons that are actively bringing in information from the peripheral nerves. It was previously unknown what enables neurons to have highly oriented dendrites.
“We were fascinated by the dendrite patterning changes that occurred during the early postnatal stage that is controlled by neuronal input,” says Dr. Shimogori. “We found a fundamental process that is important to remove unnecessary dendrites to prevent mis-wiring and to make efficient neuronal circuits.”
The researchers searched for genes that are active exclusively in the mouse somatosensory cortex, the brain region responsible for the sense of touch. They found that the gene coding for the protein BTBD3 was active in neurons of the barrel cortex, which receives input from the whiskers (the highly sensitive tactile sensors of mice), and that these neurons had unidirectional dendrites.
Using gene manipulations in the embryonic mouse brain, the authors found that eliminating BTBD3 made dendrites distribute uniformly around neurons in the mouse barrel cortex. In contrast, artificially introducing BTBD3 into the visual cortex of mice, where BTBD3 is not normally found, reoriented the normally symmetrically positioned dendrites to one side. The same mechanism shaped neurons in the visual cortex of ferrets, which, unlike that of the mouse, contains BTBD3.
“High acuity sensory function may have been enabled by the evolution of BTBD3 and related proteins in brain development,” adds Dr. Shimogori. “Finding BTBD3 selectively in the visual and auditory cortex of the common marmoset, a species that relies heavily on high acuity vocal and visual communication for survival, and in mouse, where it is expressed in high-acuity tactile and olfactory areas, but not in low acuity visual cortex, supports this idea.” The authors plan to examine their theory by testing sensory function in mice without BTBD3 gene expression.
A discovery from Case Western Reserve and Cleveland Clinic researchers could provide epilepsy patients invaluable advance guidance about their chances to improve symptoms through surgery.
Assistant Professor of Neurosciences Roberto Fernández Galán, PhD, and his collaborators have identified a new, far more accurate way to determine precisely what portions of the brain suffer from the disease. This information can give patients and physicians better information regarding whether temporal lobe surgery will provide the results they seek.
“Our analysis of neuronal activity in the temporal lobe allows us to determine whether it is diseased, and therefore, whether removing it with surgery will be beneficial for the patient,” said Galán, the paper’s senior author. “In terms of accuracy and efficiency, our analysis method is a significant improvement relative to current approaches.”
The findings appear in research published October 30 in the open access journal PLOS ONE.
About one-third of patients with temporal lobe epilepsy do not respond to medical treatment and opt for lobectomies to alleviate their symptoms. Yet the surgery’s success rate is only 60 to 70 percent because of the difficulty of identifying the diseased brain tissue prior to the procedure.
Galán and investigators from Cleveland Clinic determined that using intracranial electroencephalography (iEEG) to measure patients’ functional neural connectivity (that is, the communication from one brain region to another) identified the epileptic lobe with 87 percent accuracy. An iEEG records electrical activity with electrodes implanted in the brain. Key indicators of a diseased lobe are weak and similar connections.
In the retrospective study, Galán and Arun Antony, MD, formerly a senior clinical fellow in the Epilepsy Center at Cleveland Clinic and now an assistant professor of neurology at the University of Pittsburgh, examined data from 23 patients with temporal lobe epilepsy who had all or part of their temporal lobes removed after iEEG evaluations performed at Cleveland Clinic. The researchers examined the results of patients’ preoperative iEEG to determine the degree of functional connectivity that was associated with successful surgical outcomes.
“The concept of functional connectivity has been extensively studied by basic science researchers, but has not yet found its way into the realm of clinical epilepsy treatment,” said Antony, the paper’s first author. “Our discovery is another step towards the use of measures of functional connectivity in making clinical decisions in the treatment of epilepsy.”
As a standard preoperative test for lobectomy surgery, physicians analyze iEEG traces looking for simultaneous discharges of neurons that appear as spikes in the recordings, which indicate epileptic activity. This PLOS ONE discovery evaluates the data differently by examining normal brain activity in the absence of spikes and inferring connectivity.
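The general idea of inferring connectivity from spike-free activity can be illustrated with a toy sketch. This is not the authors' algorithm; the data, channel counts, and summary statistics below are synthetic and invented for illustration. Pairwise correlations between recording channels stand in for functional connectivity, and the mean strength of a region's connections summarizes how strongly its channels communicate.

```python
# Toy functional-connectivity sketch: channels that share a common
# underlying signal show strong pairwise correlations; near-independent
# channels show weak ones. Synthetic data only.
import math
import random

def correlation(x, y):
    """Pearson correlation between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mean_connectivity(channels):
    """Average |correlation| over all pairs of channels in a region."""
    strengths = [abs(correlation(channels[i], channels[j]))
                 for i in range(len(channels))
                 for j in range(i + 1, len(channels))]
    return sum(strengths) / len(strengths)

random.seed(0)
# Synthetic "strongly connected" region: channels share a common signal.
common = [math.sin(t / 5.0) for t in range(200)]
strong_region = [[c + random.gauss(0, 0.3) for c in common] for _ in range(4)]
# Synthetic "weakly connected" region: near-independent noise.
weak_region = [[random.gauss(0, 1.0) for _ in range(200)] for _ in range(4)]

strong_mean = mean_connectivity(strong_region)
weak_mean = mean_connectivity(weak_region)
assert strong_mean > weak_mean
```

In the study itself, both the weakness and the mutual similarity of a lobe's connections served as indicators of disease; this sketch shows only the simpler strength comparison.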
Researchers at Johns Hopkins say they have found that a gene already implicated in human speech disorders and epilepsy is also needed for vocalizations and synapse formation in mice. The finding, they say, adds to scientific understanding of how language develops, as well as the way synapses — the connections among brain cells that enable us to think — are formed. A description of their experiments appears in Science Express on Oct. 31.

A group led by Richard Huganir, Ph.D., director of the Solomon H. Snyder Department of Neuroscience and a Howard Hughes Medical Institute investigator, set out to investigate genes involved in synapse formation. Gek-Ming Sia, Ph.D., a research associate in Huganir’s laboratory, first screened hundreds of human genes for their effects on lab-grown mouse brain cells. When one gene, SRPX2, was turned up higher than normal, it caused the brain cells to erupt with new synapses, Sia found.
When Huganir’s team injected fetal mice with an SRPX2-blocking compound, the mice showed fewer synapses than normal mice even as adults, the researchers found. In addition, when SRPX2-deficient mouse pups were separated from their mothers, they did not emit high-pitched distress calls as other pups do, indicating they lacked the rodent equivalent of early language ability.
Other researchers’ analyses of the human genome have found that mutations in SRPX2 are associated with language disorders and epilepsy. When Huganir’s team introduced human SRPX2 carrying the same mutations into fetal mice, the mice also showed vocalization deficits as young pups.
Another research group at Institut de Neurobiologie de la Méditerranée in France had previously shown that SRPX2 interacts with FoxP2, a gene that has gained wide attention for its apparently crucial role in language ability.
Huganir’s team confirmed this, showing that FoxP2 controls how much protein the SRPX2 gene makes and may affect language in this way. “FoxP2 is famous for its role in language, but it’s actually involved in other functions as well,” Huganir comments. “SRPX2 appears to be more specialized to language ability.” Huganir suspects that the gene may also be involved in autism, since autistic patients often have language impairments, and the condition has been linked to defects in synapse formation.
This study is only the beginning of teasing out how SRPX2 acts on the brain, Sia says. “We’d like to find out what other proteins it acts on, and how exactly it regulates synapses and enables language development.”
Neonatologists seem to perform miracles in the fight to support the survival of babies born prematurely.
To promote their survival, cortisol-like drugs called glucocorticoids are administered frequently to women in preterm labor to accelerate their babies’ lung maturation prior to birth. Cortisol is a substance naturally released by the body when stressed. But the levels of glucocorticoids administered to promote lung development are higher than those produced by typical stress, perhaps mirrored only in the body’s reaction to extreme stress.
The benefit of glucocorticoids is undisputed and has certainly saved the lives of countless babies, but this exposure also may have some negative consequences. Indeed, excessive glucocorticoid levels may have effects on brain development, perhaps contributing to emotional problems later in life.
In this issue of Biological Psychiatry, Dr. Elysia Davis at the University of Denver and her colleagues report new findings on the effects of synthetic glucocorticoid on human brain development. Their study focused on healthy children who were born full-term, avoiding the confounding effects of premature birth.
The investigators conducted brain imaging sessions and careful assessments of 54 children, 6 to 10 years of age. The mothers of the participating children also completed reports on their child’s behavior. The researchers then divided the children into two groups: those who were exposed to glucocorticoids prenatally and those who were not.
In this study, children with fetal glucocorticoid exposure showed significant cortical thinning, and a thinner cortex also predicted more emotional problems. One particularly affected region, the rostral anterior cingulate cortex, was 8 to 9 percent thinner among children exposed to glucocorticoids. Interestingly, other studies have shown that this region of the brain is affected in individuals diagnosed with mood and anxiety disorders.
"Fetal exposure to a frequently administered stress hormone is associated with consequences for child brain development that persist for at least 6 to 10 years. These neurological changes are associated with increased risk for stress and emotional problems," Davis explained of their findings. "Importantly, these findings were observed among healthy children born full term."
Although such a finding does not indicate that glucocorticoids ‘caused’ these changes, the researchers did determine that the findings cannot be explained by any obvious confounding differences between the groups. The two groups did not differ in weight or gestational age at birth, Apgar scores, maternal factors, or any other basic demographics. Thus, the findings do suggest that glucocorticoid administration may somehow alter the trajectory of brain development in exposed children.
"This study provides evidence that prenatal exposure to stress hormones shapes the construction of the fetal nervous system with consequences for the developing brain that persist into the preadolescent period," she added.
"This study highlights potential links between early cortisol exposure, cortical thinning and mood symptoms in children. It may provide important insights into the development of the brain and the long-term impact of maternal stress," commented Dr. John Krystal, Editor of Biological Psychiatry.
Our vision depends on exquisitely organized layers of cells within the eye’s retina, each with a distinct role in perception. Johns Hopkins researchers say they have taken an important step toward understanding how those cells are organized to produce what the brain “sees.” Specifically, they report identification of a gene that guides the separation of two types of motion-sensing cells, offering insight into how cellular layering develops in the retina, with possible implications for the brain’s cerebral cortex. A report on the discovery is published in the Nov. 1 issue of the journal Science.
“The separation of different types of cells into layers is critical to their ability to form the precise sets of connections with each other — the circuitry — that lets us process visual information,” says Alex Kolodkin, Ph.D., a professor in the Johns Hopkins University School of Medicine’s Solomon H. Snyder Department of Neuroscience and an investigator at the Howard Hughes Medical Institute. “There is still much to learn about how that separation happens during development, but we’ve identified for the first time proteins that enable two very similar types of cells to segregate into their own distinct neuronal layers.”
Kolodkin’s research group specializes in studying how circuitry forms among neurons (brain and nerve cells). Past experiments revealed that two types of proteins, called semaphorins and plexins, help guide this process. In the current study, Lu Sun, a graduate student in Kolodkin’s laboratory, focused on the genes that carry the blueprint for these proteins in two of the 10 layers of cells in the mammalian retina.
Those two layers are made up of so-called starburst amacrine cells (SACs). One type of SAC, known as “Off,” detects motion by sensing decreases in the amount of light hitting the retina, while the other type, “On,” detects increases in light. Sun examined the amounts of several semaphorin and plexin proteins being made by each type of cell, and found that only the “On” SACs were making a semaphorin called Sema6A. Sema6A can only work in the retina by interacting with its receptor, a plexin called PlexA2, but Sun found both types of SAC were churning out roughly equal amounts of PlexA2.
Reasoning that Sema6A might be the key difference that enabled the “On” and “Off” SACs to segregate from one another, Kolodkin’s team analyzed mice in which the genes for either Sema6A, PlexA2 or both could be switched off, and looked at the effects of this manipulation on their retinas. “Knocking out” either gene during development led the “On” and “Off” layers to run together, the team found, and caused abnormalities in the “On” SACs’ tree-like extensions. However, the “Off” SACs, which hadn’t been using their Sema6A gene in the first place, still looked and functioned normally.
“When signaling between Sema6A and PlexA2 was lost, not only was layering compromised, but the ‘On’ SACs lost both their distinctive symmetrical appearance, and, importantly, their motion-detecting ability,” Sun says. “This is evidence that the beautiful symmetric shape that gives starburst amacrine cells their name is necessary for their function.”
Adds Kolodkin, “We hope that learning how layering occurs in these very specific cell types will help us begin sorting out how connections are made not just in the retina, but also in neurons throughout the nervous system. Layering also occurs in the cerebral cortex, for example, which is responsible for thought and consciousness, and we really want to know how this is organized during neural development.”
Gene regulation technology increases survival rates in mice with glioblastoma
Glioblastoma multiforme (GBM), the brain cancer that killed Sen. Edward Kennedy and kills approximately 13,000 Americans a year, is aggressive and incurable. Now a Northwestern University research team is the first to demonstrate delivery of a drug that turns off a critical gene in this complex cancer, increasing survival rates significantly in animals with the deadly disease.

Image: Researchers combined gold nanoparticles (in yellow) with small interfering RNAs (in green) to knock down an oncogene that is overexpressed in glioblastoma.
The novel therapeutic, which is based on nanotechnology, is small and nimble enough to cross the blood-brain barrier and get to where it is needed — the brain tumor. Designed to target a specific cancer-causing gene in cells, the drug simply flips the switch of the troublesome oncogene to “off,” silencing the gene. This knocks out the proteins that keep cancer cells immortal.
In a study of mice, the nontoxic drug was delivered by intravenous injection. In animals with GBM, the survival rate increased nearly 20 percent, and tumor size was reduced threefold to fourfold compared with the control group. The results are published today (Oct. 30) in Science Translational Medicine.
“This is a beautiful marriage of a new technology with the genes of a terrible disease,” said Chad A. Mirkin, a nanomedicine expert and a senior co-author of the study. “Using highly adaptable spherical nucleic acids, we specifically targeted a gene associated with GBM and turned it off in vivo. This proof-of-concept further establishes a broad platform for treating a wide range of diseases, from lung and colon cancers to rheumatoid arthritis and psoriasis.”
Mirkin is the George B. Rathmann Professor of Chemistry in the Weinberg College of Arts and Sciences and professor of medicine, chemical and biological engineering, biomedical engineering and materials science and engineering.
Glioblastoma expert Alexander H. Stegh came to Northwestern University in 2009, attracted by the University’s reputation for interdisciplinary research, and within weeks was paired up with Mirkin to tackle the difficult problem of developing better treatments for glioblastoma.
Help is critical for patients with GBM: The median survival rate is 14 to 16 months, and approximately 16,000 new cases are reported in the U.S. every year.
In their research partnership, Mirkin had the perfect tool to tackle the deadly cancer: spherical nucleic acids (SNAs), new globular forms of DNA and RNA, which he had invented at Northwestern in 1996, and which are nontoxic to humans. The nucleic acid sequence is designed to match the target gene.
And Stegh had the gene: In 2007, he and colleagues identified the gene Bcl2Like12 as one that is overexpressed in glioblastoma tumors and related to glioblastoma’s resistance to conventional therapies.
“My research group is working to uncover the secrets of cancer and, more importantly, how to stop it,” said Stegh, a senior co-author of the study. “Glioblastoma is a very challenging cancer, and most chemotherapeutic drugs fail in the clinic. The beauty of the gene we silenced in this study is that it plays many different roles in therapy resistance. Taking the gene out of the picture should allow conventional therapies to be more effective.”
Stegh is an assistant professor in the Ken and Ruth Davee Department of Neurology at the Northwestern University Feinberg School of Medicine and an investigator in the Northwestern Brain Tumor Institute.
The power of gene regulation technology is that a disease with a genetic basis can be attacked and treated if scientists have the right tools. Thanks to the Human Genome Project and genomics research over the last two decades, there is an enormous number of genetic targets; having the right therapeutic agents and delivery materials has been the challenge.
“The RNA interference-based SNAs are a completely novel approach to thinking about cancer therapy,” Stegh said. “One of the problems is that we have large lists of genes that are somehow dysregulated in glioblastoma, but we have absolutely no way of targeting all of them using standard pharmacological approaches. That’s where we think nanomaterials can play a fundamental role in allowing us to implement the concept of personalized medicine in cancer therapy.”
Stegh and Mirkin’s drug for GBM is specially designed to target the Bcl2Like12 gene in cancer cells. Key is the nanostructure’s spherical shape and nucleic acid density. Normal (linear) nucleic acids cannot get into cells, but these spherical nucleic acids can. Small interfering RNA (siRNA) surrounds a gold nanoparticle like a shell; the nucleic acids are highly oriented, densely packed and form a tiny sphere. (The gold nanoparticle core is only 13 nanometers in diameter.) The RNA’s sequence is programmed to silence the disease-causing gene.
“The problems posed by glioblastoma and many other diseases are simply too big for one research group to handle,” said Mirkin, who also is the director of Northwestern’s International Institute for Nanotechnology. “This work highlights the power of scientists and engineers from different fields coming together to address a difficult medical issue.”
Mirkin first developed the nanostructure platform used in this study in 1996 at Northwestern, and the technology now is the basis of powerful commercialized and FDA-cleared medical diagnostic tools. This new development, however, is the first realization that the nanostructures injected into an animal naturally find their target in the brain and can deliver an effective payload of therapeutics.
The next step for the therapeutic will be to test it in clinical trials.
The nanostructures used in this study were developed in Mirkin’s lab on the Evanston campus and then used in cell and animal studies in Stegh’s lab on the Chicago campus.
High blood-sugar levels, such as those linked with Type 2 diabetes, make the beta amyloid protein associated with Alzheimer’s disease dramatically more toxic to cells lining blood vessels in the brain, according to a new Tulane University study published in the latest issue of the Journal of Alzheimer’s Disease.
The study supports growing evidence pointing to glucose levels and vascular damage as contributors to dementia.
“Previously, it was believed that Alzheimer’s disease was due to the accumulation of ‘tangles’ in neurons in the brain from overproduction and reduced removal of beta amyloid protein,” said senior investigator Dr. David Busija, regents professor and chair of pharmacology at Tulane University School of Medicine. “While neuronal involvement is a major factor in Alzheimer’s development, recent evidence indicates damaged cerebral blood vessels compromised by high blood sugar play a role. Even though the links among Type 2 diabetes, brain blood vessels and Alzheimer’s progression are unclear, hyperglycemia appears to play a role.”
Drs. Cristina Carvalho and Paula Moreira from the University of Coimbra in Portugal were co-investigators in the study.
Researchers studied cell cultures taken from the lining of cerebral blood vessels, one from normal rats and another from mice with uncontrolled chronic diabetes. They exposed the cells to beta amyloid and different levels of glucose and later measured their viability. Cells exposed to high glucose or beta amyloid alone showed no changes in viability. However, when exposed to hyperglycemic conditions and beta amyloid, viability decreased by 40 percent. Researchers suspect the damage is due to oxidative stress from the mitochondria of the cell.
The cells from diabetic mice were more susceptible to damage and death from beta amyloid protein, even at normal glucose levels. The increased toxicity of beta amyloid may damage the blood-brain barrier, disrupt normal blood flow to the brain and decrease clearance of beta amyloid protein.
The study’s findings underscore the need to aggressively control blood sugar levels in diabetic individuals, Busija said.
Light enhances brain activity during a cognitive task even in some people who are totally blind, according to a study conducted by researchers at the University of Montreal and Boston’s Brigham and Women’s Hospital. The findings contribute to scientists’ understanding of everyone’s brains, as they also revealed how quickly light affects cognition.

“We were stunned to discover that the brain still responds significantly to light in these three rare, completely blind patients despite their having absolutely no conscious vision at all,” said senior co-author Steven Lockley. “Light doesn’t just allow us to see; it tells the brain whether it’s night or day, which in turn ensures that our physiology, metabolism and behavior are synchronized with environmental time.”

“For diurnal species like ours, light stimulates day-like brain activity, improving alertness and mood, and enhancing performance on many cognitive tasks,” explained senior co-author Julie Carrier.

The results indicate that these patients’ brains can still “see,” or detect, light via a novel photoreceptor in the ganglion cell layer of the retina, distinct from the rods and cones we use to see.

Scientists believe, however, that these specialized photoreceptors in the retina also contribute to visual function in the brain even when cells in the retina responsible for normal image formation have lost their ability to receive or process light. A previous study in a single blind patient suggested that this was possible, but the research team wanted to confirm the result in different patients. To test this hypothesis, the three participants were asked to say whether a blue light was on or off, even though they could not see the light. “We found that the participants did indeed have a non-conscious awareness of the light – they were able to determine correctly when the light was on at a rate greater than chance, without being able to see it,” explained first author Gilles Vandewalle.
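The “greater than chance” claim rests on a standard statistical test: if a participant were purely guessing on an on/off judgment, correct responses would follow a binomial distribution with a 50 percent success rate. A minimal sketch of that calculation, using only the Python standard library (the trial counts below are hypothetical; the study’s actual numbers are not given here):

```python
from math import comb

def binomial_p_value(successes: int, trials: int, chance: float = 0.5) -> float:
    """One-sided probability of getting at least `successes` correct
    out of `trials` if the subject were guessing at rate `chance`."""
    return sum(
        comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
        for k in range(successes, trials + 1)
    )

# Hypothetical example: 70 correct "light on/off" judgments in 100 trials.
p = binomial_p_value(70, 100)
print(f"p = {p:.6f}")
```

A p-value well below 0.05 here would indicate performance unlikely to arise from guessing, which is what “non-conscious awareness” means operationally in such forced-choice tasks.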
The next steps involved looking closely at what happened to brain activation when light was flashed at their eyes at the same time as their attentiveness to a sound was monitored. “The objective of this second test was to determine whether the light affected the brain patterns associated with attentiveness – and it did,” said first author Olivier Collignon.
Finally, the participants underwent a functional MRI brain scan as they performed a simple sound-matching task while lights were flashed in their eyes. “The fMRI further showed that during an auditory working memory task, less than a minute of blue light activated brain regions important to perform the task. These regions are involved in alertness and cognition regulation, as well as being key areas of the default mode network,” Vandewalle explained. Researchers believe that the default network is linked to keeping a minimal amount of resources available for monitoring the environment when we are not actively doing something. “If our understanding of the default network is correct, our results raise the intriguing possibility that light is key to maintaining sustained attention,” agreed Lockley and Carrier. “This theory may explain why the brain’s performance is improved when light is present during tasks.”
TAU researchers identify specific molecules that could be targeted to treat the disorder

Plaques and tangles made of proteins are believed to contribute to the debilitating progression of Alzheimer’s disease. But proteins also play a positive role in important brain functions, like cell-to-cell communication and immunological response. Molecules called microRNAs regulate both good and bad protein levels in the brain, binding to messenger RNAs to prevent them from being translated into proteins.
Now, Dr. Boaz Barak and a team of researchers in the lab of Prof. Uri Ashery of Tel Aviv University’s Department of Neurobiology at the George S. Wise Faculty of Life Sciences and the Sagol School of Neuroscience have identified a specific set of microRNAs that detrimentally regulate protein levels in the brains of mice with Alzheimer’s disease and beneficially regulate protein levels in the brains of other mice living in a stimulating environment.
"We were able to create two lists of microRNAs — those that contribute to brain performance and those that detract — depending on their levels in the brain," says Dr. Barak. "By targeting these molecules, we hope to move closer toward earlier detection and better treatment of Alzheimer’s disease."
Prof. Daniel Michaelson of TAU’s Department of Neurobiology in the George S. Wise Faculty of Life Sciences and the Sagol School of Neuroscience, Dr. Noam Shomron of TAU’s Department of Cell and Developmental Biology and Sagol School of Neuroscience, Dr. Eitan Okun of Bar-Ilan University, and Dr. Mark Mattson of the National Institute on Aging collaborated on the study, published in Translational Psychiatry.
A double-edged sword
Alzheimer’s disease is the most common form of dementia. Currently incurable, it increasingly impairs brain function over time, ultimately leading to death. The TAU researchers became interested in the disease while studying the brains of mice living in an “enriched environment” — an enlarged cage with running wheels, bedding and nesting material, a house, and frequently changing toys. Such environments have been shown to improve and maintain brain function in animals much as intellectual activity and physical fitness do in people.
The researchers ran a series of tests on a part of the mice’s brains called the hippocampus, which plays a major role in memory and spatial navigation and is one of the earliest targets of Alzheimer’s disease in humans. They found that, compared to mice in normal cages, the mice from the enriched environment developed higher levels of good proteins and lower levels of bad proteins. Then, for the first time, they identified the microRNAs responsible for regulating the expression of both good and bad proteins.
Armed with this new information, the researchers analyzed changes in the levels of microRNAs in the hippocampi of young, middle-aged, and old mice with an Alzheimer’s-disease-like condition. They found that some of the microRNAs were expressed in exactly inverse amounts in mice with Alzheimer’s disease as they were in mice from the enriched environment. The result was higher levels of bad proteins and lower levels of good proteins in the hippocampi of old mice with Alzheimer’s disease. The microRNAs the researchers identified had already been shown or predicted to regulate the expression of proteins in ways that contribute to Alzheimer’s disease. Their finding that the microRNAs are inversely regulated in mice from the enriched environment is important, because it suggests the molecules can be targeted by activities or drugs to preserve brain function.
Brain-busting potential
Two findings appear to have particular potential for treating people with Alzheimer’s disease. In the brains of old mice with the disease, microRNA-325 was diminished, leading to higher levels of tomosyn, a protein well known to inhibit cellular communication in the brain. The researchers hope that microRNA-325 can eventually be used to create a drug that helps Alzheimer’s patients maintain low levels of tomosyn and preserve brain function. Additionally, the researchers found several important microRNAs at abnormally low levels beginning in the brains of young mice. If the same holds in humans, these microRNAs could be used as biomarkers to detect Alzheimer’s disease at a much earlier age than is now possible — at 30 years of age, for example, instead of 60.
"Our biggest hope is to be able to one day use microRNAs to detect Alzheimer’s disease in people at a young age and begin a tailor-made treatment based on our findings, right away," says Dr. Barak.
Was the evolution of high-quality vision in our ancestors driven by the threat of snakes? Work by neuroscientists in Japan and Brazil is supporting the theory originally put forward by Lynne Isbell, professor of anthropology at the University of California, Davis.

In a paper published Oct. 28 in the journal Proceedings of the National Academy of Sciences, Isbell; Hisao Nishijo and Quan Van Le at Toyama University, Japan; Rafael Maior and Carlos Tomaz at the University of Brasilia, Brazil; and colleagues show that there are specific nerve cells in the brains of rhesus macaque monkeys that respond to images of snakes.
The snake-sensitive neurons were more numerous, and responded more strongly and rapidly, than other nerve cells that fired in response to images of macaque faces or hands, or to geometric shapes. Isbell said she was surprised that more neurons responded to snakes than to faces, given that primates are highly social animals.
"We’re finding results consistent with the idea that snakes have exerted strong selective pressure on primates," Isbell said.
Isbell originally published her hypothesis in 2006, following up with a book, “The Fruit, the Tree and the Serpent” (Harvard University Press, 2009) in which she argued that our primate ancestors evolved good, close-range vision primarily to spot and avoid dangerous snakes.
Modern mammals and snakes big enough to eat them evolved at about the same time, 100 million years ago. Venomous snakes are thought to have appeared about 60 million years ago — “ambush predators” that have shared the trees and grasslands with primates.
Nishijo’s laboratory studies the neural mechanisms responsible for emotion and fear in rhesus macaque monkeys, especially instinctive responses that occur without learning or memory. Previous researchers have used snakes to provoke fear in monkeys, he noted. When Nishijo heard of Isbell’s theory, he thought it might explain why monkeys are so afraid of snakes.
"The results show that the brain has special neural circuits to detect snakes, and this suggests that the neural circuits to detect snakes have been genetically encoded," Nishijo said.
The monkeys tested in the experiment were reared in a walled colony and had never previously encountered a real snake.
"I don’t see another way to explain the sensitivity of these neurons to snakes except through an evolutionary path," Isbell said.
Isbell said she’s pleased to be able to collaborate with neuroscientists.
"I don’t do neuroscience and they don’t do evolution, but we can put our brains together and I think it brings a wider perspective to neuroscience and new insights for evolution," she said.