The hippocampus is a small structure in the brains of mammals that plays a crucial role in processing input from our senses and allows perceptions to be stored as memories. Nerve cells that inhibit the activity of other cells have now been shown to play a much larger and more complex role in these processes than previously assumed. Teams led by Prof. Dr. Marlene Bartos from the Cluster of Excellence BrainLinks-BrainTools at the University of Freiburg and Prof. Dr. Imre Vida from the Cluster of Excellence NeuroCure at the hospital Charité in Berlin report these findings in the current issue of the Journal of Neuroscience.

(Image caption: Three different cell types in the hippocampus (BC, HCP, and HIPP) were previously known to have different morphologies (top). New research shows that they respond to electrical stimulation (black traces) by inhibiting other nerve cells in very different patterns (bottom), allowing for more powerful information processing. Credit: BrainLinks-BrainTools)
In their study, the scientists investigated how special types of so-called interneurons build connections with each other within the hippocampus and how their function influences the network of nerve cells as a whole. Interneurons do not prompt other nerve cells to become active but, on the contrary, inhibit them. This kind of suppression plays an important role in brain activity in general: information processing would not be possible without it, because a brain in which all nerve cells are active at the same time effectively ceases to function.
The hippocampus is home to a variety of different inhibitory cells, which have long been known to differ greatly in form and function. Up to now, however, it was generally assumed that their actual influence on the activity of the brain structure they belong to is rather small. By combining several different experimental methods, Bartos, Vida, and their teams succeeded in showing that these cells can in fact strongly shape the activity, and the timing of activity patterns, within the hippocampus. Moreover, the various possible combinations of connections between these different cell types show markedly different functional characteristics. This makes inhibition within the hippocampus much more flexible and versatile than previously assumed, and the scientists suspect that it also greatly expands the hippocampus's capacity to process information. The results published in this study come from experiments in acute slice preparations of the hippocampus; the researchers' next task will be to verify them in the intact brain.
A study group at the Medical University of Vienna’s Centre for Brain Research has investigated the function of an intracellular dopamine pump in Parkinson’s patients compared to a healthy test group. It turned out that this pump is less effective at taking up dopamine and storing it in vesicles in the brain cells of Parkinson’s sufferers. If dopamine is not stored correctly, however, it can cause self-destruction of the affected nerve cells.

In the brain, dopamine mediates the exchange of information between different neurons and, to help it do this, it is continuously reformed at the contact points between the corresponding nerve cells. It is stored in structures known as vesicles (intracellular bubbles) and it is released when required. In people with Parkinson’s disease, the death of these nerve cells causes a lack of dopamine, and this in turn causes the familiar movement problems such as motor retardation, stiffness of the muscles and tremors.
More than 50 years ago, in the Institute of Pharmacology at the University of Vienna (now the MedUni Vienna), Herbert Ehringer and Oleh Hornykiewicz discovered that Parkinson’s disease is caused by a lack of dopamine in certain regions of the brain. This discovery enabled Hornykiewicz to introduce the amino acid L-DOPA into the treatment of Parkinson’s to substitute the dopamine and make the symptoms of the condition manageable for years.
The reasons for the death of nerve cells in Parkinson’s disease are not yet fully understood, however, which is why it is still not possible to prevent the disease from developing. Nevertheless, dopamine itself, if it is not stored correctly in vesicles, can cause self-destruction of the affected nerve cells.
Now, a further step forward has been taken in the research into the causes of this disease: a study at the MedUni Vienna’s Centre for Brain Research, led by Christian Pifl and the now 87-year-old Oleh Hornykiewicz, compared the brains of deceased Parkinson’s patients with those of a neurologically healthy control group. For the first time, it was possible to prepare the dopamine-storing vesicles from the brains so that their ability to store dopamine by pumping it in could be measured in quantitative terms.
It turned out that the pumps in the vesicles of Parkinson’s sufferers took up dopamine less efficiently. “This pump deficiency and the associated reduction in dopamine storage capacity of the Parkinson’s vesicles could lead to dopamine collecting in the nerve cells, developing its toxic effect and destroying the nerve cells,” explains Christian Pifl.
The amygdala is a key “fear center” in the brain. Alterations in the development of the amygdala during childhood may have an important influence on the emergence of anxiety problems, reports a new study in the current issue of Biological Psychiatry.

Researchers at the Stanford University School of Medicine recruited 76 children, 7 to 9 years of age, a period when anxiety-related traits and symptoms can first be reliably identified. The children’s parents completed assessments designed to measure the anxiety levels of the children, and the children then underwent non-invasive magnetic resonance imaging (MRI) scans of brain structure and function.
The researchers found that children with high levels of anxiety had enlarged amygdala volume and increased connectivity with other brain regions responsible for attention, emotion perception, and regulation, compared to children with low levels of anxiety. They also developed an equation that reliably predicted the children’s anxiety level from the MRI measurements of amygdala volume and amygdala functional connectivity.
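The study's predictive equation is not reproduced in this summary, but the idea can be sketched as a simple linear model combining the two MRI measures. Everything below is a hypothetical illustration: the coefficients and feature values are placeholders, not the values the Stanford team actually fitted.

```python
# Toy sketch of a linear predictive model of the kind described above:
# anxiety score as a weighted combination of amygdala volume and amygdala
# functional connectivity. All coefficients here are made-up illustrations,
# not the study's fitted parameters.

def predict_anxiety(volume_mm3, connectivity, b0=-5.0, b_vol=0.004, b_conn=3.0):
    """Linear model: larger volume and stronger connectivity -> higher score."""
    return b0 + b_vol * volume_mm3 + b_conn * connectivity

# A child with a larger, more strongly connected amygdala scores higher.
low = predict_anxiety(1500.0, 0.2)   # smaller amygdala, weaker coupling
high = predict_anxiety(1900.0, 0.6)  # larger amygdala, stronger coupling
```

In practice such coefficients would be estimated by regressing parent-reported anxiety scores against the imaging measures across all 76 children.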
The most affected region was the basolateral portion of the amygdala, a subregion of the amygdala implicated in fear learning and the processing of emotion-related information.
“It is a bit surprising that alterations to the structure and connectivity of the amygdala were so significant in children with higher levels of anxiety, given both the young age of the children and the fact that their anxiety levels were too low to be observed clinically,” commented Dr. Shaozheng Qin, first author on this study.
Dr. John Krystal, Editor of Biological Psychiatry, commented, “It is critical that we move from these interesting cross-sectional observations to longitudinal studies, so that we can separate the extent to which larger and better connected amygdalae are risk factors or consequences of increased childhood anxiety.”
“However, our study represents an important step in characterizing altered brain systems and developing predictive biomarkers for the identification of young children at risk for anxiety disorders,” Qin added. “Understanding the influence of childhood anxiety on specific amygdala circuits, as identified in our study, will provide important new insights into the neurodevelopmental origins of anxiety in humans.”
Orexin proteins, which are blamed for spontaneous daytime sleepiness, also play a crucial role in bone formation, according to findings by UT Southwestern Medical Center researchers. The findings could potentially give rise to new treatments for osteoporosis, the researchers say.

Orexins are a type of protein used by nerve cells to communicate with each other. Since their discovery at UT Southwestern more than 15 years ago, they have been found to regulate a number of behaviors, including arousal, appetite, reward, energy expenditure, and wakefulness. Orexin deficiency, for example, causes narcolepsy – spontaneous daytime sleepiness. Thus, orexin antagonists are promising treatments for insomnia, some of which have been tested in Phase III clinical trials.
UT Southwestern researchers, working with colleagues in Japan, now have found that mice lacking orexins also have very thin and fragile bones that break easily because they have fewer cells called osteoblasts, which are responsible for building bones.
“Osteoporosis is highly prevalent, especially among post-menopausal women. We are hoping that we might be able to take advantage of the already available orexin-targeting small molecules to potentially treat osteoporosis,” said Dr. Yihong Wan, Assistant Professor of Pharmacology, the Virginia Murchison Linthicum Scholar in Medical Research, and senior author for the study, published in the journal Cell Metabolism.
Osteoporosis, the most common type of bone disease in which bones become fragile and susceptible to fracture, affects more than 10 million Americans. The disease, which disproportionately affects seniors and women, leads to more than 1.5 million fractures and some 40,000 deaths annually. In addition, the negative effects impact productivity, mental health, and quality of life. One in five people with hip fractures, for example, end up in nursing homes.
Orexins seem to play a dual role in the process: they both promote and block bone formation. On the bones themselves, orexins interact with another protein, orexin receptor 1 (OX1R), which decreases the levels of the hunger hormone ghrelin. This slows down the production of new osteoblasts and, therefore, blocks bone formation locally. At the same time, orexins interact with orexin receptor 2 (OX2R) in the brain. In this case, the interaction reduces the circulating levels of leptin, a hormone known to decrease bone mass, and thereby promotes bone formation. Therefore, osteoporosis prevention and treatment may be achieved by either inhibiting OX1R or activating OX2R.
“We were very intrigued by this yin-yang-style dual regulation,” said Dr. Wan, a member of the Cecil H. and Ida Green Center for Reproductive Biology Sciences and UT Southwestern’s Harold C. Simmons Comprehensive Cancer Center. “It is remarkable that orexins manage to regulate bone formation by using two different receptors located in two different tissues.”
Central nervous system regulation through OX2R, and therefore promotion of bone formation, proved dominant over regulation through OX1R: when the group examined mice lacking both OX1R and OX2R, the animals had very fragile bones with decreased bone formation. Conversely, mice that expressed high levels of orexins had increased numbers of osteoblasts and enhanced bone formation.
In a remarkable series of experiments on a fungus that causes cryptococcal meningitis, a deadly infection of the membranes that cover the spinal cord and brain, investigators at UC Davis have isolated a protein that appears to be responsible for the fungus’ ability to cross from the bloodstream into the brain.

The discovery — published online June 3 in mBio, the open-access, peer-reviewed journal of the American Society for Microbiology — has important implications for developing a more effective treatment for Cryptococcus neoformans, the cause of the condition, and other brain infections, as well as for brain cancers that are difficult to treat with conventional medications.
“This study fills a significant gap in our understanding of how C. neoformans crosses the blood-brain barrier and causes meningitis,” said Angie Gelli, associate professor of pharmacology at UC Davis and principal investigator of the study. “It is our hope that our findings will lead to improved treatment for this fungal disease as well as other diseases of the central nervous system.”
Normally the brain is protected from bacterial, viral and fungal pathogens in the bloodstream by a tightly packed layer of endothelial cells lining capillaries within the central nervous system — the so-called blood-brain barrier. Relatively few organisms — and drugs that could fight brain infections or cancers — can breach this protective barrier.
The fungus studied in this research causes cryptococcal meningoencephalitis, a usually fatal brain infection that annually affects some 1 million people worldwide, most often those with an impaired immune system. People typically first develop an infection in the lungs after inhalation of the fungal spores of C. neoformans in soil or pigeon droppings. The pathogen then spreads to the brain and other organs.
Unique protein identified
In an effort to discover how C. neoformans breaches the blood-brain barrier, the investigators isolated candidate proteins from the cryptococcal cell surface. One was a previously uncharacterized metalloprotease that they named Mpr1. (A protease is an enzyme — a specialized protein — that promotes a chemical reaction; a metalloprotease contains a metal ion — in this case zinc — that is essential for its activity.) The M36 class of metalloproteases to which Mpr1 belongs is unique to fungi and does not occur in mammalian cells.
The investigators next artificially generated a strain of C. neoformans that lacked Mpr1 on the cell surface. Unlike the normal wild-type C. neoformans, the strain without Mpr1 could not cross an artificial model of the human blood-brain barrier.
They next took a strain of common baking yeast — Saccharomyces cerevisiae — that does not cross the blood-brain barrier and does not normally express Mpr1, and modified it to express Mpr1 on its cell surface. This strain then gained the ability to cross the blood-brain barrier model.
Investigators then infected mice with either the C. neoformans strain that lacked Mpr1 or the wild-type strain by injecting the organisms into their bloodstream. Comparing the brain pathology of the mice 48 hours later, they found numerous cryptococci-filled cysts throughout the brain tissue of mice infected with the wild-type strain; these lesions were undetectable in those infected with the strain lacking Mpr1. In another experiment, 37 days after infection by the inhalation route, 85 percent of the mice exposed to the wild-type C. neoformans had died, while all of those given the fungus lacking Mpr1 were still alive.
“Our studies are the first clear demonstration of a specific role for a fungal protease in invading the central nervous system,” said Gelli. “Exactly how it crosses is an important new area under investigation.”
New targeted therapies possible
According to Gelli, the discovery has significant therapeutic potential in two important ways. First, Mpr1 — or an aspect of the mechanism by which it crosses the blood-brain barrier — could be a target of new drugs for treating meningitis caused by C. neoformans. In a person who develops a cryptococcal lung infection, such a treatment would ideally make the fungus less likely to enter the brain and cause a rapidly fatal meningitis.
Secondly, Mpr1 could be developed as part of a drug-delivery vehicle for brain infections and cancers. An antibiotic or cancer-fighting drug that is unable to cross the blood-brain barrier on its own could be attached to a nanoparticle containing Mpr1, allowing it to hitch a ride and deliver its goods to where it is needed.
“The biggest obstacle to treating many brain cancers and infections is getting good drugs through the blood-brain barrier,” said Gelli. “If we could design an effective delivery system into the brain, the impact would be enormous for treating some of these terrible diseases.”
Gelli’s group is currently pursuing such a nanoparticle drug-delivery system using Mpr1. They are also further investigating the exact molecular mechanism by which Mpr1 breaches the blood-brain barrier.
In a new study, scientists at the National Institutes of Health took a molecular-level journey into microtubules, the hollow cylinders inside brain cells that act as skeletons and internal highways. They watched how a protein called tubulin acetyltransferase (TAT) labels the inside of microtubules. The results, published in Cell, answer long-standing questions about how TAT tagging works and offer clues as to why it is important for brain health.

(Image caption: NIH scientists watched the inside of brain cell tubes, called microtubules, get tagged by a protein called TAT. Tagging is a critical process in the health and development of nerve cells. Credit: Courtesy of the Roll-Mecak lab, NINDS, Bethesda, MD)
Microtubules are constantly tagged by proteins in the cell to designate them for specialized functions, in the same way that roads are labeled for fast or slow traffic or for maintenance. TAT coats specific locations inside the microtubules with a chemical called an acetyl group. How the various labels are added to the cellular microtubule network remains a mystery. Recent findings suggested that problems with tagging microtubules may lead to some forms of cancer and nervous system disorders, including Alzheimer’s disease, and have been linked to a rare blinding disorder and Joubert Syndrome, an uncommon brain development disorder.
“This is the first time anyone has been able to peer inside microtubules and catch TAT in action,” said Antonina Roll-Mecak, Ph.D., an investigator at the NIH’s National Institute of Neurological Disorders and Stroke (NINDS), Bethesda, Maryland, and the leader of the study.
Microtubules are found in all of the body’s cells. They are assembled like building blocks, using a protein called tubulin. Microtubules are constructed first by aligning tubulin building blocks into long strings. Then the strings align themselves side by side to form a sheet. Eventually the sheet grows wide enough that it closes up into a cylinder. TAT then bonds an acetyl group to alpha tubulin, a subunit of the tubulin protein.
Some microtubules are short-lived and can rapidly change lengths by adding or removing tubulin pieces along one end, whereas others remain unchanged for longer times. Recognizing the difference may help cells function properly. For example, cells may send cargo along stable microtubules and avoid ones that are being rebuilt. Cells appear to use a variety of chemical labels to describe the stability of microtubules.
“Our study uncovers how TAT may help cells distinguish between stable microtubules and ones that are under construction,” said Dr. Roll-Mecak. According to Dr. Roll-Mecak, high levels of microtubule tagging are unique to nerve cells and may be the reason that they have complex shapes allowing them to make elaborate connections in the brain.
For decades scientists knew that the insides of long-lived microtubules were often tagged with acetyl groups by TAT. Changes in acetylation may influence the health of nerve cells. Some studies have shown that blocking this form of microtubule tagging leads to nerve defects, brain abnormalities or degeneration of nerve fibers. Since the discovery of microtubule acetylation, scientists have been puzzled about how TAT accesses the inside of the microtubules and how the tagging reaction happens.
To watch TAT at work, Dr. Roll-Mecak and her colleagues took high resolution movies of individual TAT molecules interacting with microtubules in real time. They saw that TAT surfs through the inside of microtubules and although it can find acetylation sites quickly, the process of adding the tag occurs very slowly.
In general, tagging reactions work like keys fitting into locks: the better the key fits, the faster the lock can open. Similarly, the rate of the reactions is determined by how well TAT molecules fit around tagging sites.
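This lock-and-key picture can be made concrete with a standard Arrhenius-style rate expression, in which a poorer fit corresponds to a higher activation barrier and hence an exponentially slower reaction. The barrier heights below are illustrative placeholders, not measured values for TAT.

```python
import math

R = 8.314   # gas constant, J/(mol*K)
T = 310.0   # roughly body temperature, K

def relative_rate(barrier_kj_per_mol):
    """Arrhenius factor: a higher activation barrier means an exponentially slower reaction."""
    return math.exp(-barrier_kj_per_mol * 1000.0 / (R * T))

# A snug enzyme-substrate fit (lower barrier) reacts much faster than a
# poor fit (higher barrier); the 50 vs 70 kJ/mol figures are placeholders.
snug = relative_rate(50.0)
poor = relative_rate(70.0)
speedup = snug / poor  # how much faster the well-fitting reaction proceeds
```

Even a modest difference in barrier height translates into orders of magnitude in reaction rate, which is why a poorly fitting TAT-tubulin interface would tag only slowly.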
Dr. Roll-Mecak’s team investigated this idea by using a technique called X-ray crystallography to look at how atoms on TAT molecules interact with acetylation sites on tubulin molecules. Their results suggested that TAT fit poorly around the sites.
“It looks as though TAT can easily journey through microtubules spotting acetylation sites but may only label those that are stable for longer periods of time,” said Dr. Roll-Mecak.
This may help cells identify the microtubules they need to rapidly change shapes or send cargo to other places. Further studies may help researchers understand how microtubule tagging influences nerve cells in health and disease.
Researchers have determined that a copper compound known for decades may form the basis for a therapy for amyotrophic lateral sclerosis (ALS), or Lou Gehrig’s disease.
In a new study just published in the Journal of Neuroscience, scientists from Australia, the United States (Oregon), and the United Kingdom showed in laboratory animal tests that oral intake of this compound significantly extended the lifespan and improved the locomotor function of transgenic mice that are genetically engineered to develop this debilitating and terminal disease.
In humans, no therapy for ALS has ever been discovered that could extend lifespan more than a few additional months. Researchers in the Linus Pauling Institute at Oregon State University say this approach has the potential to change that, and may have value against Parkinson’s disease as well.
“We believe that with further improvements, and following necessary human clinical trials for safety and efficacy, this could provide a valuable new therapy for ALS and perhaps Parkinson’s disease,” said Joseph Beckman, a distinguished professor of biochemistry and biophysics in the OSU College of Science.
“I’m very optimistic,” said Beckman, who received the 2012 Discovery Award from the OHSU Medical Research Foundation as the leading medical researcher in Oregon.
ALS was first identified as a progressive and fatal neurodegenerative disease in the late 1800s and gained international recognition in 1939 when it was diagnosed in American baseball legend Lou Gehrig. It is caused by the deterioration and death of motor neurons in the spinal cord, and has been traced to mutations in copper-zinc superoxide dismutase, or SOD1. Ordinarily, superoxide dismutase is an antioxidant whose proper function is essential to life.
When SOD1 lacks its metal co-factors, it “unfolds” and becomes toxic, leading to the death of motor neurons. The metals copper and zinc are important in stabilizing this protein and can help it remain folded for more than 200 years.
“The damage from ALS is happening primarily in the spinal cord and that’s also one of the most difficult places in the body to absorb copper,” Beckman said. “Copper itself is necessary but can be toxic, so its levels are tightly controlled in the body. The therapy we’re working toward delivers copper selectively into the cells in the spinal cord that actually need it. Otherwise, the compound keeps copper inert.”
“This is a safe way to deliver a micronutrient like copper exactly where it is needed,” Beckman said.
By restoring a proper balance of copper into the brain and spinal cord, scientists believe they are stabilizing the superoxide dismutase in its mature form, while improving the function of mitochondria. This has already extended the lifespan of affected mice by 26 percent, and with continued research the scientists hope to achieve even more extension.
The compound that does this, copper (ATSM), has been studied for use in some cancer treatments and is relatively inexpensive to produce.
“In this case, the result was just the opposite of what one might have expected,” said Blaine Roberts, lead author on the study and a research fellow at the University of Melbourne, who received his doctorate at OSU working with Beckman.
“The treatment increased the amount of mutant SOD, and by accepted dogma this means the animals should get worse,” he said. “But in this case, they got a lot better. This is because we’re making a targeted delivery of copper just to the cells that need it.
“This study opens up a previously neglected avenue for new disease therapies, for ALS and other neurodegenerative disease,” Roberts said.
A new study from Karolinska Institutet shows that a part of the nervous system, the parasympathetic nervous system, is formed in a way that is different from what researchers previously believed. In this study, which is published in the journal Science, a new phenomenon is investigated within the field of developmental biology, and the findings may lead to new medical treatments for congenital disorders of the nervous system.

Almost all of the body’s functions are controlled by the autonomic, involuntary nervous system, for example the heart and blood vessels, liver and gastrointestinal system. At rest, the body is set up for energy-saving functions, which are regulated by the parasympathetic part of the autonomic nervous system.
Current understanding is that many important types of cells, including the parasympathetic nerve cells in various organs, originate in early progenitor cells that move short distances while the embryo is still small. But this model does not explain how many of our organs – which develop relatively late, when the embryo is large – are furnished with cells that form the parasympathetic neurons.
This study alters a fundamental principle of our understanding of how the peripheral nervous system develops in the body. Researchers at Karolinska Institutet have made three-dimensional reconstructions of mouse embryos. These show that the parasympathetic neurons are formed from immature glial cells known as Schwann cell precursors, which travel along the peripheral nerves out to the body’s tissues and organs. These immature cells have the properties of stem cells and can give rise to several different cell types; for example, the researchers behind this new study have previously demonstrated that the majority of our melanocytes (pigment cells) are born from these cells.
New principle of developmental biology
"Our study focuses on a new principle of developmental biology, a targeted recruitment of cells that is probably also used in the reconstruction of tissue. Despite the elegance, simplicity and beauty of this principle, it is still unclear how the number of parasympathetic neurons is controlled and why only some of the cells transported along nerves are transformed into what becomes an important part of the nervous system", says Igor Adameyko at the Department of Physiology and Pharmacology who, together with Patrik Ernfors at the Department of Medical Biochemistry and Biophysics, is responsible for the study.
Somewhat surprisingly, the researchers found that the entire parasympathetic nervous system arises from these progenitor cells that travel along the peripheral nerves. The researchers hope that this discovery will open up the possibility of new ways to treat congenital disorders of the autonomic nervous system using regenerative medicine.
In a new study published online in the Journal of the American Heart Association June 12, 2014, researchers at Columbia Engineering report that they have identified a new component of the biological mechanism that controls blood flow in the brain. Led by Elizabeth M. C. Hillman, associate professor of biomedical engineering, the team has demonstrated, for the first time, that the vascular endothelium plays a critical role in the regulation of blood flow in response to stimulation in the living brain.

(Image caption: In-vivo two-photon microscopy image of endothelial cells lining surface arteries in the brain (green, TIE-2/GFP). Red cells are astrocytes labeled with sulphorhodamine. New results suggest that the continuous pathway of endothelial cells within the brain’s arteries is essential for propagating signals that orchestrate local dilation and increases in blood flow in response to local neuronal activity. Credit: Image courtesy of Elizabeth Hillman)
“We think we’ve found a missing link in our understanding of how the brain dynamically tunes its blood flow to stay in sync with the activity of neurons,” says Hillman, who has a joint appointment in Radiology. She is also a member of the Zuckerman Mind Brain Behavior Institute and the Kavli Institute for Brain Science at Columbia. Hillman has spent more than 10 years using advanced imaging tools to study how blood flow is controlled in the brain. “Earlier studies identified small pieces of the puzzle, but we didn’t believe they formed a cohesive ‘big picture’ that unified everybody’s observations. Our new finding seems to really connect the dots.”
Understanding how and why the brain regulates its blood flow could provide important clues to understanding early brain development, disease, and aging. The brain increases local blood flow when neurons fire, and this increase is what is detected by a functional magnetic resonance imaging (fMRI) scan. Hillman found that the vascular endothelium, the inner layer of blood vessels, plays a critical role in propagating and shaping the blood flow response to local neuronal activity. While the vascular endothelium is known to do this in other areas of the body, until now the brain was thought to use a different, more specialized mechanism and researchers in the field were focused on the cells surrounding the vessels in the brain.
“Once we realized the importance of endothelial signaling in the regulation of blood flow in the brain,” Hillman adds, “we wondered whether overlooking the vascular endothelium might have led researchers to misinterpret their results.”
“As we identified this pathway, so many things fell into place,” she continues, “We really hope that our work will encourage others to take a closer look at the vascular endothelium in the brain. So far, we think that our findings have far-reaching and really exciting implications for neuroscience, neurology, cardiovascular medicine, radiology, and our overall understanding of how the brain works.”
This research was carried out in Hillman’s Laboratory for Functional Optical Imaging, led by PhD student and lead author on the study, Brenda Chen. Other lab members who assisted with the study included PhD and MD/PhD students from Columbia Engineering, Neurobiology and Behavior, and Columbia University Medical Center. The group combined their engineering skills with their expertise in neuroscience, biology, and medicine to understand this new aspect of brain physiology.
To tease apart the role of endothelial signaling in the living brain, they had to develop new ways both to image the brain at very high speeds and to selectively alter the ability of endothelial cells to propagate signals within intact vessels. The team achieved this through a range of optical techniques, including imaging with a high-speed camera under synchronized, strobed LED illumination to capture changes in the color, and thus the oxygenation level, of flowing blood. Focused laser light was used in combination with a fluorescent dye within the bloodstream to cause oxidative damage to the inner endothelial layer of brain arterioles while leaving the rest of the vessel intact and responsive. The team showed that, after a small section of a vessel was damaged with the laser, the vessel no longer dilated beyond the damaged point. When the endothelium of a larger number of vessels was targeted in the same way, the overall blood flow response of the brain to stimulation was significantly decreased.
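The color-to-oxygenation step mentioned above can be sketched with the standard two-wavelength Beer-Lambert unmixing used in intrinsic optical imaging: absorbance at each wavelength is a weighted sum of oxy- and deoxyhemoglobin concentrations, so measuring at two wavelengths lets you solve for both. This is a generic illustration, not the authors' code, and the extinction coefficients below are made-up placeholders rather than tabulated hemoglobin values.

```python
# Hedged sketch of two-wavelength oxygenation estimation. By Beer-Lambert,
#   A(wavelength) ~ e_HbO * [HbO] + e_HbR * [HbR]
# so two absorbance measurements give a 2x2 linear system in the two
# hemoglobin concentrations. Extinction coefficients are illustrative only.

def unmix_hemoglobin(a1, a2, e_hbo1=1.0, e_hbr1=3.0, e_hbo2=2.5, e_hbr2=1.2):
    """Solve the 2x2 Beer-Lambert system for [HbO] and [HbR] by Cramer's rule."""
    det = e_hbo1 * e_hbr2 - e_hbr1 * e_hbo2
    hbo = (a1 * e_hbr2 - a2 * e_hbr1) / det
    hbr = (e_hbo1 * a2 - e_hbo2 * a1) / det
    return hbo, hbr

def saturation(hbo, hbr):
    """Oxygen saturation: fraction of total hemoglobin that is oxygenated."""
    return hbo / (hbo + hbr)
```

Applied frame by frame to high-speed camera data, this kind of unmixing is what turns color changes in flowing blood into an oxygenation readout.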
“Our finding unifies what is known about blood flow regulation in the rest of the body with how it is regulated in the brain,” Hillman explains. “This has wide-reaching implications, since there are many disease states known to affect blood flow regulation in the rest of the body that, until now, were not expected to directly affect brain health.” For instance, involvement of the endothelium might explain neural deficits in diabetics; a clue that could lead to new diagnostic tests and treatments for neurological conditions associated with broader cardiovascular problems.
“Improving our fundamental understanding of how and why the brain regulates its blood flow is key to understanding how and when this mechanism could be altered or broken,” she says. “We think this could extend to studies of early brain development, aging, and diseases such as Alzheimer’s and dementia.”
The team’s research findings may also explain the effects of some drugs on the brain, and on the fMRI response to stimulation, since the vascular endothelium is exposed to chemicals in the bloodstream. “Overall, this work could dramatically improve our ability to interpret fMRI data collected in humans, perhaps making it a better tool for doctors to understand brain disease,” she adds. Hillman’s work in this area is also featured in an upcoming review in the 2014 edition of the Annual Review of Neuroscience, as well as an article in Scientific American MIND (July/August 2014).
Hillman plans next to address the broad range of implications her latest finding may have. She wants to explore the effects of drugs and disease states on the coupling of blood flow to neuronal activity in the brain, and is now starting studies to explore fMRI data from a range of different disease states to see whether she can find signs of neurovascular dysfunction. She is also working on characterizing the co-evolution of neuronal and hemodynamic activity during brain development and is beginning to develop new imaging tools that will enable non-invasive, inexpensive monitoring of brain hemodynamics in infants and children who cannot be imaged within an MRI scanner.
“Our latest finding gives us a new way of thinking about brain disease—that some conditions assumed to be caused by faulty neurons could actually be problems with faulty blood vessels,” Hillman adds. “This gives us a new target to focus on to explore treatments for a wide range of disorders that have, until now, been thought of as impossible to treat. The brain’s vasculature is a critical partner in normal brain function. We hope that we are slowly getting closer to untangling some of the mysteries of the human brain.”
Fighting off illness, rather than the illness itself, causes sleep deprivation and affects memory, a new study has found.
University of Leicester biologist Dr Eamonn Mallon said a common perception is that if you are sick, you sleep more.
But the study, carried out in flies, found that sickness-induced insomnia is quite common.

The research has been published in the journal PeerJ.
Dr Mallon said: “Think about when you are sick. Your sleep is disturbed and you’re generally not feeling at your sharpest. Previous work has shown that being infected leads to exactly these behaviours in fruit flies.
“In this paper we show that it can be the immune system itself that can cause these problems. By turning on the immune system in flies artificially (with no infection present) we reduced how long they slept and how well they performed in a memory test.
“This is an interesting result as these connections between the brain and the immune system have come to the fore recently in medicine. It seems to be because the two systems speak the same chemical language and often cross-talk. Having a model of this in the fly, one of the main systems used in genetic research, will be a boost to the field.
“The key message of this study is that the immune response, sleep and memory seem to be intimately linked. Medicine is beginning to study these links between the brain and the immune system in humans. Having an easy-to-use insect model would be very helpful.”
Traumatic brain injuries from sports, recreational activities, falls or car accidents are the leading cause of death and disability in children and adolescents. While previously it was believed that the window for brain recovery was at most one year after injury, new research from the Center for BrainHealth at The University of Texas at Dallas published online today in the open-access journal Frontiers in Neurology shows cognitive performance can be improved to significant degrees months, and even years, after injury, given targeted brain training.

"The after-effects of concussions and more severe brain injuries can be very different and more detrimental to a developing child or adolescent brain than an adult brain," said Dr. Lori Cook, study author and director of the Center for BrainHealth’s pediatric brain injury programs. "While the brain undergoes spontaneous recovery in the immediate days, weeks, and months following a brain injury, cognitive deficits may continue to evolve months to years after the initial brain insult when the brain is called upon to perform higher-order reasoning and critical thinking tasks."
Twenty adolescents ages 12 to 20 were enrolled in the study; all had experienced a traumatic brain injury at least six months before participating and were demonstrating gist reasoning deficits, or the inability to “get the essence” of dense information. The participants were randomized into two different cognitive training groups: strategy-based gist reasoning training versus fact-based memory training.
Participants completed eight 45-minute sessions over a one-month period. Researchers compared the effects of the two forms of training on the ability to abstract meaning and recall facts. Testing included pre- and post-training assessments, in which adolescents were asked to read several texts and then craft a high-level summary, drawing upon inferences to transform ideas into novel, generalized statements, and to recall important facts.
After training, only the gist-reasoning group showed significant improvement in the ability to abstract meanings, a cognitive skill foundational to everyday functioning. Additionally, the gist-reasoning-trained group showed significant generalized gains in untrained areas, including the executive functions of working memory (i.e., holding information in mind for use, such as performing mental addition or subtraction) and inhibition (i.e., filtering out irrelevant information). The gist-reasoning training group also demonstrated increased memory for facts, even though this skill was not specifically targeted in training.
"These preliminary results are promising in that higher-order cognitive training that focuses on ‘big picture’ thinking improves cognitive performance in ways that matter to everyday life success," said Dr. Cook. "What we found was that training higher-order cognitive skills can have a positive impact on untrained key executive functions as well as lower-level, but also important, processes such as straightforward memory, which is used to remember details. While the study sample was small and a larger trial is needed, the real-life application of this training program is especially important for adolescents who are at a very challenging life-stage when they face major academic and social complexities. These cognitive challenges require reasoning, filtering, focusing, planning, self-regulation, activity management and combating ‘information overload,’ which is one of the chief complaints that teens with concussions express."
This research advances best practices by suggesting changes to common treatment schedules for traumatic brain injury and concussion. The ability to achieve cognitive gains through a brain training treatment regimen at chronic stages of brain injury (six months or longer) supports the need to monitor brain recovery annually and offer treatment when deficits persist or emerge later.
"Brain injuries require routine follow-up monitoring. We need to make sure that optimized brain recovery continues to support later cognitive milestones, and that is especially true in the case of adolescents," said Dr. Sandra Bond Chapman, study author, founder and chief director of the Center for BrainHealth and Dee Wyly Distinguished University Chair at The University of Texas at Dallas. "What’s promising is that no matter the severity of the injury or the amount of time since injury, brain performance improved when teens were taught how to strategically process incoming information in a meaningful way, instead of just focusing on rote memorization."
How neurons are created and integrate with each other is one of biology’s greatest riddles. Researcher Dietmar Schmucker from VIB-KU Leuven unravels a part of the mystery in the journal Science. He describes a mechanism that explains novel aspects of how the wiring of highly branched neurons in the brain works. These new insights into how complex neural networks are formed are very important for understanding and treating neurological diseases.

Neurons, or nerve cells
It is estimated that a person has 100 billion neurons, or nerve cells. These neurons have thin, elongated, highly branched offshoots called dendrites and axons. They are the body’s information and signal processors. The dendrites receive electrical impulses from the other neurons and conduct these to the cell body. The cell body then decides whether stimuli will or will not be transferred to other cells via the axon.
The brain’s wiring is very complex. Although the molecular mechanisms that explain the linear connection between neurons have already been described numerous times, little is as yet known about how the branched wiring works in the brain.
The connections between nerve cells
Prior research by Dietmar Schmucker and his team led to the identification of the Dscam1 protein in the fruit fly. The neuron can create many different protein variations, or isoforms, from this same protein. The specific set of isoforms present on a neuron’s cell surface determines the neuron’s unique molecular identity and plays an important role in the establishment of accurate connections. In other words, it explains why certain neurons either come into contact with each other or reject each other.
Recent work by Haihuai He and Yoshiaki Kise from Dietmar’s team indicates that different sets of Dscam1 isoforms occur within a single axon, distinguishing its newly formed offshoots from one another. If this were not the case, only linear connections could form between neurons. These results show for the first time why different sets of variations of the same protein can occur in one neuron, and they could explain mechanistically how this contributes to the complex wiring of our brain.
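The logic can be pictured as a toy matching rule in which two branches repel when their isoform sets match closely, while branches with different sets can coexist and form contacts. The isoform names and overlap threshold below are purely hypothetical illustrations, not measurements from the study:

```python
def repels(isoforms_a, isoforms_b, threshold=0.5):
    """Toy self-avoidance rule: two branches repel when their Dscam1
    isoform sets overlap strongly (Jaccard overlap above a hypothetical
    threshold); real recognition is far more specific than this sketch."""
    overlap = len(isoforms_a & isoforms_b) / len(isoforms_a | isoforms_b)
    return overlap >= threshold

# Sister branches expressing identical isoform sets repel each other...
assert repels({"isoA", "isoB"}, {"isoA", "isoB"})
# ...while branches with distinct sets do not, allowing branched wiring.
assert not repels({"isoA", "isoB"}, {"isoC", "isoD"})
```

Under such a rule, giving different offshoots of one axon different isoform sets is exactly what lets them avoid repelling one another, consistent with the branched (rather than purely linear) connectivity the team describes.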
Clinical impact
Although this research was done with fruit flies, it also provides new insights that help explain the wiring and complex interactions of the human brain and shine new light on neurodevelopmental disorders such as autism. Thorough knowledge of nerve cell creation and their neural interactions is considered essential for the future possibility of using stem cell therapy as a standard treatment for certain nervous system disorders.
Questions
Given that this research may raise many questions, the VIB has made an email address available for this purpose. All questions regarding this and other medical research can be directed to: patients@vib.be.
Relevant scientific publication
The above-mentioned research was published in the journal Science.
A recently published clinical study in the Journal of the American Medical Association has answered an urgent question that long puzzled ER pediatricians: Is the drug lorazepam really safer and more effective than diazepam, the U.S. Food and Drug Administration-approved medication most often used by emergency room doctors as first-line therapy to control major epileptic seizures in children?
The answer to that question, based on a double-blind, randomized clinical trial that compared outcomes in 273 seizure patients, about half of whom were given lorazepam, is a clear-cut no, said Prashant V. Mahajan, M.D., M.P.H., M.B.A., one of the authors of the study.
“The results of our clinical trial were very convincing, and they showed clearly that the two medications are just about equally effective and equally safe when it comes to treating status epilepticus [major epileptic brain seizures in children],” Dr. Mahajan said. “This is an important step forward for all of us who frequently treat kids in the ER for [epilepsy-related] seizures, since it answers the question about the best medication to use in ending the convulsions and getting these patients back to normal brain functioning.”
Describing the brain convulsions that were targeted by the study, its authors pointed out that status epilepticus occurs when an epilepsy-related seizure lasts more than 30 minutes. Such seizures, which occur in more than 10,000 U.S. pediatric epilepsy patients every year, can cause permanent brain damage or even death if allowed to persist.
Published in JAMA, the study, “Lorazepam vs Diazepam for Pediatric Status Epilepticus: A Randomized Clinical Trial,” was designed to test earlier assertions by many clinicians that lorazepam was more effective at controlling pediatric seizures. The study authors wrote, “Potential advantages proposed in some studies of lorazepam include improved effectiveness in terminating convulsions, longer duration of action compared with diazepam, and lower incidence of respiratory depression. Specific pediatric data comparing diazepam with lorazepam suggest that lorazepam might be superior, but they are limited to reports from single institutions or retrospective studies with small sample sizes, thus limiting generalizability.”
Based on data collected over four years at 11 different U.S. pediatric emergency departments, the new study found that “treatment with lorazepam [among pediatric patients with convulsive status epilepticus] did not result in improved efficacy or safety, compared with diazepam.”
That determination led the study authors to conclude: “These findings do not support the preferential use of lorazepam for this condition.”
Dr. Mahajan, a nationally recognized researcher in pediatric emergency medicine and a Wayne State University School of Medicine pediatrics professor recently appointed chair of the American Academy of Pediatrics Executive Committee of the Section on Emergency Medicine, said the JAMA study provides a compelling example of how effective research in pediatric medicine, based on treatment of patients right in the clinical setting, can play a major role in improving outcomes.
Children’s Hospital of Michigan Chief of Pediatrics Steven E. Lipshultz, M.D., said this recent breakthrough will undoubtedly result in better care for pediatric patients who present in the emergency room with seizures related to epilepsy.
“There’s no doubt that combining excellent research with excellent treatment is the key to achieving the highest-quality outcomes for patients, and Dr. Mahajan’s cutting-edge study is a terrific example of how kids are benefiting from the research that goes on here at Children’s every single day,” said Dr. Lipshultz.
Using a type of human stem cell, Johns Hopkins researchers say they have created a three-dimensional complement of human retinal tissue in the laboratory, which notably includes functioning photoreceptor cells capable of responding to light, the first step in the process of converting it into visual images.

(Image caption: Rod photoreceptors (in green) within a “mini retina” derived from human iPS cells in the lab. Image courtesy of Johns Hopkins Medicine)
“We have basically created a miniature human retina in a dish that not only has the architectural organization of the retina but also has the ability to sense light,” says study leader M. Valeria Canto-Soler, Ph.D., an assistant professor of ophthalmology at the Johns Hopkins University School of Medicine. She says the work, reported online June 10 in the journal Nature Communications, “advances opportunities for vision-saving research and may ultimately lead to technologies that restore vision in people with retinal diseases.”
Like many processes in the body, vision depends on many different types of cells working in concert, in this case to turn light into something that can be recognized by the brain as an image. Canto-Soler cautions that photoreceptors are only part of the story in the complex eye-brain process of vision, and her lab hasn’t yet recreated all of the functions of the human eye and its links to the visual cortex of the brain. “Is our lab retina capable of producing a visual signal that the brain can interpret into an image? Probably not, but this is a good start,” she says.
The achievement emerged from experiments with human induced pluripotent stem cells (iPS) and could, eventually, enable genetically engineered retinal cell transplants that halt or even reverse a patient’s march toward blindness, the researchers say.
The iPS cells are adult cells that have been genetically reprogrammed to their most primitive state. Under the right circumstances, they can develop into most or all of the 200 cell types in the human body. In this case, the Johns Hopkins team turned them into retinal progenitor cells destined to form light-sensitive retinal tissue that lines the back of the eye.
Using a simple, straightforward technique they developed to foster the growth of the retinal progenitors, Canto-Soler and her team saw retinal cells and then tissue grow in their petri dishes, says Xiufeng Zhong, Ph.D., a postdoctoral researcher in Canto-Soler’s lab. The growth, she says, corresponded in timing and duration to retinal development in a human fetus in the womb. Moreover, the photoreceptors were mature enough to develop outer segments, a structure essential for photoreceptors to function.
Retinal tissue is complex, comprising seven major cell types, including six kinds of neurons, which are all organized into specific cell layers that absorb and process light, “see,” and transmit those visual signals to the brain for interpretation. The lab-grown retinas recreate the three-dimensional architecture of the human retina. “We knew that a 3-D cellular structure was necessary if we wanted to reproduce functional characteristics of the retina,” says Canto-Soler, “but when we began this work, we didn’t think stem cells would be able to build up a retina almost on their own. In our system, somehow the cells knew what to do.”
When the retinal tissue was at a stage equivalent to 28 weeks of development in the womb, with fairly mature photoreceptors, the researchers tested these mini-retinas to see if the photoreceptors could in fact sense and transform light into visual signals.
They did so by placing an electrode into a single photoreceptor cell and then giving a pulse of light to the cell, which reacted in a biochemical pattern similar to the behavior of photoreceptors in people exposed to light.
Specifically, she says, the lab-grown photoreceptors responded to light the way retinal rods do. Human retinas contain two major photoreceptor cell types called rods and cones. The vast majority of photoreceptors in humans are rods, which enable vision in low light. The retinas grown by the Johns Hopkins team were also dominated by rods.
Canto-Soler says that the newly developed system gives them the ability to generate hundreds of mini-retinas at a time directly from a person affected by a particular retinal disease such as retinitis pigmentosa. This provides a unique biological system to study the cause of retinal diseases directly in human tissue, instead of relying on animal models.
The system, she says, also opens an array of possibilities for personalized medicine such as testing drugs to treat these diseases in a patient-specific way. In the long term, the potential is also there to replace diseased or dead retinal tissue with lab-grown material to restore vision.
When it comes to familiarity, a slew of memories, including seemingly unrelated ones, can come flooding into the brain, according to mathematical theories called global similarity models.

After conducting an fMRI study on memory and categorization, researchers including a Texas Tech University psychologist have shown for the first time that these mathematical models seem to correctly explain processing in the medial temporal lobes, a region of the brain associated with long-term memory that is disrupted by memory disorders like Alzheimer’s disease.
The findings were published in The Journal of Neuroscience.
Tyler Davis, assistant director of Texas Tech’s Neuroimaging Institute and an assistant professor of psychology, specializes in neurobiological approaches to learning and memory. He was part of a team that delved into global similarity models.
“Since at least the 1980s, scientists researching memory have believed that when a person finds someone’s face or a new experience familiar, that person is not simply retrieving a memory of only this previous experience, but memories of many other related and unrelated experiences as well,” Davis said. “Formal mathematical theories of memory called global similarity models suggest that when we judge familiarity, we match an experience, such as a face or a trip to a restaurant, to all of the memories that we have stored in our brains. Our recent work using fMRI suggests these models are correct.”
People may believe when they see someone’s familiar face or take a trip to a familiar restaurant, they only activate the most similar or recent memories for comparison. However, Davis said this is not the case. According to global similarity models, the feeling of familiarity for the taste of brisket at a particular restaurant draws on a spectrum of memories that a person has stored in his or her brain.
Eating the brisket can activate memories not only of a previous trip to that restaurant, but also of the décor, eating brisket at a similar restaurant, what that person’s home-cooked brisket tastes like and even seemingly tangential memories such as a recent trip to another city.
“In terms of global similarity theories and our new findings, the important thing is when you are judging familiarity, your brain doesn’t just retrieve the most relevant memories but many other memories as well,” Davis said. “This seems counter-intuitive to how memory feels. We often feel like we are just retrieving that previous trip to that one particular restaurant when we are asked whether we’d been there before, but there is a lot of behavioral evidence that we activate many other memories as well when we judge familiarity.”
This does not mean that every memory we have stored contributes to familiarity in the same way. The more similar a previous memory is to the current experience, the more it will contribute to judgments of familiarity.
In terms of the brisket example, Davis said, previous trips to the restaurant are going to impact the familiarity more than dissimilar memories, such as the recent trip out of town. However, these other, less-related experiences can still have a measurable effect on judgments of familiarity.
In his recent research, Davis and others used fMRI to examine how memory similarity related to behavioral measures of familiarity, in terms of activation patterns in the medial temporal lobes.
“We found that people’s memory for the items in our experiments was related to their activation patterns in the medial temporal lobes in a manner that was anticipated by mathematical global similarity models,” Davis said. “The more similar the activation pattern for an item was to all of the other activation patterns, the more strongly people remembered it. This is consistent with global similarity models, which suggest that the items that are most similar to all other items stored in memory will be most familiar.”
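In spirit, a global similarity model scores familiarity by summing a decaying function of the distance between the current experience (a probe) and every stored memory, so close matches dominate but every memory contributes a little. A minimal sketch with made-up feature vectors and decay parameter, not the authors' actual model:

```python
import numpy as np

def familiarity(probe, memories, c=1.0):
    """Summed-similarity familiarity in the spirit of global similarity
    models: similarity decays exponentially with distance, and every
    stored memory contributes to the total."""
    distances = np.linalg.norm(memories - probe, axis=1)
    return float(np.sum(np.exp(-c * distances)))

# Three stored "memories" as points in a toy 2-D feature space.
memories = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])

# A probe near one stored memory feels more familiar than a probe
# far from all of them, yet even distant memories add a small amount.
near = familiarity(np.array([0.1, 0.1]), memories)
far = familiarity(np.array([10.0, 10.0]), memories)
assert near > far
```

The fMRI analogue is the same computation with neural activation patterns playing the role of the feature vectors: items whose medial temporal lobe pattern is similar to the patterns of many other items score as most familiar.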
The findings suggest that global similarity models may have a neurobiological basis, he said. This is evidence that similarity, in terms of neural processing, may impact memory. People may find things familiar not just because they are identical to things we’ve previously experienced, but because they are similar to a number of things we’ve previously experienced.
The ability to hear soft speech in a noisy environment is difficult for many and nearly impossible for the 48 million people in the United States living with hearing loss. Researchers from Massachusetts Eye and Ear, Harvard Medical School and Harvard University programmed a new type of game that trained both mice and humans to enhance their ability to discriminate soft sounds in noisy backgrounds. Their findings will be published in PNAS Online Early Edition the week of June 9-13, 2014.

In the experiment, adult humans and mice with normal hearing were trained on a rudimentary ‘audiogame’ inspired by sensory foraging behavior that required them to discriminate changes in the loudness of a tone presented in a moderate level of background noise. Their findings suggest new therapeutic options for clinical populations that receive little benefit from conventional sensory rehabilitation strategies.
“Like the children’s game ‘hot and cold’, our game provided instantaneous auditory feedback that allowed our human and mouse subjects to home in on the location of a hidden target,” said senior author Daniel Polley, Ph.D., director of the Mass. Eye and Ear’s Amelia Peabody Neural Plasticity Unit of the Eaton-Peabody Laboratories and assistant professor of otology and laryngology at Harvard Medical School. “Over the course of training, both species learned adaptive search strategies that allowed them to more efficiently convert noisy, dynamic audio cues into actionable information for finding the target. To our surprise, human subjects who mastered this simple game over the course of 30 minutes of daily training for one month exhibited a generalized improvement in their ability to understand speech in noisy background conditions. Comparable improvements in the processing of speech in high levels of background noise were not observed for control subjects who heard the sounds of the game but did not actually play the game.”
The researchers recorded the electrical activity of neurons in auditory regions of the mouse cerebral cortex to gain some insight into how training might have boosted the ability of the brain to separate signal from noise. They found that training substantially altered the way the brain encoded sound.
In trained mice, many neurons became highly sensitive to faint sounds that signaled the location of the target in the game. Moreover, neurons displayed increased resistance to noise suppression; they retained an ability to encode faint sounds even under conditions of elevated background noise.
“Again, changes of this ilk were not observed in control mice that watched (and listened to) their counterparts play the game. Active participation in the training was required; passive listening was not enough,” Dr. Polley said.
These findings illustrate the utility of brain training exercises that are inspired by careful neuroscience research. “When combined with conventional assistive devices such as hearing aids or cochlear implants, ‘audiogames’ of the type we describe here may be able to provide the hearing impaired with an improved ability to reconnect to the auditory world. Of particular interest is the finding that brain training improved speech processing in noisy backgrounds – a listening environment where conventional hearing aids offer limited benefit,” concluded Dr. Jonathon Whitton, lead author on the paper. Dr. Whitton is a principal investigator at the Amelia Peabody Neural Plasticity Unit and affiliated with the Program in Speech and Hearing Bioscience and Technology, Harvard–Massachusetts Institute of Technology Division of Health Sciences and Technology.
Anesthesia makes otherwise painful procedures possible by derailing a conscious brain, rendering it incapable of sensing or responding to a surgeon’s knife. But little research exists on what happens when the drugs wear off.

(Image caption: Unconscious states. New findings suggest the anesthetized brain must pass through certain ‘way stations’ on the path back to consciousness. Above, the prevalence of particular clusters of brain activity states as recorded in rats that had been administered an anesthetic. The longest appear in red and the shortest in yellow and green.)
“I always found it remarkable that someone can recover from anesthesia, not only that you blink your eyes and can walk around, but you return to being yourself. So if you learned how to do something on Sunday, and on Monday you have surgery, you wake up and you still know how to do it,” says Alexander Proekt, a visiting fellow in Don Pfaff’s Laboratory of Neurobiology and Behavior at Rockefeller University and an anesthesiologist at Weill Cornell Medical College. “It seemed like there ought to be some kind of guide or path for the system to follow.”
The obvious explanation is that as the anesthetic washes out of the body, electrical activity in the brain gradually returns to its conscious patterns. However, new research by Proekt and colleagues suggests the trip back is not so simple.
“Using statistical analysis, our research shows that the recovery from deep anesthesia is not a smooth, linear process. Instead, there are dynamic ‘way stations’ or states of activity the brain must temporarily occupy on the way to full recovery,” Pfaff says. “These results have implications for understanding how someone’s ability to recover consciousness can be disrupted by, for example, brain injury.”
Proekt, along with former postdoc Andrew Hudson, now an assistant professor in anesthesiology at the University of California, Los Angeles, and Diany Paola Calderon, a research associate in the lab, put rats “under” using the common medical and veterinary anesthetic isoflurane. As the rats recovered, the team monitored the electrical potential outside neurons, known as local field potentials (LFPs), in particular parts of the brain known, from previous electrophysiological and pharmacological studies, to be associated with wakefulness and anesthesia. These recordings gave them a sensitive handle on the activities of whole groups of neurons in particular parts of the thalamus and cortex.
In the awake brain, of both humans and rats, neurons generate electrical voltage that oscillates. Many of these oscillations together form a signal that appears as a squiggly line on a recording of brain activity, such as an LFP. When someone is asleep, under anesthesia, or in a coma, these oscillations occur more slowly, or at a low frequency. When he or she is awake, they speed up. The researchers examined the recordings from the rats’ brains to figure out how the electrical activity in these regions changed as they moved from anesthetized to awake.
“Recordings from each animal wound up having particular features that spontaneously appeared, suggesting their brain activity was abruptly transitioning through particular states,” Hudson says. “We analyzed the probability of a brain jumping from one state to another, and we found that certain states act as hubs through which the brain must pass to continue on its way to consciousness.” While the electrical activity in all the rats’ brains passed through these hubs, the precise path back to consciousness was not the same each time, the team reports today in the Proceedings of the National Academy of Sciences.
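This kind of hub analysis can be sketched by estimating a transition matrix from an observed sequence of discrete brain-activity states. The sequence, state labels, and hub criterion below are toy illustrations, not the paper's actual data or statistics:

```python
import numpy as np

def transition_matrix(states, n_states):
    """Estimate P(next state | current state) from a sequence of
    discrete state labels by counting observed transitions."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # leave rows for unvisited states at zero
    return counts / row_sums

# Toy trajectory: 0 = deep anesthesia, 2 = awake, 1 = an intermediate
# state the trajectory keeps passing through on the way to recovery.
seq = [0, 1, 0, 1, 2, 1, 2, 2]
P = transition_matrix(seq, 3)

# A hub is a state entered from many other states: its column in the
# transition matrix has many non-zero entries.
hub = int(np.argmax((P > 0).sum(axis=0)))
```

In this toy sequence state 1 is the hub, mirroring the finding that certain activity states are entered from many others and must be traversed before the brain reaches the awake state.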
“These results suggest there is indeed an intrinsic way in which the unconscious brain finds its way back to consciousness. The anesthetic is just a tool for severely reducing brain activity in a way we can control,” Hudson says.
In other scenarios, including coma caused by brain injury or neurological disease, the disruption to brain activity cannot be controlled, making these states much more difficult to study. However, the team’s results may help explain what is going on in these cases. “Maybe a pathway has shut down, or a brain structure that was key for full consciousness is no longer working. We don’t know yet, but our results suggest the possibility that under certain circumstances, someone may be theoretically capable of returning to consciousness but, due to the inability to transition through the hubs we have identified, his or her brain is unable to navigate the way back,” Calderon says.
Researchers at The University of Queensland have made a key step that could eventually offer hope for stroke survivors and other people with brain damage.

The international study, led by researchers at UQ, could help explain a debilitating neurological condition known as unilateral spatial neglect, which commonly occurs after a stroke causing damage to the right side of the brain.
People with this condition become unaware of the left side of their sensory world, making everyday tasks such as eating and dressing almost impossible to perform.
ARC Discovery Early Career Research Fellow Dr Marta Garrido from UQ’s Queensland Brain Institute (QBI) said this lack of awareness of the left side might be caused by an imbalanced brain network involving interactions between different brain regions.
“Patients with spatial neglect are impaired in attending to sensory information on the left or the right side of space, but this inability is a lot stronger for objects coming from the left,” she said.
“This research has enabled us to establish what happens in a healthy brain, so that we can then further understand exactly what goes on in the brain of someone who is experiencing spatial neglect.”
QBI co-investigator and ARC Australian Laureate Fellow Professor Jason Mattingley said the human brain performed many functions in an uneven way.
“We already know that in a healthy brain even basic perception can be lopsided. For example, when we look at others’ faces we tend to focus more on the left than the right side,” he said.
“Research like this helps us take a key step in understanding some of the puzzling symptoms observed in people following brain damage.”
The researchers at QBI collaborated with UQ’s School of Psychology, and colleagues from Aarhus University in Denmark, and University College London in the UK.
The study involved recording electrical activity in the brains of healthy adult volunteers using electroencephalography (EEG) while they listened to sequences of sounds from the left, right, or centre.
The next step for the researchers will be to study how people with brain damage use the left and right sides of the brain when perceiving visual objects and sounds.
Findings of the study were published in The Journal of Neuroscience.
Levels of a small molecule found only in humans and in other primates are lower in the brains of depressed individuals, according to researchers at McGill University and the Douglas Institute. This discovery may hold a key to improving treatment options for those who suffer from depression.

Depression is a common cause of disability, and while viable medications exist to treat it, finding the right medication for individual patients often amounts to trial and error for the physician. In a new study to be published in the journal Nature Medicine, Dr. Gustavo Turecki, a psychiatrist at the Douglas and professor in the Faculty of Medicine, Department of Psychiatry at McGill, together with his team, discovered that the levels of a tiny molecule, miR-1202, may provide a marker for depression and help detect individuals who are likely to respond to antidepressant treatment.
“Using samples from the Douglas Bell-Canada Brain Bank, we examined brain tissues from individuals who were depressed and compared them with brain tissues from psychiatrically healthy individuals,” says Turecki, who is also Director of the McGill Group for Suicide Studies. “We identified this molecule, a microRNA known as miR-1202, found only in humans and other primates, and discovered that it regulates an important receptor of the neurotransmitter glutamate.”
The team conducted a number of experiments showing that antidepressants change the levels of this microRNA. “In our clinical trials with living depressed individuals treated with citalopram, a commonly prescribed antidepressant, we found lower levels of miR-1202 in depressed individuals compared with non-depressed individuals before treatment,” says Turecki. “Clearly, miR-1202 increased as the treatment worked and individuals no longer felt depressed.”
Antidepressant drugs are the most common treatment for depressive episodes and are among the most prescribed medications in North America. “Although antidepressants are clearly effective, there is variability in how individuals respond to antidepressant treatment,” says Turecki. “We found that miR-1202 is different in individuals with depression and, particularly, among those patients who eventually will respond to antidepressant treatment.”
The discovery may provide “a potential target for the development of new and more effective antidepressant treatments,” he adds.
New research from the University of Rochester Medical Center describes how exposure to air pollution early in life produces harmful changes in the brains of mice, including an enlargement of part of the brain that is seen in humans who have autism and schizophrenia.
As in autism and schizophrenia, the changes occurred predominantly in males. The mice also performed poorly in tests of short-term memory, learning ability, and impulsivity.
The new findings are consistent with several recent studies that have shown a link between air pollution and autism in children. Most notably, a 2013 study in JAMA Psychiatry reported that children who lived in areas with high levels of traffic-related air pollution during their first year of life were three times as likely to develop autism.
“Our findings add to the growing body of evidence that air pollution may play a role in autism, as well as in other neurodevelopmental disorders,” said Deborah Cory-Slechta, Ph.D., professor of Environmental Medicine at the University of Rochester and lead author of the study, published in the journal Environmental Health Perspectives.
In three sets of experiments, Cory-Slechta and her colleagues exposed mice to levels of air pollution typically found in mid-sized U.S. cities during rush hour. The exposures were conducted during the first two weeks after birth, a critical time in the brain’s development. The mice were exposed to polluted air for four hours each day for two four-day periods.
In one group of mice, the brains were examined 24 hours after the final pollution exposure. In all of those mice, inflammation was rampant throughout the brain, and the lateral ventricles — chambers on each side of the brain that contain cerebrospinal fluid — were enlarged to two to three times their normal size.

“When we looked closely at the ventricles, we could see that the white matter that normally surrounds them hadn’t fully developed,” said Cory-Slechta. “It appears that inflammation had damaged those brain cells and prevented that region of the brain from developing, and the ventricles simply expanded to fill the space.”
The problems were also observed in a second group of mice 40 days after exposure and in another group 270 days after exposure, indicating that the damage to the brain was permanent. Brains of mice in all three groups also had elevated levels of glutamate, a neurotransmitter, which is also seen in humans with autism and schizophrenia.
Air pollution is made up mainly of carbon particles produced when fuel is burned by power plants, factories, and cars. For decades, research on the health effects of air pollution has focused on the part of the body where the damage is most obvious — the lungs. That research began to show that different-sized particles produce different effects. Larger particles — the ones regulated by the Environmental Protection Agency (EPA) — are actually the least harmful because they are coughed up and expelled. But many researchers believe that smaller particles, known as ultrafine particles — which are not regulated by the EPA — are more dangerous, because they are small enough to travel deep into the lungs and be absorbed into the bloodstream, where they can produce toxic effects throughout the body.
That assumption led Cory-Slechta to design a set of experiments that would show whether ultrafine particles have a damaging effect on the brain, and if so, to reveal the mechanism by which they inflict harm. Her study published today is the first scientific work to do both.
“I think these findings are going to raise new questions about whether the current regulatory standards for air quality are sufficient to protect our children,” said Cory-Slechta.