In a study of nearly 1,000 mother-child pairs, researchers from the Bloomberg School of Public Health found that prenatal exposure to selective serotonin reuptake inhibitors (SSRIs), a frequently prescribed treatment for depression, anxiety and other disorders, was associated with autism spectrum disorder (ASD) and developmental delays (DD) in boys. The study, published in the online edition of Pediatrics, analyzed data from large samples of ASD and DD cases and population-based controls; ASD and DD diagnoses were confirmed under a uniform protocol by trained clinicians using validated, standardized instruments.
The study included 966 mother-child pairs from the Childhood Autism Risks from Genetics and the Environment (CHARGE) Study, a population-based case-control study based at the University of California at Davis’ MIND Institute. The researchers broke the data into three groups: those diagnosed with autism spectrum disorder (ASD), those with developmental delays (DD) and those with typical development (TD). The children ranged in age from two to five. A majority of the children were boys – 82.5% in the ASD group, 65.6% in the DD group and 85.6% in the TD group. While the study included girls, the substantially stronger effect in boys suggests a possible gender difference in the effect of prenatal SSRI exposure.
“We found prenatal SSRI exposure was nearly 3 times as likely in boys with ASD relative to typical development, with the greatest risk when exposure took place during the first trimester,” said Li-Ching Lee, Ph.D., Sc.M., psychiatric epidemiologist in the Bloomberg School’s Department of Epidemiology. “SSRI was also elevated among boys with DD, with the strongest exposure effect in the third trimester.”
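For readers unfamiliar with case-control statistics, a finding like "nearly 3 times as likely" is typically reported as an odds ratio. The sketch below shows how such a ratio is computed from a 2x2 exposure table; the counts are hypothetical and are not the CHARGE study's data.

```python
# Hypothetical 2x2 case-control table (NOT the study's actual counts),
# used only to illustrate how an odds ratio near 3 arises.
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds of exposure among cases divided by odds of exposure among controls."""
    return (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

# e.g. 15 of 100 ASD boys prenatally exposed vs. 5 of 95 TD boys exposed
or_asd = odds_ratio(15, 85, 5, 90)
print(round(or_asd, 2))  # 3.18 -- "nearly 3 times as likely"
```

In practice such estimates are reported with confidence intervals and adjusted for confounders, which a raw 2x2 table ignores.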
The data analysis was completed by Rebecca Harrington, Ph.D., M.P.H, in conjunction with her doctoral dissertation at the Bloomberg School. Dr. Lee was one of her advisors.
Serotonin is critical to early brain development; exposure during pregnancy to anything that influences serotonin levels can potentially affect birth and developmental outcomes. The prevalence of ASD continues to rise. According to the Centers for Disease Control and Prevention, an estimated 1 in 68 children in the U.S. is identified with ASD, and it is almost five times more common among boys than girls. One may question whether the increased use of SSRIs in recent years has contributed to the dramatic rise in ASD prevalence.
"This study provides further evidence that in some children, prenatal exposure to SSRIs may influence their risk for developing an autism spectrum disorder,” said Irva Hertz-Picciotto, Ph.D., M.P.H., chief of the Division of Environmental and Occupational Health in the UC Davis Department of Public Health Sciences and a researcher at the UC Davis MIND Institute. “This research also highlights the challenge for women and their physicians to balance the risks versus the benefits of taking these medications, given that a mother’s underlying mental-health conditions also may pose a risk, both to herself and her child.”
Regarding treatment, the authors note that maternal depression itself carries risks for the fetus, and that the benefits of using SSRIs during pregnancy should be weighed carefully against the potential harm. The researchers also note that large-sample studies are needed to investigate the effects in girls with ASD. Acknowledged limitations of the study include the difficulty of isolating SSRI effects from those of the conditions the drugs treat, a lack of information on SSRI dosage that precluded dose-response analyses, and a relatively small sample of DD children that yielded imprecise estimates of association, which should be viewed with caution.
Differences in brain connectivity may help explain the social impairments common in those who have autism spectrum disorders, new research suggests.

The small study compared the brains of 25 teens with an autism spectrum disorder to those of 25 typically developing teens, all aged 11 to 18. The researchers found key differences between the two groups in brain “networks” that help people to figure out what others are thinking, and to understand others’ actions and emotions.
"It is generally agreed that the way the networks are organized is not typical [in those with autism]," explained study lead researcher Inna Fishman, assistant research professor of psychology at San Diego State University.
The prevailing idea until now, she said, has been that these neurological networks are under-connected in people with autism. However, “we found they were over-connected — they talk to each other way more than expected at that age.”
The study is published in the April 16 online edition of JAMA Psychiatry.
University of Missouri researchers have previously shown that a genetic predisposition to be more or less motivated to exercise exists. In a new study, Frank Booth, a professor in the MU College of Veterinary Medicine, has found a potential link between the genetic predisposition for high levels of exercise motivation and the speed at which mental maturation occurs.

For his study, Booth selectively bred rats that exhibited traits of either extreme activity or extreme laziness. He put the rats in cages with running wheels and measured how much each rat willingly ran on its wheel during a six-day period. He then bred the top 26 runners with each other and bred the 26 rats that ran the least with each other. The researchers repeated this process through 10 generations and found that the line of running rats chose to run 10 times more than the line of “lazy” rats.
Booth studied the brains of the rats and found much higher levels of neural maturation in the brains of the active rats than in the brains of the lazy rats.
“We looked at the part of the brain known as the ‘grand central station,’ or the hub where the brain is constantly sending and receiving signals,” Booth said. “We found a big difference between the amount of molecules present in the brains of active rats compared to the brains of lazy rats. This suggests that the active rats were experiencing faster development of neural pathways than the lazy rats.”
Booth says these findings may suggest a link between the genes responsible for exercise motivation and the genes responsible for mental development. He also says this research hints that exercising at a young age could help develop more neural pathways for motivation to be physically active.
“This study illustrates a potentially important link between exercise and the development of these neural pathways,” Booth said. “Ultimately, this could show the benefits of exercise for mental development in humans, especially young children with constantly growing brains.”
Scientists at the Salk Institute have created a new model of memory that explains how neurons retain select memories a few hours after an event.

This new framework provides a more complete picture of how memory works, which can inform research into disorders like Parkinson’s, Alzheimer’s, post-traumatic stress and learning disabilities.
"Previous models of memory were based on fast activity patterns," says Terrence Sejnowski, holder of Salk’s Francis Crick Chair and a Howard Hughes Medical Institute Investigator. "Our new model of memory makes it possible to integrate experiences over hours rather than moments."
Over the past few decades, neuroscientists have revealed much about how long-term memories are stored. For significant events—for example, being bitten by a dog—a number of proteins are quickly made in activated brain cells to create the new memories. Some of these proteins linger for a few hours at specific places on specific neurons before breaking down.
This series of biochemical events allows us to remember important details about that event—such as, in the case of the dog bite, which dog, where it happened and so on.
One problem scientists have had with modeling memory storage is explaining why only selective details and not everything in that 1-2 hour window is strongly remembered. By incorporating data from previous literature, Sejnowski and first author Cian O’Donnell, a Salk postdoctoral researcher, developed a model that bridges findings from both molecular and systems observations of memory to explain how this 1-2 hour memory window works. The work is detailed in the latest issue of Neuron.
Using computational modeling, O’Donnell and Sejnowski show that, despite the proteins being available to a number of neurons in a given circuit, memories are retained when subsequent events activate the same neurons as the original event. The scientists found that the spatial positioning of proteins at both specific neurons and at specific areas around these neurons predicts which memories are recorded. This spatial patterning framework successfully predicts memory retention as a mathematical function of time and location overlap.
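The idea of predicting retention "as a mathematical function of time and location overlap" can be made concrete with a toy calculation. The sketch below is illustrative only, under two simple assumptions (exponential decay of the memory proteins and a single overlap fraction between the neurons activated by the two events); it is not the published model.

```python
import math

# Toy sketch of the spatial-patterning idea: retention of a follow-up event
# depends on (1) how much of the protein from the original event remains
# (exponential decay over the 1-2 hour window) and (2) how strongly the new
# event reactivates the same neurons and locations (an overlap fraction 0-1).
def retention(hours_elapsed, location_overlap, protein_halflife_h=1.0):
    """Return a 0-1 retention score for a follow-up event."""
    decay = math.exp(-math.log(2) * hours_elapsed / protein_halflife_h)
    return decay * location_overlap

# Same neurons reactivated soon after the event -> strong retention
early_same = retention(0.5, 0.9)
# Mostly different neurons, several hours later -> weak retention
late_other = retention(3.0, 0.2)
print(early_same > late_other)  # True
```

The point of the sketch is only that both factors multiply: a perfect spatial overlap cannot rescue an event that arrives after the proteins are gone, and vice versa.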
"One thing this study does is link what’s happening in memory formation at the cellular level to the systems level," says O’Donnell. "That the time window is important was already established; we worked out how the content could also determine whether memories were remembered or not. We prove that a set of ideas are consistent and sufficient to explain something in the real world."
The new model also provides a potential framework for understanding how generalizations from memories are processed during dreams.
While much is still unknown about sleep, research suggests that important memories from the day are often cycled through the brain, shuttled from temporary storage in the hippocampus to more long-term storage in the cortex. Researchers have observed most of this memory formation in non-dreaming sleep; little is known about whether and how memory packaging or consolidation occurs during dreams. However, O’Donnell and Sejnowski’s model suggests that some memory retention does happen during dreams.
"During sleep there’s a reorganizing of memory—you strengthen some memories and lose ones you don’t need anymore," says O’Donnell. "In addition, people learn abstractions as they sleep, but there was no idea how generalization processes happen at a neural level."
By applying their theoretical findings on overlap activity within the 1-2 hour window, they came up with a theoretical model for how the memory abstraction process might work during sleep.
A class of drugs developed to treat immune-related conditions and cancer – including one currently in clinical trials for glioblastoma and other tumors – eliminates neural inflammation associated with dementia-linked diseases and brain injuries, according to UC Irvine researchers.

In their study, assistant professor of neurobiology & behavior Kim Green and colleagues discovered that the drugs, which can be delivered orally, eradicated microglia, the primary immune cells of the brain. These cells exacerbate many neural diseases, including Alzheimer’s and Parkinson’s, as well as brain injury.
“Because microglia are implicated in most brain disorders, we feel we’ve found a novel and broadly applicable therapeutic approach,” Green said. “This study presents a new way to not just modulate inflammation in the brain but eliminate it completely, making this a breakthrough option for a range of neuroinflammatory diseases.”
The researchers focused on the impact of a class of drugs called CSF1R inhibitors on microglial function. In mouse models, they learned that inhibition led to the removal of virtually all microglia from the adult central nervous system with no ill effects or deficits in behavior or cognition. Because these cells contribute to most brain diseases – and can harm or kill neurons – the ability to eradicate them is a powerful advance in the treatment of neuroinflammation-linked disorders.
Green said his group tested several selective CSF1R inhibitors that are under investigation as cancer treatments and immune system modulators. Of these compounds, they found the most effective to be a drug called PLX3397, created by Plexxikon Inc., a Berkeley, Calif.-based biotechnology company and member of the Daiichi Sankyo Group. PLX3397 is currently being evaluated in phase one and two clinical trials for multiple cancers, including glioblastoma, melanoma, breast cancer and leukemia.
Crucially, microglial elimination lasted only as long as treatment continued. Withdrawal of inhibitors produced a rapid repopulation of cells that then grew into new microglia, said Green, who’s a member of UC Irvine’s Institute for Memory Impairments and Neurological Disorders.
This means that eradication of these immune cells is fully reversible, allowing researchers to bring microglia back when desired. Green added that this work is the first to describe a new progenitor/potential stem cell in the central nervous system outside of neurogenesis, a discovery that points to novel opportunities for manipulating microglial populations during disease.
Carrying a copy of a gene variant called ApoE4 confers a substantially greater risk for Alzheimer’s disease on women than it does on men, according to a new study by researchers at the Stanford University School of Medicine.

The scientists arrived at their findings by analyzing data on large numbers of older individuals who were tracked over time and noting whether they had progressed from good health to mild cognitive impairment — from which most move on to develop Alzheimer’s disease within a few years — or to Alzheimer’s disease itself.
The discovery holds implications for genetic counselors, clinicians and individual patients, as well as for clinical-trial designers. It could also help shed light on the underlying causes of Alzheimer’s disease, a progressive neurological syndrome that robs its victims of their memory and ability to reason. Its incidence increases exponentially after age 65. An estimated one in every eight people past that age in the United States has Alzheimer’s. Experts project that by mid-century, the number of Americans with Alzheimer’s will more than double from the current estimate of 5-6 million.
According to the Alzheimer’s Association, it is already the nation’s most expensive disease, costing more than $200 billion annually. (The epidemiology of mild cognitive impairment is fuzzier, but this gateway syndrome is clearly more widespread than Alzheimer’s.)
Research at the University of Adelaide has shed new light onto the possible causes of sudden infant death syndrome (SIDS), which could help to prevent future loss of children’s lives.
In a world-first study, researchers in the University’s School of Medical Sciences have found that telltale signs in the brains of babies who have died of SIDS are remarkably similar to those of children who died of accidental asphyxiation.
"This is a very important result. It helps to show that asphyxia rather than infection or trauma is more likely to be involved in SIDS deaths," says the leader of the project, Professor Roger Byard AO, Marks Professor of Pathology at the University of Adelaide and Senior Specialist Forensic Pathologist with Forensic Science SA.
The study compared 176 children who died from head trauma, infection, drowning, asphyxia and SIDS.
Researchers were looking at the presence and distribution of a protein called β-amyloid precursor protein (APP) in the brain. This “APP staining”, as it’s known, could be an important tool for showing how children have died. This is the first time a detailed study of APP has been undertaken in SIDS cases.
"All 48 of the SIDS deaths we looked at showed APP staining in the brain," Professor Byard says.
"The staining by itself does not necessarily tell us the cause of death, but it can help to clarify the mechanism.
"The really interesting point is that the pattern of APP staining in SIDS cases - both the amount and distribution of the staining - was very similar to those in children who had died from asphyxia."
Professor Byard says that in one case, the presence of APP staining in a baby who had died of SIDS led to the identification of a significant sleep breathing problem, or apnoea, in the deceased baby’s sibling.
"This raised the possibility of an inherited sleep apnoea problem, and this knowledge could be enough to help save a child’s life," Professor Byard says.
"Because of the remarkable similarity in SIDS and asphyxia cases, the question is now: is there an asphyxia-based mechanism of death in SIDS? We don’t know the answer to that yet, but it looks very promising."
This study was conducted at the University of Adelaide by visiting postdoctoral researcher Dr Lisbeth Jensen from Aarhus University Hospital, Denmark, and was funded by SIDS and Kids South Australia. The results have been published in the journal Neuropathology and Applied Neurobiology.
"This work also fits in very well with collaborative research that is currently being undertaken between the University of Adelaide and Harvard University, on chemical changes in parts of the brain that control breathing," Professor Byard says.
Now that a long-standing scientific mystery has been solved, the common saying “you just hit a nerve” might need to be updated to “you just hit a Merkel cell,” jokes Jianguo Gu, PhD, a pain researcher at the University of Cincinnati (UC).
That’s because Gu and his research colleagues have proved that Merkel cells—which contact many sensory nerve endings in the skin—are the initial sites for sensing touch.

"Scientists have spent over a century trying to understand the function of this specialized skin cell and now we are the first to know … we’ve proved the Merkel cell to be a primary point of tactile detection," Gu, principal investigator and a professor in UC’s department of anesthesiology, says of their research study published in the April 15 edition of Cell, a leading scientific journal.
Of the five senses, touch, Gu says, has been the least understood by science—especially in relation to the Merkel cell, discovered by Friedrich Sigmund Merkel in 1875.
"It’s been a great debate because for over two centuries nobody really knew what function this cell had," Gu says, adding that while some scientists—including him—suspected that the Merkel cell was related to touch because of the high abundance of these cells in the ridges of fingertips, the lips and other touch-sensitive spots throughout the body, others dismissed the cell as not related to sensing touch at all.
To prove their hypothesis that Merkel cells were indeed the very foundation of touch, Gu’s team—which included UC postgraduate fellow Ryo Ikeda, PhD—studied Merkel cells in rat whisker hair follicles, because the hair follicles are functionally similar to human fingertips and have a high abundance of Merkel cells. What they found was that the cells immediately fired up in response to gentle touch of the whiskers.
"There was a marked response in Merkel cells; the recording trace ‘spiked’. With non-Merkel cells you don’t get anything," says Ikeda.
What they also found, and of equal importance, both say, was that gentle touch causes Merkel cells to fire “action potentials,” and that this mechano-electrical transduction occurs through a receptor/ion channel called Piezo2.
"The implications here are profound," Gu says, pointing to the clinical applications of treating and preventing disease states that affect touch such as diabetes and fibromyalgia and pathological conditions such as peripheral neuropathy. Abnormal touch sensation, he says, can also be a side effect of many medical treatments such as with chemotherapy.
The discovery also has relevance to those who are blind and rely on touch to navigate a sighted world.
"This is a paradigm shift in the entire field," Gu says, pointing to touch as also indispensable for environmental exploration, tactile discrimination and other tasks in life such as modern social interaction.
"Think of the cellphone. You can hardly fit into social life without good touch sensation."
Young adults who used marijuana only recreationally showed significant abnormalities in two key brain regions that are important in emotion and motivation, scientists report. The study was a collaboration between Northwestern Medicine® and Massachusetts General Hospital/Harvard Medical School.

This is the first study to show casual use of marijuana is related to major brain changes. It showed the degree of brain abnormalities in these regions is directly related to the number of joints a person smoked per week. The more joints a person smoked, the more abnormal the shape, volume and density of the brain regions.
"This study raises a strong challenge to the idea that casual marijuana use isn’t associated with bad consequences," said corresponding and co-senior study author Hans Breiter, M.D. He is a professor of psychiatry and behavioral sciences at Northwestern University Feinberg School of Medicine and a psychiatrist at Northwestern Memorial Hospital.
"Some of these people only used marijuana to get high once or twice a week," Breiter said. "People think a little recreational use shouldn’t cause a problem, if someone is doing OK with work or school. Our data directly says this is not the case."
The study will be published April 16 in the Journal of Neuroscience.
Scientists examined the nucleus accumbens and the amygdala — key regions for emotion and motivation, and associated with addiction — in the brains of casual marijuana users and non-users. Researchers analyzed three measures: volume, shape and density of grey matter (i.e., where most cells are located in brain tissue) to obtain a comprehensive view of how each region was affected.
Both these regions in recreational pot users were abnormally altered for at least two of these structural measures. The degree of those alterations was directly related to how much marijuana the subjects used.
Of particular note, the nucleus accumbens was abnormally large, and its alteration in size, shape and density was directly related to how many joints an individual smoked.
"One unique strength of this study is that we looked at the nucleus accumbens in three different ways to get a detailed and consistent picture of the problem," said lead author Jodi Gilman, a researcher in the Massachusetts General Center for Addiction Medicine and an instructor in psychology at Harvard Medical School. "It allows a more nuanced picture of the results."
Examining the three different measures also was important because no single measure is the gold standard. Some abnormalities may be more detectable using one type of neuroimaging analysis method than another. Breiter said the three measures provide a multidimensional view when integrated together for evaluating the effects of marijuana on the brain.
"These are core, fundamental structures of the brain," said co-senior study author Anne Blood, director of the Mood and Motor Control Laboratory at Massachusetts General and assistant professor of psychiatry at Harvard Medical School. "They form the basis for how you assess positive and negative features about things in the environment and make decisions about them."
Through different methods of neuroimaging, scientists examined the brains of young adults, ages 18 to 25, from Boston-area colleges; 20 who smoked marijuana and 20 who didn’t. Each group had nine males and 11 females. The users underwent a psychiatric interview to confirm they were not dependent on marijuana. They did not meet criteria for abuse of any other illegal drugs during their lifetime.
The changes in brain structures indicate the marijuana users’ brains are adapting to low-level exposure to marijuana, the scientists said.
The study results fit with animal studies showing that when rats are given tetrahydrocannabinol (THC), their brains rewire and form many new connections. THC is the mind-altering ingredient found in marijuana.
"It may be that we’re seeing a type of drug learning in the brain," Gilman said. "We think when people are in the process of becoming addicted, their brains form these new connections."
In animals, these new connections indicate the brain is adapting to the unnatural level of reward and stimulation from marijuana. These connections make other natural rewards less satisfying.
"Drugs of abuse can cause more dopamine release than natural rewards like food, sex and social interaction," Gilman said. "In those you also get a burst of dopamine but not as much as in many drugs of abuse. That is why drugs take on so much salience, and everything else loses its importance."
The brain changes suggest that structural changes to the brain are an important early result of casual drug use, Breiter said. “Further work, including longitudinal studies, is needed to determine if these findings can be linked to animal studies showing marijuana can be a gateway drug for stronger substances,” he noted.
Because the study was retrospective, researchers did not know the THC content of the marijuana, which can range from 5 to 9 percent or even higher in the currently available drug. The THC content is much higher today than the marijuana during the 1960s and 1970s, which was often about 1 to 3 percent, Gilman said.
Marijuana is the most commonly used illicit drug in the U.S. with an estimated 15.2 million users, the study reports, based on the National Survey on Drug Use and Health in 2008. The drug’s use is increasing among adolescents and young adults, partially due to society’s changing beliefs about cannabis use and its legal status.
A recent Northwestern study showed chronic use of marijuana was linked to brain abnormalities. “With the findings of these two papers,” Breiter said, “I’ve developed a severe worry about whether we should be allowing anybody under age 30 to use pot unless they have a terminal illness and need it for pain.”
Researchers using information provided by a magnetic resonance imaging (MRI) technique have identified regional white matter damage in the brains of people who experience chronic dizziness and other symptoms after concussion.
The findings suggest that information provided by MRI can speed the onset of effective treatments for concussion patients. The results of this research are published online in the journal Radiology.

Concussions, also known as mild traumatic brain injury (mTBI), affect between 1.8 and 3.8 million individuals in the United States annually.
Past research has long indicated that depression is a big risk factor for memory loss in aging adults. But it is still unclear exactly how the two issues are related and whether there is potential to slow memory loss by fighting depression.

A preliminary study conducted by researchers from the University of Rochester School of Medicine and Dentistry and the School of Nursing, and published in volume 42 of Psychoneuroendocrinology in April, delves more deeply into the relationship between depression and memory loss, and how this connection may depend on levels of insulin-like growth factor, or IGF-1.
Prior research has shown that IGF-1, a hormone that helps bolster growth, is important for preserving memory, especially among older adults.
The collaborative study found that among people with low levels of IGF-1, higher depressive symptoms were associated with lower cognitive ability. Conversely, among participants with high levels of IGF-1, there was no link between depressive symptoms and memory.
Senior author Kathi L. Heffner, Ph.D., assistant professor in the School of Medicine and Dentistry’s Department of Psychiatry, had originally examined possible associations between IGF-1 and memory in a sample of 94 healthy older adults, but couldn’t find strong or consistent evidence.
Heffner then approached the study’s lead author Feng (Vankee) Lin, Ph.D, R.N., assistant professor at the School of Nursing, for input because of her expertise in cognitive aging. Lin is a young nurse researcher whose collaborative work focuses on developing multi-modal interventions to slow the progression of cognitive decline in at-risk adults, and reduce their risk of developing dementia and Alzheimer’s disease.
“Vankee spearheaded the idea to examine the role of depressive symptoms in these data, which resulted in the interesting link,” Heffner said.
The association discovered between memory loss, depression and IGF-1 means that IGF-1 could be a very promising factor in protecting memory, Lin said.
“IGF-1 is currently a hot topic in terms of how it can promote neuroplasticity and slow cognitive decline,” Lin said. “Depression, memory and the IGF-1 receptor are all located in a brain region which regulates a lot of complicated cognitive ability. As circulating IGF-1 can pass through the blood-brain barrier, it may work to influence the brain in a protective way.”
Lin said more studies are needed of people with depression symptoms and of those with Alzheimer’s disease, but this study opens an important door for further research on the significance of IGF-1 levels in both memory loss and depression.
“It really makes a lot of sense to further develop this study,” Lin said. “If this could be a way to simultaneously tackle depression while preventing cognitive decline it could be a simple intervention to implement.”
Heffner said that clinical trials are underway to determine whether IGF-1 could be an effective therapeutic agent to slow or prevent cognitive decline in people at risk.
“Cognitive decline can also increase risk for depressive symptoms, so if IGF-1 protects people from cognitive decline, this may translate to reduced risk for depression as well,” Heffner said.
Working with human neurons and fruit flies, researchers at Johns Hopkins have identified and then shut down a biological process that appears to trigger a particular form of Parkinson’s disease present in a large number of patients. A report on the study, in the April 10 issue of the journal Cell, could lead to new treatments for this disorder.

“Drugs such as L-dopa can, for a time, manage symptoms of Parkinson’s disease, but as the disease worsens, tremors give way to immobility and, in some cases, to dementia. Even with good treatment, the disease marches on,” says Ted Dawson, M.D., Ph.D., professor of neurology and director of the Johns Hopkins Institute for Cell Engineering. Dawson says the new research builds on a growing body of knowledge about the origins of Parkinson’s disease, whose symptoms appear when dopamine-producing nerve cells in the brain degenerate. Further evidence for a role of genetics in Parkinson’s disease appeared a decade ago when researchers identified key mutations in an enzyme known as leucine-rich repeat kinase 2, or LRRK2 — pronounced “lark2.” When that enzyme was cloned, Dawson, together with his wife and longtime collaborator Valina Dawson, Ph.D., professor of neurology and member of the Institute for Cell Engineering, discovered that LRRK2 was a kinase, a type of enzyme that transfers phosphate groups to proteins and turns proteins on or off to change their activity.
Over the years, it was found that blocking kinase activity in mutated LRRK2 halted degeneration, while enhancing it made things worse. But nobody knew what proteins LRRK2 was acting on.
"For nearly a decade, scientists have been trying to figure out how mutations in LRRK2 cause Parkinson’s disease," said Margaret Sutherland, Ph.D., a program director at the National Institute of Neurological Disorders and Stroke. "This study represents a clear link between LRRK2 and a pathogenic mechanism linked to Parkinson’s disease."
Dawson went fishing for the right proteins using LRRK2 as bait. When his team began to identify those proteins, Dawson says they were surprised to discover that many were linked to the cellular machinery, like ribosomes, that make proteins. Nobody, says Dawson, suspected that LRRK2 might be involved at such a basic level as protein manufacture.
Unsure if they were right, the team then tested the proteins they identified to see which of them, if any, LRRK2 could add phosphate groups to. They came up with three ribosomal protein candidates — s11, s15 and s27. They then altered each ribosomal protein to see what would happen. It turned out that mutating s15 in a manner that blocked LRRK2 phosphorylation protected nerve cells taken from rats, humans and fruit flies from death. In other words, s15 appeared to be the much sought-after target of LRRK2, Dawson says.
"When you go fishing, you want to catch fish. We just happened to catch a big one,” Dawson says.
With the protein now identified, Dawson’s team is tackling further experiments to find out how excess protein production causes dopamine neurons to degenerate. And they want to see what happens when they block LRRK2 from phosphorylating the s15 protein in mice, to build on their findings from fruit flies and nerve cells grown in a dish.
“There’s a big chasm between animal disease models and human treatments,” says Ian Martin, Ph.D., a neuroscientist in Dawson’s lab and the lead author on the paper. “But it’s exciting. I think it definitely could turn into something real, hopefully in my lifetime.”
How nerve cells flexibly adapt to acoustic signals: Depending on the input signal, neurons generate action potentials either near or far away from the cell body. This flexibility improves our ability to localize sound sources.

(Image caption: A neuron in the brain stem that processes acoustic information. Depending on the situation, the cell generates action potentials in the axon (the thin process) either close to or far from the cell body. Photo: Felix Felmy)
In order to process acoustic information with high temporal fidelity, nerve cells can flexibly adapt their mode of operation to the situation. At low input frequencies, they generate most outgoing action potentials close to the cell body. Following inhibitory or high frequency excitatory signals, the cells produce many action potentials at more distant sites. In this way, they remain highly sensitive to the different types of input signals. These findings were obtained by a research team headed by Professor Christian Leibold, Professor Benedikt Grothe, and Dr. Felix Felmy from LMU Munich, the Bernstein Center, and the Bernstein Focus Neurotechnology in Munich, who used computer models in their study. The researchers report their results in the latest issue of The Journal of Neuroscience.
Did the bang come from ahead or from the right? In order to localize sound sources, nerve cells in the brain stem evaluate the different arrival times of acoustic signals at the two ears. To detect temporal discrepancies as small as 10 millionths of a second, the neurons must respond extremely quickly. In this process, they change the electrical voltage that prevails on their cell membrane. If a certain threshold is exceeded, the neurons generate a strong electrical signal — a so-called action potential — which can be transmitted efficiently over long axon distances without weakening. In order to reach the threshold, the input signals are summed. The more slowly a nerve cell’s membrane potential changes, the more easily successive inputs add up to reach that threshold.
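The scale of these arrival-time differences follows from simple geometry. The sketch below is a plane-wave approximation with assumed round-number values (roughly 0.21 m between the ears and 343 m/s for the speed of sound — illustrative figures, not values taken from the study): the extra path to the far ear for a source at azimuth θ is about d·sin(θ), so the interaural time difference is d·sin(θ)/c.

```python
import math

def itd_seconds(angle_deg, ear_distance_m=0.21, speed_of_sound_m_s=343.0):
    """Interaural time difference for a distant source at the given azimuth.

    Plane-wave model: the extra path to the far ear is d * sin(theta),
    so the arrival-time difference is d * sin(theta) / c.
    Parameter values are illustrative assumptions.
    """
    theta = math.radians(angle_deg)
    return ear_distance_m * math.sin(theta) / speed_of_sound_m_s

# Maximum ITD, with the source directly to one side (90 degrees):
print(f"max ITD: {itd_seconds(90) * 1e6:.0f} microseconds")

# Near the midline, one degree of azimuth shifts the ITD by about
# 10 microseconds -- the temporal resolution quoted in the article:
print(f"ITD at 1 degree: {itd_seconds(1) * 1e6:.1f} microseconds")
```

Under these assumptions the largest possible difference is only about 600 microseconds, which is why a 10-microsecond resolution matters: it corresponds to roughly one degree of angular precision near the midline.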
Input signals are optimally processed
These requirements — rapid voltage changes for a high temporal resolution of the input signals, and slow voltage changes for the optimal signal integration needed to generate an action potential — represent a paradoxical challenge for the nerve cell. “This problem is solved by nature by spatially separating the two processes. While input signals are processed in the cell body and the dendrites, action potentials are generated in the axon, a cell process,” says Leibold, leader of the study. But how strict is this spatial separation?
In their study, the researchers measured the axons’ geometry and the thresholds of the corresponding cells, and then constructed a computer model that allowed them to investigate the effectiveness of this spatial separation. The model predicts that, depending on the situation, neurons produce action potentials closer to or farther from the cell body. For high frequency or inhibitory input signals, the cells shift the site of initiation from the axon’s starting point to more distant regions. In this way, the nerve cells ensure that the various kinds of input signals are optimally processed — allowing us to perceive both small and large differences in acoustic arrival time, and thereby localize sounds in space.