Although the technology has existed for just a few years, scientists increasingly use “disease in a dish” models to study genetic, molecular and cellular defects. But a team of doctors and scientists led by researchers at the Cedars-Sinai Regenerative Medicine Institute went further in a study of Lou Gehrig’s disease, a fatal disorder that attacks muscle-controlling nerve cells in the brain and spinal cord.
After using an innovative stem cell technique to create neurons in a lab dish from skin scrapings of patients who have the disorder, the researchers inserted molecules made of small stretches of genetic material, blocking the damaging effects of a defective gene and, in the process, providing “proof of concept” for a new therapeutic strategy – an important step in moving research findings into clinical trials.
The study, published Oct. 23 in Science Translational Medicine, is believed to be one of the first in which a specific form of Lou Gehrig’s disease, or amyotrophic lateral sclerosis, was replicated in a dish, analyzed and “treated” – suggesting a potential future therapy – all in a single study.
"In a sense, this represents the full spectrum of what we are trying to accomplish with patient-based stem cell modeling. It gives researchers the opportunity to conduct extensive studies of a disease’s genetic and molecular makeup and develop potential treatments in the laboratory before translating them into patient trials," said Robert H. Baloh, MD, PhD, director of Cedars-Sinai’s Neuromuscular Division in the Department of Neurology and director of the multidisciplinary ALS Program. He is the lead researcher and the article’s senior author.
Laboratory models of diseases have been made possible by a recently invented process using induced pluripotent stem cells – cells derived from a patient’s own skin samples and “sent back in time” through genetic manipulation to an embryonic state. From there, they can be made into any cell of the human body.
The cells used in the study were produced by the Induced Pluripotent Stem Cell Core Facility of Cedars-Sinai’s Regenerative Medicine Institute. Dhruv Sareen, PhD, director of the iPSC facility and a faculty research scientist with the Department of Biomedical Sciences, is the article’s first author and one of several institute researchers who participated in the study.
"In these studies, we turned skin cells of patients who have ALS into motor neurons that retained the genetic defects of the disease," Baloh said. "We focused on a gene, C9ORF72, that two years ago was found to be the most common cause of familial ALS and frontotemporal lobar degeneration, and even causes some cases of Alzheimer’s and Parkinson’s disease. What we needed to know, however, was how the defect triggered the disease so we could find a way to treat it."
Frontotemporal lobar degeneration is a brain disorder that typically leads to dementia and sometimes occurs in tandem with ALS.
The researchers found that the genetic defect of C9ORF72 may cause disease because it changes the structure of ribonucleic acid (RNA) coming from the gene, creating an abnormal buildup of a repeated set of nucleotides, the basic components of RNA.
"We think this buildup of thousands of copies of the repeated sequence GGGGCC in the nucleus of patients’ cells may become ‘toxic’ by altering the normal behavior of other genes in motor neurons," Baloh said. "Because our studies supported the toxic RNA mechanism theory, we used two small segments of genetic material called antisense oligonucleotides – ASOs – to block the buildup and degrade the toxic RNA. One ASO knocked down overall C9ORF72 levels. The other knocked down the toxic RNA coming from the gene without suppressing overall gene expression levels. The absence of such potentially toxic RNA, and no evidence of detrimental effect on the motor neurons, provides a strong basis for using this strategy to treat patients suffering from these diseases."
Researchers from another institution recently led a phase one trial of a similar ASO strategy to treat ALS caused by a different genetic mutation and reportedly uncovered no safety issues.
An international research consortium led by investigators at Massachusetts General Hospital (MGH) and the University of Chicago has answered several questions about the genetic background of obsessive-compulsive disorder (OCD) and Tourette syndrome (TS), providing the first direct confirmation that both are highly heritable and also revealing major differences between the underlying genetic makeup of the disorders. Their report is being published in the October issue of the open-access journal PLOS Genetics.
"Both TS and OCD appear to have a genetic architecture of many different genes – perhaps hundreds in each person – acting in concert to cause disease,” says Jeremiah Scharf, MD, PhD, of the Psychiatric and Neurodevelopmental Genetics Unit in the MGH Departments of Psychiatry and Neurology, senior corresponding author of the report. “By directly comparing and contrasting both disorders, we found that OCD heritability appears to be concentrated in particular chromosomes – particularly chromosome 15 – while TS heritability is spread across many different chromosomes.”
An anxiety disorder characterized by obsessions and compulsions that disrupt the lives of patients, OCD is the fourth most common psychiatric illness. TS is a chronic disorder characterized by motor and vocal tics that usually begins in childhood and is often accompanied by conditions like OCD or attention-deficit hyperactivity disorder. Both conditions have been considered to be heritable, since they are known to often recur in close relatives of affected individuals, but identifying specific genes that confer risk has been challenging.
Two reports published last year in the journal Molecular Psychiatry (1, 2), with leadership from Scharf and several co-authors of the current study, described genome-wide association studies (GWAS) of thousands of affected individuals and controls. While those studies identified several gene variants that appeared to increase the risk of each disorder, none of the associations were strong enough to meet the strict standards of genome-wide significance. Since the GWAS approach is designed to identify relatively common gene variants, and it has been proposed that OCD and TS might be influenced by a number of rare variants, the research team adopted a different method. Called genome-wide complex trait analysis (GCTA), the approach examines genetic variation across the entire genome simultaneously – rather than testing sites one at a time, as GWAS does – and estimates the proportion of disease heritability attributable to rare and common variants.
"Trying to find a single causative gene for diseases with a complex genetic background is like looking for the proverbial needle in a haystack,” says Lea Davis, PhD, of the section of Genetic Medicine at the University of Chicago, co-corresponding author of the PLOS Genetics report. “With this approach, we aren’t looking for individual genes. By examining the properties of all genes that could contribute to TS or OCD at once, we’re actually testing the whole haystack and asking where we’re more likely to find the needles.”
Using GCTA, the researchers analyzed the same genetic datasets screened in the Molecular Psychiatry reports – almost 1,500 individuals affected with OCD compared with more than 5,500 controls, and nearly 1,500 TS patients compared with more than 5,200 controls. To minimize variations that might result from slight differences in experimental techniques, all genotyping was done by collaborators at the Broad Institute of Harvard and MIT, who generated the data at the same time using the same equipment. Davis was able to analyze the resulting data on a chromosome-by-chromosome basis, along with the frequency of the identified variants and the function of variants associated with each condition.
The analysis showed that the degree of heritability for both disorders captured by GWAS variants is actually quite close to what previously was predicted based on studies of families impacted by the disorders. “This is a crucial point for genetic researchers, as there has been a lot of controversy in human genetics about what is called ‘missing heritability’,” explains Scharf. “For many diseases, definitive genome-wide significant variants account for only a minute fraction of overall heritability, raising questions about the validity of the approach. Our findings demonstrate that the vast majority of genetic susceptibility to TS and OCD can be discovered using GWAS methods. In fact, the degree of heritability captured by GWAS variants is higher for TS and OCD than for any other complex trait studied to date.”
Nancy Cox, PhD, section chief of Genetic Medicine at the University of Chicago and co-senior author of the PLOS Genetics report, adds, “Despite the fact that we confirm there is shared genetic liability between these two disorders, we also show there are notable differences in the types of genetic variants that contribute to risk. TS appears to derive about 20 percent of genetic susceptibility from rare variants, while OCD appears to derive all of its susceptibility from variants that are quite common, which is something that has not been seen before.”
In terms of the potential impact of the risk-associated variants, about half the risk for both disorders appears to be accounted for by variants already known to influence the expression of genes in the brain. Further investigation of those findings could lead to identification of the affected genes and how the expression changes contribute to the development of TS and OCD. Additional studies in even larger patient populations, some of which are in the planning stages, could identify the biologic pathways disrupted in the disorder, potentially leading to new therapeutic approaches.
Three projects have been awarded funding by the National Institutes of Health to develop innovative robots that work cooperatively with people and adapt to changing environments to improve human capabilities and enhance medical procedures. Funding for these projects totals approximately $2.4 million over the next five years, subject to the availability of funds.
The awards mark the second year of NIH’s participation in the National Robotics Initiative (NRI), a commitment among multiple federal agencies to support the development of a new generation of robots that work cooperatively with people, known as co-robots.
“These projects have the potential to transform common medical aids into sophisticated robotic devices that enhance mobility for individuals with visual and physical impairments in ways only dreamed of before,” said NIH Director Francis S. Collins, M.D., Ph.D. “In addition, as we continue to rely on robots to carry out complex medical procedures, it will become increasingly important for these robots to be able to sense and react to changing and unpredictable environments within the body. By supporting projects that develop these capabilities, we hope to increase the accuracy and safety of current and future medical robots.”
NIH is participating in the NRI with the National Science Foundation, the National Aeronautics and Space Administration, and the U.S. Department of Agriculture. NIH has funded three projects to help develop co-robots that can assist researchers, patients, and clinicians.

A Co-Robotic Navigation Aid for the Visually Impaired: The goal is to develop a co-robotic cane for the visually impaired that has enhanced navigation capabilities and that can relay critical information about the environment to its user. Using computer vision, the proposed cane will be able to recognize indoor structures such as stairways and doors, as well as detect potential obstacles. Using an intuitive human-device interaction mechanism, the cane will then convey the appropriate travel direction to the user. In addition to increasing mobility for the visually impaired and thus quality of life, methods developed in the creation of this technology could lead to general improvements in the autonomy of small robots and portable robotics that have many applications in military surveillance, law enforcement, and search and rescue efforts. Cang Ye, Ph.D., University of Arkansas at Little Rock (co-funded by the National Institute of Biomedical Imaging and Bioengineering [NIBIB] and the National Eye Institute).

MRI-Guided Co-Robotic Active Catheter: Atrial fibrillation is an irregular heartbeat that can increase the risk of stroke and heart disease. By purposefully ablating (destroying) specific areas of the heart in a controlled fashion, the propagation of irregular heart activity can be prevented. This is generally achieved by threading a catheter with an electrode at its tip through a vein in the groin until it reaches the patient’s heart. However, the constant movement of the heart as well as unpredictable changes in blood flow can make it difficult to maintain consistent contact with the heart during the ablation procedure, occasionally resulting in too large or too small of a lesion. The aim is to develop a co-robotic catheter that uses novel robotic planning strategies to compensate for physiological movements of the heart and blood and that can be used while a patient undergoes MRI — an imaging method used to take pictures of soft tissues in the body such as the heart. By combining state-of-the art robotics with high-resolution, real-time imaging, the co-robotic catheter could significantly increase the accuracy and repeatability of atrial fibrillation ablation procedures. M. Cenk Cavusoglu, Ph.D., Case Western Reserve University, Cleveland (funded by NIBIB).

Novel Platform for Rapid Exploration of Robotic Ankle Exoskeleton Control: Wearable robots, such as powered braces for the lower extremities, can improve mobility for individuals with impaired strength and coordination due to aging, spinal cord injury, cerebral palsy, or stroke. However, methods for determining the optimal design of an assistive device for use within a specific patient population are lacking. This project proposes to create an experimental platform for an assistive ankle robot to be used in patients recovering from stroke. The platform will allow investigators to systematically test various robotic control methods and to compare them based on measurable physiological outcomes. Results from these tests will provide evidence for making more effective, less expensive, and more manageable assistive technologies. Stephen G. Sawicki, Ph.D., North Carolina State University, Raleigh; Steven Collins, Ph.D., Carnegie Mellon University, Pittsburgh (co-funded by the National Institute of Nursing Research and NSF).
These projects are supported by grants EB018117-01, EB018108-01, and NR014756-01 from the National Institute of Biomedical Imaging and Bioengineering (NIBIB), the National Eye Institute (NEI), and the National Institute of Nursing Research (NINR), and by award #1355716 from the National Science Foundation.
When you experience something, neurons in the brain send chemical signals called neurotransmitters across synapses to receptors on other neurons. How well that process unfolds determines how you comprehend the experience and what behaviors might follow. In people with Fragile X syndrome, a third of whom are eventually diagnosed with Autism Spectrum Disorder, that process is severely hindered, leading to intellectual impairments and abnormal behaviors.
In a study published in the online journal PLoS One, a team of UNC School of Medicine researchers led by pharmacologist C.J. Malanga, MD, PhD, describes a major reason why current medications only moderately alleviate Fragile X symptoms. Using mouse models, Malanga discovered that three specific drugs affect three different kinds of neurotransmitter receptors that all seem to play roles in Fragile X. As a result, current Fragile X drugs have limited benefit because most of them only affect one receptor.
Nearly one million people in the United States have Fragile X Syndrome, which is the result of a single mutated gene called FMR1. In people without Fragile X, the gene produces a protein that helps maintain the proper strength of synaptic communication between neurons. In people with Fragile X, FMR1 doesn’t produce the protein, the synaptic connection weakens, and there’s a decrease in synaptic input, leading to mild to severe learning disabilities and behavioral issues, such as hyperactivity, anxiety, and sensitivity to sensory stimulation, especially touch and noise.
More than two decades ago, researchers discovered that – in people with mental and behavioral problems – a receptor called mGluR5 could not properly regulate the effect of the neurotransmitter glutamate. Since then, pharmaceutical companies have been trying to develop drugs that target glutamate receptors. “It’s been a challenging goal,” Malanga said. “No one so far has made it work very well, and kids with Fragile X have been illustrative of this.”
But there are other receptors that regulate other neurotransmitters in similar ways to mGluR5. And there are drugs already available for human use that act on those receptors. So Malanga’s team checked how those drugs might affect mice in which the Fragile X gene has been knocked out.
By electrically stimulating specific brain circuits, Malanga’s team first learned how the mice perceived reward. The mice quickly learned that pressing a lever earned them a mild, rewarding electrical stimulation. The team then administered drugs that act on the same reward circuitry to see how each one affected the mice’s response patterns and other behaviors.
His team studied one drug that blocked dopamine receptors, another drug that blocked mGluR5 receptors, and another drug that blocked mAChR1, or M1, receptors. Three different types of neurotransmitters – dopamine, glutamate, and acetylcholine – act on those receptors. And there were big differences in how sensitive the mice were to each drug.
“Turns out, based on our study and a previous study we did with my UNC colleague Ben Philpot, that Fragile X mice and Angelman Syndrome mice are very different,” Malanga said. “And how the same pharmaceuticals act in these mouse models of Autism Spectrum Disorder is very different.”
Malanga’s finding suggests that not all people with Fragile X share the same biological hurdles. The same is likely true, he said, for people with other autism-related disorders, such as Rett syndrome and Angelman syndrome.
“Fragile X kids likely have very different sensitivities to prescribed drugs than do other kids with different biological causes of autism,” Malanga said.
Treating Alzheimer’s disease in the future will require early diagnosis, which is not yet possible. Now researchers at higher education institutions including Linköping University have identified six proteins in spinal fluid that can be used as markers for the illness.
Alzheimer’s causes great suffering and has a one hundred percent fatality rate. The breakdown of brain cells has been in progress for ten years or more by the time symptoms begin to appear. Currently there is no treatment that can stop the process.

(Image: Human neuroblastoma with cell nucleus in blue; beta amyloid as red aggregates within green-tinted lysosomes. Photo: Lotta Agholme.)
Most researchers now agree that one cause of the illness is toxic accumulations – plaques – of the beta amyloid protein. In a healthy brain, the cells are cleansed of such surplus products through lysosomes, the cells’ “waste disposal facilities” (green in the picture).
“In victims of Alzheimer’s, something happens to the lysosomes so that they can’t manage to take care of the surplus of beta amyloid. They fill up with junk that normally is broken down into its component parts and recycled,” says Katarina Kågedal, reader in Experimental Pathology at Linköping University. She led the study that is now being published in Neuromolecular Medicine.
The researchers’ hypothesis was that these changes in the brain’s lysosomal network could be reflected in the spinal fluid, which surrounds the brain’s various parts and drains down into the spinal column. They studied spinal fluid samples from 20 Alzheimer’s patients and an equal number of healthy control subjects. The screening was aimed at 35 proteins associated with the lysosomal network.
“Six of these had clearly increased in the patients; none of them were previously known as markers for Alzheimer’s,” says Kågedal.
Her hope is that the group’s discovery will contribute to early diagnoses of the illness, which is necessary in the first stage in order to be able to begin reliable clinical tests of candidates for drugs. But perhaps the six lysosomal proteins could also be “drug targets” – targets for developing drugs.
“It may be a question of strengthening protection against plaque formation or reactivating the lysosomes so that they manage to break down the plaque,” Kågedal says.
The study was conducted on 20 anonymised, archived spinal fluid samples, and the results were subsequently confirmed in an independent sample set of equal size. All samples were provided by the Laboratory for Clinical Chemistry at Sahlgrenska University Hospital.
Even in people who don’t have diabetes or high blood sugar, higher blood sugar levels are associated with a greater likelihood of memory problems, according to a new study published in the October 23, 2013, online issue of Neurology®, the medical journal of the American Academy of Neurology.

The study involved 141 people with an average age of 63 who did not have diabetes or pre-diabetes, which is also called impaired glucose tolerance. People who were overweight, who drank more than three and a half servings of alcohol per day, or who had memory and thinking impairment were excluded from the study.
The participants’ memory skills were tested, along with their blood glucose, or sugar, levels. Participants also had brain scans to measure the size of the hippocampus area of the brain, which plays an important role in memory.
People with lower blood sugar levels were more likely to have better scores on the memory tests. On a test where participants needed to recall a list of 15 words 30 minutes after hearing them, recalling fewer words was associated with higher blood sugar levels. For example, an increase of about 7 mmol/mol of a long-term marker of glucose control called HbA1c went along with recalling 2 fewer words. People with higher blood sugar levels also had smaller volumes in the hippocampus.
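The reported relationship can be read as a simple linear association. The sketch below illustrates it in code; only the slope (about 2 fewer words recalled per ~7 mmol/mol increase in HbA1c) comes from the study, while the baseline HbA1c and recall values are hypothetical assumptions added for illustration.

```python
# Illustrative linear model of the reported HbA1c / word-recall association.
# Only the slope (~2 fewer words per ~7 mmol/mol HbA1c) reflects the study;
# the baseline values are hypothetical.

WORDS_PER_MMOL = -2.0 / 7.0  # ~2 fewer words recalled per 7 mmol/mol HbA1c


def predicted_recall(hba1c_mmol_mol, baseline_hba1c=35.0, baseline_recall=10.0):
    """Predicted words recalled (out of 15) under a simple linear model.

    baseline_hba1c and baseline_recall are illustrative assumptions,
    not values reported by the study; output is clamped to the 0-15
    range of the word-list test.
    """
    delta = hba1c_mmol_mol - baseline_hba1c
    return max(0.0, min(15.0, baseline_recall + WORDS_PER_MMOL * delta))


# A 7 mmol/mol increase corresponds to recalling about 2 fewer words:
print(predicted_recall(35.0))  # 10.0
print(predicted_recall(42.0))  # 8.0
```

This is only a back-of-the-envelope restatement of the reported association, not the regression model the authors fit, which adjusted for other variables.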
“These results suggest that even for people within the normal range of blood sugar, lowering their blood sugar levels could be a promising strategy for preventing memory problems and cognitive decline as they age,” said study author Agnes Flöel, MD, of Charité University Medicine in Berlin, Germany. “Strategies such as lowering calorie intake and increasing physical activity should be tested.”
Innate ability to identify quantities previews future mathematics performance

Babies who are good at telling the difference between large and small groups of items even before learning how to count are more likely to do better with numbers in the future, according to new research from the Duke Institute for Brain Sciences.
The use of Arabic numerals to represent different values is unique to humans. But we aren’t born with this skill: infants don’t yet have the words to count to 10. Scientists have therefore hypothesized that the rudimentary sense of number in infants is the foundation for higher-level math understanding.
A new study, appearing online in the Oct. 21 Proceedings of the National Academy of Sciences, suggests that children do, in fact, tap into this innate numerical ability when learning symbolic mathematical systems. The Duke researchers found that the strength of an infant’s inborn number sense can be predictive of the child’s future mathematical abilities.
"When children are acquiring the symbolic system for representing numbers and learning about math in school, they’re tapping into this primitive number sense," said Elizabeth Brannon, Ph.D., a professor of psychology and neuroscience, who led the study. "It’s the conceptual building block upon which mathematical ability is built."
Brannon explained that babies come into the world with a rudimentary understanding referred to as a primitive number sense. When looking at two collections of objects, primitive number sense allows them to identify which set is numerically larger even without verbal counting or using Arabic numerals. For example, a person instinctively knows a group of 15 strawberries is more than six oranges, just by glancing.
Understanding how infants and young children conceptualize and understand number can lead to the development of new mathematics education strategies, said Brannon’s colleague, Duke psychology and neuroscience graduate student Ariel Starr. In particular, this knowledge can be used to design interventions for young children who have trouble learning mathematics symbols and basic methodologies.
To test for primitive number sense, Brannon and Starr studied 48 six-month-old infants to see whether they could recognize numerical changes, capitalizing on the interest most babies show in things that change. They placed each baby in front of two screens: one that always showed the same number of dots (e.g., eight), changing in size and position, and another that switched between two different numerical values (e.g., eight and 16 dots). All the arrays of dots changed frequently in size and position. In this task, babies that could tell the difference between the two numerical values (e.g., eight and 16) looked longer at the numerically changing screen.
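Looking-time preference in tasks like this is commonly summarized as the proportion of total looking time spent on the numerically changing screen, with scores above 0.5 indicating a preference. The minimal sketch below shows that computation; the function name and the example looking times are illustrative assumptions, not values from the study.

```python
def preference_score(time_changing_s, time_constant_s):
    """Proportion of total looking time spent on the numerically
    changing screen. Scores above 0.5 indicate the infant looked
    longer at the changing display; 0.5 indicates no preference."""
    total = time_changing_s + time_constant_s
    if total == 0:
        raise ValueError("no looking time recorded")
    return time_changing_s / total


# Hypothetical infant: 36 s at the changing screen, 24 s at the constant one.
print(preference_score(36.0, 24.0))  # 0.6
```

The study's later analyses relate individual differences in such preference scores to the children's performance at age 3.5; the exact scoring formula used by the authors may differ from this simple proportion.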
Brannon and Starr then tested the same children at 3.5 years of age with a non-symbolic number comparison game. The children were shown two different arrays and asked to choose which one had more dots without counting them. In addition, the children took a standardized math test scaled for pre-schoolers, as well as a standardized IQ test. Finally, the researchers gave the children a simple verbal task to identify the largest number word each child could concretely understand.
"We found that infants with higher preference scores for looking at the numerically changing screen had better primitive number sense three years later compared to those infants with lower scores," Starr said. "Likewise, children with higher scores in infancy performed better on standardized math tests."
Brannon said the findings point to a real connection between symbolic math and quantitative abilities that are present in infancy before education takes hold and shapes our mathematical abilities.
"Our study shows that infant number sense is a predictor of symbolic math," Brannon said. "We believe that when children learn the meaning of number words and symbols, they’re likely mapping those meanings onto pre-verbal representations of number that they already have in infancy," she said.
"We can’t measure a baby’s number sense ability at 6 months and know how they’ll do on their SATs," Brannon added. "In fact, our infant task only explains a small percentage of the variance in young children’s math performance. But our findings suggest that there is cognitive overlap between primitive number sense and symbolic math. These are fundamental building blocks."
Buck Institute study provides insight for new therapeutics that target the interaction between ApoE4 and a Sirtuin protein
The major genetic risk factor for Alzheimer’s disease (AD), present in about two-thirds of people who develop the disease, is ApoE4, the cholesterol-carrying protein that about a quarter of us are born with. But one of the unsolved mysteries of AD is how ApoE4 causes the risk for the incurable, neurodegenerative disease. In research published this week in The Proceedings of the National Academy of Sciences, researchers at the Buck Institute found a link between ApoE4 and SirT1, an “anti-aging protein” that is targeted by resveratrol, present in red wine.
The Buck researchers found that ApoE4 causes a dramatic reduction in SirT1, which is one of seven human Sirtuins. Lead scientists Rammohan Rao, PhD, and Dale Bredesen, MD, founding CEO of the Buck Institute, say the reduction was found both in cultured neural cells and in brain samples from patients with ApoE4 and AD. “The biochemical mechanisms that link ApoE4 to Alzheimer’s disease have been something of a black box. However, recent work from a number of labs, including our own, has begun to open the box,” said Bredesen.
The Buck group also found that the abnormalities associated with ApoE4 and AD, such as the creation of phospho-tau and amyloid-beta, could be prevented by increasing SirT1. They have identified drug candidates that exert the same effect. “This research offers a new type of screen for Alzheimer’s prevention and treatment,” said Rammohan V. Rao, PhD, co-author of the study, and an Associate Research Professor at the Buck. “One of our goals is to identify a safe, non-toxic treatment that could be given to anyone who carries the ApoE4 gene to prevent the development of AD.”
In particular, the researchers discovered that the reduction in SirT1 was associated with a change in the way the amyloid precursor protein (APP) is processed. Rao said that ApoE4 favored the formation of the amyloid-beta peptide that is associated with the sticky plaques that are one of the hallmarks of the disease. He said with ApoE3 (which confers no increased risk of AD), there was a higher ratio of the anti-Alzheimer’s peptide, sAPP alpha, produced, in comparison to the pro-Alzheimer’s amyloid-beta peptide. This finding fits very well with the reduction in SirT1, since overexpressing SirT1 has previously been shown to increase ADAM10, the protease that cleaves APP to produce sAPP alpha and prevent amyloid-beta.
AD affects over 5 million Americans – there are no treatments known to cure the disease, or even halt the progression of symptoms, which include loss of memory and language. Preventive treatments are particularly needed for the 2.5 percent of the population who carry two copies of the ApoE4 gene, which puts them at roughly 10-fold higher risk of developing AD, as well as for the 25 percent of the population with a single copy. The group hopes that the current work will identify simple, safe therapeutics that can be given to ApoE4 carriers to prevent the development of Alzheimer’s disease.
Poor sleep quality may impact Alzheimer’s disease onset and progression. This is according to a new study led by researchers at the Johns Hopkins Bloomberg School of Public Health who examined the association between sleep variables and a biomarker for Alzheimer’s disease in older adults. The researchers found that reports of shorter sleep duration and poorer sleep quality were associated with a greater β-Amyloid burden, a hallmark of the disease. The results are featured online in the October issue of JAMA Neurology.
“Our study found that among older adults, reports of shorter sleep duration and poorer sleep quality were associated with higher levels of β-Amyloid measured by PET scans of the brain,” said Adam Spira, PhD, lead author of the study and an assistant professor with the Bloomberg School’s Department of Mental Health. “These results could have significant public health implications as Alzheimer’s disease is the most common cause of dementia, and approximately half of older adults have insomnia symptoms.”
Alzheimer’s disease is an irreversible, progressive brain disease that slowly destroys memory and thinking skills. According to the National Institutes of Health, as many as 5.1 million Americans may have the disease, with first symptoms appearing after age 60. Previous studies have linked disturbed sleep to cognitive impairment in older people.
In a cross-sectional study of adults from the neuroimaging substudy of the Baltimore Longitudinal Study of Aging, with an average age of 76, the researchers examined the association between self-reported sleep variables and β-amyloid deposition. Reported sleep duration ranged from no more than five hours to more than seven hours per night. β-amyloid deposition was measured with the Pittsburgh compound B tracer and PET (positron emission tomography) scans of the brain. Reports of shorter sleep duration and lower sleep quality were both associated with greater β-amyloid buildup.
“These findings are important in part because sleep disturbances can be treated in older people. To the degree that poor sleep promotes the development of Alzheimer’s disease, treatments for poor sleep or efforts to maintain healthy sleep patterns may help prevent or slow the progression of Alzheimer’s disease,” said Spira. He added that the findings cannot demonstrate a causal link between poor sleep and Alzheimer’s disease, and that longitudinal studies with objective sleep measures are needed to further examine whether poor sleep contributes to or accelerates the disease.
In a biological quirk that promises to provide researchers with a new approach for studying and potentially treating Fragile X syndrome, scientists at the University of Massachusetts Medical School (UMMS) have shown that knocking out a gene important for messenger RNA (mRNA) translation in neurons reverses memory deficits and reduces behavioral symptoms in a mouse model of this prevalent human neurological disorder. These results, published today in Nature Medicine, suggest that the prime cause of Fragile X syndrome may be a translational imbalance that results in elevated protein production in the brain. Restoring this balance may be necessary for normal neurological function.
"Biology works in strange ways," said Joel Richter, PhD, professor of molecular medicine at UMMS and senior author on the study. "We corrected one genetic mutation with another, which in effect showed that two wrongs make a right. Mutations in each gene result in impaired brain function, but in our studies, we found that mutations in both genes result in normal brain function. This sounds counter-intuitive, but in this case that seems to be what has happened."
Fragile X syndrome, the most common form of inherited mental retardation and the most frequent single-gene cause of autism, is a genetic condition resulting from a CGG repeat expansion in the DNA sequence of the Fragile X (Fmr1) gene required for normal neurological development. People with Fragile X suffer from intellectual disability as well as behavioral and learning challenges. Depending on the length of the CGG repeat, intellectual disabilities can range from mild to severe.
While scientists have identified the genetic mutation that causes Fragile X, on a molecular level they still don’t know much about how the disease works or what precisely goes wrong in the brain as a result. What is known is that the Fmr1 gene codes for the Fragile X protein (FMRP). This protein probably has several functions throughout the neuron but its main activity is to repress the translation of as many as 1,000 different mRNAs. By doing this, FMRP controls synaptic plasticity and higher brain function. Mice without the Fragile X gene, for instance, have a 15 to 20 percent overall elevation in neural protein production. It is thought that the inability to repress mRNA translation and the resulting increase in neural proteins may somehow hamper normal synaptic function in patients with Fragile X. But because FMRP binds so many mRNAs, and some proteins become more elevated than others, parsing which mRNA or combination of mRNAs is responsible for Fragile X pathology is a daunting task.
From Frog Egg to Fragile X
For years, Dr. Richter had been studying how translation, the process in which cellular ribosomes create proteins, went from dormant to active in frog eggs. He discovered the key gene controlling this process, the RNA binding protein CPEB. In 1998, Richter found the CPEB protein in the rodent brain where it played an important role in regulating how synapses talk to each other. At this point, his work began to move from exploring the role of CPEB in the developmental biology of the frog to how the CPEB protein impacted learning and memory. A serendipitous research symposium with colleagues at Cold Spring Harbor got him thinking about CPEB and Fragile X syndrome.
"Here I was, an outsider, a molecular biologist who had worked for years with frog eggs, in the same room with neurobiologists and neurologists, when they started talking about Fragile X syndrome and translational activity," said Richter. "It got me thinking that the CPEB protein might be a path to restoring the translational imbalance they were discussing."
Richter knew that CPEB stimulated translation and that FMRP repressed it. He also knew that animal models lacking the CPEB protein had memory deficits and that both proteins bound many of the same mRNAs – the overlap may be as high as 33 percent. The thought was that taking away a protein that stimulated translation might counterbalance the loss of the repressor FMRP, thereby restoring translational homeostasis in the brain and normal neurological function.
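The target-set overlap described above is, at bottom, simple set arithmetic. As a toy illustration – the transcript names below are invented, not the study’s actual target lists – the shared fraction can be computed like this:

```python
# Toy illustration with made-up transcript IDs: the fraction of FMRP-bound
# mRNAs that are also bound by CPEB, analogous to the ~33 percent overlap
# described in the text.
fmrp_targets = {"Map1b", "Arc", "Camk2a", "Psd95", "Shank3", "Grin2a"}
cpeb_targets = {"Map1b", "Arc", "Ccnb1", "Camk2a"}

shared = fmrp_targets & cpeb_targets            # mRNAs bound by both proteins
overlap_fraction = len(shared) / len(fmrp_targets)
print(sorted(shared))               # ['Arc', 'Camk2a', 'Map1b']
print(round(overlap_fraction, 2))   # 0.5
```

With real binding-target lists in place of these toy sets, the same two lines of set arithmetic would yield overlap figures like the one the researchers cite.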
"It was one of those kind of goofy ‘what if’ sort of things," said Richter.
To test his hypothesis, Richter developed a double knockout mouse model lacking both the Fmr1 gene, whose loss causes Fragile X, and the CPEB gene. When the team began measuring for Fragile X pathologies, what they found was almost too good to be true.
"We measured a host of factors, biochemical, morphological, electrophysiological and behavioral phenotypes," said Richter. "And we kept finding the same thing. By knocking out both the FMRP and CPEB genes we were able to restore levels of protein synthesis to normal and corrected the disease characteristics of the Fragile X mice, making them almost indistinguishable from wild type mice."
Most importantly, tests to evaluate short-term memory in the double knockout mice also showed normal results, with no indications of Fragile X pathology. This suggested an experiment to test whether CPEB might serve as a therapeutic target for Fragile X patients. Richter and colleagues injected adult Fragile X mice with a lentivirus expressing a small RNA that knocks down CPEB in the hippocampus, a brain region important for short-term memory. Subsequent tests showed improved short-term memory in these mice, indicating that at least this one characteristic of Fragile X syndrome, which is generally thought to be a developmental disorder, can be reversed in adults.
"People with Fragile X make too much protein," said Richter. "By using CPEB to recalibrate the cellular machinery that makes protein we’ve shown that tamping down this process has a profoundly good impact on mouse models with Fragile X. It may be that a similar approach could be beneficial for kids with this disease."
The next step for Richter and colleagues is to determine which, of the more than 300 mRNAs that both CPEB and FMRP bind to, contribute to Fragile X syndrome and how. They’ll also begin looking at small molecules and other avenues that, like the ablation of the CPEB protein, might be able to slow down the synthesis of protein. “There are several small molecules that we know affect the translational apparatus,” Richter said. “Some cross the blood/brain barrier, some are toxic, and some are not. We’d like to investigate those.”
"This is another great example of how basic science translates to human disease," said Richter. "If we had started out looking at the human brain, not knowing about the CPEB protein and its role in translational activity, we wouldn’t have had any idea where to start or what to look for. But because we started out in the frog, where things are much easier to see, and because more often than not these processes are conserved, we’ve learned something new and totally unexpected that may have a profound impact on human disease."
Many negative effects of drinking, such as the transition into heavy alcohol use, often take place during adolescence and can contribute to long-term negative health outcomes as well as the development of alcohol use disorders. A new study of adolescent drinking and its genetic and environmental influences has found that different trajectories of adolescent drinking are preceded by discernible gene-environment interactions, specifically between the mu-opioid receptor (OPRM1) genotype and parental rule-setting.

Results will be published in the March 2014 issue of Alcoholism: Clinical & Experimental Research and are currently available at Early View.
"Heavy drinking in adolescence can lead to alcohol-related problems and alcohol dependence later in life," said Carmen Van der Zwaluw, an assistant professor at Radboud University Nijmegen as well as corresponding author for the study. "It has been estimated that 40 percent of adult alcoholics were already heavy drinkers during adolescence. Thus, tackling heavy drinking in adolescence may prevent later alcohol-related problems."
Van der Zwaluw said that both the dopamine receptor D2 (DRD2) and OPRM1 genes are known to play a large role in the neuro-reward mechanisms associated with the feelings of pleasure that result from drinking, as well as from eating, having sex, and the use of other drugs.
"Different genotypes may result in different neural responses to alcohol or different motivations to drink," she said. "For example, OPRM1 G-allele carriers have been shown to experience more positive feelings after drinking, and to drink more often to enhance their mood than people with the OPRM1 AA genotype. In addition, we chose to examine the influence of parental alcohol-specific rules because research has shown that, more than general measures of parental monitoring, alcohol-specific rule-setting has a considerable and consistent effect on adolescents’ drinking behavior."
Van der Zwaluw and her colleagues used data from the Dutch Family and Health study, which consisted of six yearly waves beginning in 2002 and included only adolescents born in the Netherlands. The 596 adolescents in the final sample (50 percent boys) were on average 14.3 years old at Time 1 (T1), 15.3 at T2, 16.3 at T3, 17.7 at T4, 18.7 at T5, and 19.7 at T6. Saliva samples were collected in the fourth wave to enable genetic testing. Participants were subsequently divided into three distinct groups of adolescent drinkers: light drinkers (n=346), moderate drinkers (n=178), and heavy drinkers (n=72).
"It was found that adolescent drinkers could be discriminated into three groups: light, moderate, and heavy drinkers," said Van der Zwaluw. "Comparisons between these three groups showed that light drinkers were more often carriers of the OPRM1 AA ‘non-risk’ genotype, and reported stricter parental rules than moderate drinkers. In the heavy drinking group, the G-allele carriers, but not those with the AA-genotype, were largely affected by parental rules: more rules resulted in lower levels of alcohol use."
Van der Zwaluw explained that although evidence for the genetic liability of heavy alcohol use has been shown repeatedly, debate continues over which genes are responsible for this liability, what the causal mechanisms are, and whether and how it interacts with environmental factors. “Longitudinal studies examining the development of alcohol use over time, in a stage of life that often precedes serious alcohol-related problems, can shed more light on these issues,” she said. “This paper confirms important findings of others, showing an association of the OPRM1 G-allele with adolescent alcohol use and an effect of parental rule-setting. Additionally, it adds to the literature by demonstrating that, depending on genotype, adolescents are differently affected by parental rules.”
The bottom line is that parents can be a positive influence, Van der Zwaluw noted. “This study shows that strict parental rules prevent youth from drinking more alcohol,” she said. “However, one should keep in mind that every adolescent responds differently to parenting efforts, and that the effects of parenting may depend on the genetic make-up of the adolescent.”

Features like the wrinkles on your forehead and the way you move may reflect your overall health and risk of dying, according to recent health research. But do physicians consider such details when assessing patients’ overall health and functioning?
In a survey of approximately 1,200 Taiwanese participants, Princeton University researchers found that interviewers — who were not health professionals but were trained to administer the survey — provided health assessments that were related to a survey participant’s risk of dying, in part because they were attuned to facial expressions, responsiveness and overall agility.
The researchers report in the journal Epidemiology that these assessments were even stronger predictors of mortality than assessments made by physicians or by the participants themselves. The findings show that survey interviewers, who typically spend a fair amount of time observing participants, can glean important information about participants’ health through careful observation.
"Your face and body reveal a lot about your life. We speculate that a lot of information about a person’s health is reflected in their face, movements, speech and functioning, as well as in the information explicitly collected during interviews," said Noreen Goldman, Hughes-Rogers Professor of Demography and Public Affairs in the Woodrow Wilson School.
Together with lead author of the paper and Princeton Ph.D. candidate Megan Todd, Goldman analyzed data collected by the Social Environment and Biomarkers of Aging Study (SEBAS). This study was designed by Goldman and co-investigator Maxine Weinstein at Georgetown University to evaluate the linkages among the social environment, stress and health. Beginning in 2000, SEBAS conducted extensive home interviews, collected biological specimens and administered medical examinations with middle-aged and older adults in Taiwan. Goldman and Todd used the 2006 wave of this study, which included both interviewer and physician assessments, for their analysis. They also included death registration data through 2011 to ascertain the survival status of those interviewed.
The survey used in the study included detailed questions regarding participants’ health conditions and social environment. Participants’ physical functioning was evaluated through tasks that determined, for example, their walking speed and grip strength. Health assessments were elicited from participants, interviewers and physicians on identical five-point scales by asking “Regarding your/the respondent’s current state of health, do you feel it is excellent (5), good (4), average (3), not so good (2) or poor (1)?”
Participants answered this question near the beginning of the interview, before other health questions were asked. Interviewers assessed the participants’ health at the end of the survey, after administering the questionnaire and evaluating participants’ performance on a set of tasks, such as walking a short distance and getting up and down from a chair. And physicians — who were hired by the study and were not the participants’ primary care physicians — provided their assessments after physical exams and reviews of the participants’ medical histories. (Study investigators did not provide special guidance about how to rate overall health to any group.)
In order to understand the many variables that go into predicting mortality, Goldman and Todd factored into their statistical models such socio-demographic variables as sex, place of residence, education, marital status, and participation in social activities. They also considered chronic conditions, psychological wellbeing (such as depressive symptoms) and physical functioning to account for a fuller picture of health.
"Mortality is easy to measure because we have death records indicating when a person has died," Goldman said. "Overall health, on the other hand, is very complicated to measure but obviously very important for addressing health policy issues."
Two unexpected results emerged from Goldman and Todd’s analysis. The first: physicians’ ratings proved to be weak predictors of survival. “The physicians performed a medical exam equivalent to an annual physical exam, plus an abdominal ultrasound; they have specialized knowledge regarding health conditions,” Goldman explained. “Given access to such information, we anticipated stronger, more accurate predictions of death,” she said. “These results call into question previous studies’ assumptions that physicians’ ‘objective health’ ratings are superior to ‘subjective’ ratings provided by the survey participants themselves.”
In a second surprising finding, the team found that interviewers’ ratings were considerably more powerful for predicting mortality than self-ratings. This is likely, Goldman said, because interviewers considered respondents’ movements, appearance and responsiveness in addition to the detailed health information gathered during the interviews. Also, Goldman posits, interviewer ratings are probably less affected by bias than self-reports.
"The ‘self-rated health’ question is religiously used by health researchers and social scientists, and, although it has been shown to predict mortality, it suffers from many biases. People use it because it’s easy and simple,” Goldman continued. "But the problem with self-rated health is that we have no idea what reference group the respondent is using when evaluating his or her own health. Different ethnic and racial groups respond differently as do varying socioeconomic groups. We need other simple ways to rate individual health instead of relying so heavily on self-rated health."
One way, Goldman suggests, is by including interviewer ratings in surveys along with self-ratings: “This is a straightforward and cost-free addition to a questionnaire that is likely to improve our measurement of health in any population,” Goldman said.
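One standard way to quantify how well a five-point rating like those above predicts later mortality is a concordance statistic, equivalent to the area under the ROC curve: the probability that a randomly chosen survivor received a higher rating than a randomly chosen decedent, with ties counted as half. The sketch below uses invented ratings and outcomes, not SEBAS data, purely to show the computation:

```python
# Toy illustration with invented data: concordance (AUC) of a five-point
# health rating for predicting death, ties counted as half a concordant pair.
def concordance(ratings, died):
    dead = [r for r, d in zip(ratings, died) if d]
    alive = [r for r, d in zip(ratings, died) if not d]
    pairs = len(dead) * len(alive)
    score = sum((a > d_) + 0.5 * (a == d_) for d_ in dead for a in alive)
    return score / pairs

died        = [1, 1, 1, 0, 0, 0, 0, 0]  # hypothetical survival outcomes
self_rated  = [3, 2, 4, 3, 4, 2, 5, 4]  # hypothetical self-assessments (1-5)
interviewer = [2, 1, 3, 4, 4, 3, 5, 4]  # hypothetical interviewer ratings

print(round(concordance(self_rated, died), 2))   # 0.67
print(round(concordance(interviewer, died), 2))  # 0.97
```

A value of 0.5 means the rating carries no predictive information. In this invented example the “interviewer” ratings separate decedents from survivors better than the “self” ratings, mirroring the pattern the study reports.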
Teaching two-legged robots a stable, robust “human” way of walking – this is the goal of the international research project “KoroiBot” with scientists from seven institutions from Germany, France, Israel, Italy and the Netherlands. The experts from the areas of robotics, mathematics and cognitive sciences want to study human locomotion as exactly as possible and transfer this onto technical equipment with the assistance of new mathematical processes and algorithms. The European Union is financing the three-year research project that started in October 2013 with approx. EUR 4.16 million. The scientific coordinator is Prof. Dr. Katja Mombaur from Heidelberg University.

Whether as rescuers in disaster areas, household helpers or as “colleagues” in modern work environments: there are numerous possible areas of deployment for humanoid robots in the future. “One of the major challenges on the way is to enable robots to move on two legs in different situations, without an accident – in spite of unknown terrain and also with possible disturbances,” explains Prof. Mombaur, who heads the working group “Optimisation in Robotics and Biomechanics” at Heidelberg University’s Interdisciplinary Center for Scientific Computing (IWR).
In the KoroiBot project, the researchers will study how humans walk – e.g. on stairs and slopes, on soft and slippery ground, or over beams and seesaws – and create mathematical models of that motion. Besides developing new optimisation and learning processes for walking on two legs, they aim to implement these methods on existing robots. The research results will also inform design principles for the next generation of robots.
Besides Prof. Mombaur’s group, the working group “Simulation and Optimisation” is also involved in the project at the IWR. The Heidelberg scientists will investigate the way movement of humans and robots can be turned into mathematical models. Furthermore, the teams want to create optimised walking movements for different demands and develop new model-based control algorithms. Just under EUR 900,000 of the European Union funding is being channelled to Heidelberg.
Partners in the international consortium are, besides Heidelberg University, leading institutions in the field of robotics. These include the Karlsruhe Institute of Technology (KIT), the Centre National de la Recherche Scientifique (CNRS) with three laboratories, the Istituto Italiano di Tecnologia (IIT) and the Delft University of Technology in the Netherlands. Experts from the University of Tübingen and the Weizmann Institute of Science in Israel will contribute from the angle of cognitive sciences.
Beyond robotics itself, the scientists expect possible applications in medicine, e.g. for controlling intelligent artificial limbs. They see further areas of application in designing and regulating exoskeletons, as well as in computer animation and game design.
Joint research from the University of Alabama at Birmingham Department of Psychology and Auburn University indicates that brain scans show signs of autism that could eventually supplement behavior-based diagnosis and support effective early intervention therapies. The findings appear online today in Frontiers in Human Neuroscience as part of a special issue on brain connectivity in autism.

“This research suggests brain connectivity as a neural signature of autism and may eventually support clinical testing for autism,” said Rajesh Kana, Ph.D., associate professor of psychology and the project’s senior researcher. “We found the information transfer between brain areas, causal influence of one brain area on another, to be weaker in autism.”
The investigators found that brain connectivity data from 19 paths in brain scans predicted whether the participants had autism, with an accuracy rate of 95.9 percent.
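An accuracy figure like the one above comes from training a classifier on connectivity features and testing it on held-out participants. The sketch below is a deliberately simplified stand-in, not the study’s method: the numbers are invented, the feature count is reduced from 19 paths to three, and a basic leave-one-out nearest-centroid rule replaces the actual analysis.

```python
# Illustrative sketch only: leave-one-out nearest-centroid classification of
# synthetic "connectivity strength" vectors (invented numbers, 3 paths).

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# rows: per-participant strengths along three hypothetical connectivity paths
autism  = [[0.20, 0.30, 0.10], [0.25, 0.20, 0.15], [0.30, 0.35, 0.20]]
control = [[0.60, 0.70, 0.50], [0.55, 0.65, 0.60], [0.70, 0.60, 0.55]]

labeled = [(v, "autism") for v in autism] + [(v, "control") for v in control]
correct = 0
for i, (v, label) in enumerate(labeled):
    rest = [lv for j, lv in enumerate(labeled) if j != i]  # hold one out
    cents = {g: centroid([x for x, lab in rest if lab == g])
             for g in ("autism", "control")}
    pred = min(cents, key=lambda g: dist2(v, cents[g]))    # nearest centroid
    correct += pred == label

print(f"leave-one-out accuracy: {correct / len(labeled):.1%}")
```

On these well-separated synthetic groups the rule classifies every held-out participant correctly; with real, noisier data the accuracy drops, which is why figures like 95.9 percent are notable.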
Kana, working with a team that included Gopikrishna Deshpande, Ph.D., of Auburn University’s MRI Research Center, studied 15 high-functioning adolescents and adults with autism and 15 typically developing control participants, ages 16-34. All data were collected in Kana’s autism lab at UAB and then analyzed at Auburn using a novel connectivity method.
The study showed that adults with autism spectrum disorders processed social cues differently than typical controls, and it revealed disrupted brain connectivity that may underlie their difficulty in understanding social processes.
“We can see that connectivity between brain regions is consistently weaker in autism,” Kana said. “There’s a very clear difference.”
Participants in this study were asked to choose the most logical of three possible endings as they watched a series of comic strip vignettes while a functional MRI scanner measured brain activity.
The scenes included a glass about to fall off a table and a man enjoying the music of a street violinist and giving him a cash tip. Most participants in the autism group had difficulty finding a logical end to the violinist scenario, which required an understanding of emotional and mental states.
“We can see that the weaker connectivity hinders the cross-talk among brain regions in autism,” Kana said.
Kana plans to continue his research on autism.
“Over the next five to 10 years, our research is going in the direction of finding objective ways to supplement the diagnosis of autism with medical testing and testing the effectiveness of intervention in improving brain connectivity,” Kana said.
Autism is currently diagnosed through interviews and behavioral observation. Although autism can be diagnosed by 18 months, in practice most children are not diagnosed until ages 4-6, when they begin to face challenges in school or social settings.
“Parents usually have a longer road before getting a firm diagnosis for their child now,” Kana said. “You lose a lot of intervention time, which is so critical. Brain imaging may not be able to replace the current diagnostic measures; but if it can supplement them at an earlier age, that’s going to be really helpful.”
Video-based teaching helps teens with autism learn important social skills, and the method eventually could be used widely by schools with limited resources, a Michigan State University researcher says.
The diagnosis rate for Autism Spectrum Disorder for 14- to 17-year-olds has more than doubled in the past five years, according to the Centers for Disease Control and Prevention. Yet previous research has found very few strategies for helping adolescents with autism develop skills needed to be successful, especially in group settings.
“Teaching social skills to adolescents with ASD has to be effective and practical,” said Joshua Plavnick, assistant professor of special education at MSU. “Using video-based group instruction regularly could promote far-reaching gains for students with ASD across many social behaviors.”
Plavnick developed group video teaching techniques with colleagues while a postdoctoral fellow at the University of North Carolina’s Frank Porter Graham Child Development Institute. Their findings are published in the research journal Exceptional Children.
Previous studies have shown many people with autism are more likely to pay attention when an innovative technology delivers information. Before Plavnick’s work, however, there were no investigations of video modeling as an option for teaching social skills to more than one adolescent with ASD at the same time.
The team recruited 13- to 17-year-old students with ASD and used laptops or iPads to offer group video instruction on social behaviors, such as inviting a peer to join an activity. One facilitator showed four students video footage of people helping one another clean up a mess, for example, and then gave them opportunities to practice the same skills in the classroom.
According to the researchers, the students demonstrated a rapid increase in the level of complex social behaviors each time video-based group instruction was used. Students sustained those social behaviors at high levels, even when the videos were used less often.
The students’ parents also completed anonymous surveys and indicated high levels of satisfaction. One reported their child started asking family members to play games together, a skill the teen had never before displayed at home.
Most schools do not have the staff resources to provide one-on-one help for students with autism. Video-based instruction, by contrast, can be used with a small group at once and has been shown to be effective.
“Video-based group instruction is important, given the often limited resources in schools that also face increasing numbers of students being diagnosed with ASD,” said Plavnick, who also has begun implementing the strategy as part of a daily high school-based program.
NIH-funded study suggests sleep clears brain of molecules associated with neurodegeneration

A good night’s rest may literally clear the mind. Using mice, researchers showed for the first time that the space between brain cells may increase during sleep, allowing the brain to flush out toxins that build up during waking hours. These results suggest a new role for sleep in health and disease. The study was funded by the National Institute of Neurological Disorders and Stroke (NINDS), part of the NIH.
“Sleep changes the cellular structure of the brain. It appears to be a completely different state,” said Maiken Nedergaard, M.D., D.M.Sc., co-director of the Center for Translational Neuromedicine at the University of Rochester Medical Center in New York, and a leader of the study.
For centuries, scientists and philosophers have wondered why people sleep and how it affects the brain. Only recently have scientists shown that sleep is important for storing memories. In this study, Dr. Nedergaard and her colleagues unexpectedly found that sleep may also be the period when the brain cleanses itself of toxic molecules.
Their results, published in Science, show that during sleep a “plumbing” system, called the glymphatic system, may open, letting fluid flow rapidly through the brain. Dr. Nedergaard’s lab recently discovered that the glymphatic system helps control the flow of cerebrospinal fluid (CSF), a clear liquid surrounding the brain and spinal cord, through the brain.
“It’s as if Dr. Nedergaard and her colleagues have uncovered a network of hidden caves and these exciting results highlight the potential importance of the network in normal brain function,” said Roderick Corriveau, Ph.D., a program director at NINDS.
Initially the researchers studied the system by injecting dye into the CSF of mice and watching it flow through their brains while simultaneously monitoring electrical brain activity. The dye flowed rapidly when the mice were unconscious, either asleep or anesthetized. In contrast, the dye barely flowed when the same mice were awake.
“We were surprised by how little flow there was into the brain when the mice were awake,” said Dr. Nedergaard. “It suggested that the space between brain cells changed greatly between conscious and unconscious states.”
To test this idea, the researchers inserted electrodes into the brain to directly measure the space between brain cells. They found that this space increased by 60 percent when the mice were asleep or anesthetized.
“These are some dramatic changes in extracellular space,” said Charles Nicholson, Ph.D., a professor at New York University’s Langone Medical Center and an expert in measuring the dynamics of brain fluid flow and how it influences nerve cell communication.
Certain brain cells, called glia, control flow through the glymphatic system by shrinking or swelling. Noradrenaline is an arousing hormone that is also known to control cell volume. Treating awake mice with drugs that block noradrenaline induced a sleep-like state and increased brain fluid flow and the space between cells, further supporting the link between the glymphatic system and sleep.
Previous studies suggest that toxic molecules involved in neurodegenerative disorders accumulate in the space between brain cells. In this study, the researchers tested whether the glymphatic system controls this by injecting mice with radiolabeled beta-amyloid, a protein associated with Alzheimer’s disease, and measuring how long it lasted in their brains when they were asleep or awake. Beta-amyloid disappeared faster in mice brains when the mice were asleep, suggesting sleep normally clears toxic molecules from the brain.
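The clearance comparison described above amounts to estimating decay rates in the two states. Assuming simple first-order kinetics, A(t) = A0·e^(−kt), the rate constant k can be recovered from timed measurements with a log-linear least-squares fit. The numbers below are invented for illustration, not the study’s data:

```python
import math

# Sketch with invented numbers: estimate a first-order clearance rate k from
# timed measurements of remaining labeled beta-amyloid, assuming
# A(t) = A0 * exp(-k * t), via a least-squares line fit to log(A).
def clearance_rate(times, amounts):
    logs = [math.log(a) for a in amounts]
    n = len(times)
    mt = sum(times) / n
    ml = sum(logs) / n
    slope = sum((t - mt) * (l - ml) for t, l in zip(times, logs)) / \
            sum((t - mt) ** 2 for t in times)
    return -slope  # k, in 1/hour if times are in hours

hours  = [0, 1, 2, 3, 4]
# hypothetical measurements: "asleep" decays faster than "awake"
asleep = [100 * math.exp(-0.50 * t) for t in hours]
awake  = [100 * math.exp(-0.25 * t) for t in hours]
print(round(clearance_rate(hours, asleep), 3))  # 0.5
print(round(clearance_rate(hours, awake), 3))   # 0.25
```

A larger fitted k during sleep would correspond to the faster disappearance of beta-amyloid the researchers observed in sleeping mice.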
“These results may have broad implications for multiple neurological disorders,” said Jim Koenig, Ph.D., a program director at NINDS. “This means the cells regulating the glymphatic system may be new targets for treating a range of disorders.”
The results may also highlight the importance of sleep.
“We need sleep. It cleans up the brain,” said Dr. Nedergaard.
Johns Hopkins scientists have developed new drugs that — at least in a laboratory dish — appear to halt the brain-destroying impact of a genetic mutation at work in some forms of two incurable diseases, amyotrophic lateral sclerosis (ALS) and dementia.
They made the finding by using neurons they created from stem cells known as induced pluripotent stem cells (iPS cells), which are derived from the skin of people with ALS who have a gene mutation that interferes with the process of making proteins needed for normal neuron function.
“Efforts to treat neurodegenerative diseases have the highest failure rate for all clinical trials,” says Jeffrey D. Rothstein, M.D., Ph.D., a professor of neurology and neuroscience at the Johns Hopkins University School of Medicine and leader of the research described online in the journal Neuron. “But with this iPS technology, we think we can target an exact subset of patients with a specific mutation and succeed. It’s individualized brain therapy, just the sort of thing that has been done in cancer, but not yet in neurology.”
Scientists in 2011 discovered that more than 40 percent of patients with an inherited form of ALS, and at least 10 percent of patients with the non-inherited sporadic form, have a mutation in the C9ORF72 gene. The mutation also occurs frequently in people with frontotemporal dementia, the second-most-common form of dementia after Alzheimer’s disease. The same research appeared to explain why some people develop both ALS and the dementia simultaneously, and why, in some families, one sibling might develop ALS while another develops dementia.
In the C9ORF72 gene of a normal person, there are up to 30 repeats of a series of six DNA letters (GGGGCC); but in people with the genetic glitch, the string can be repeated thousands of times. Rothstein, who is also director of the Johns Hopkins Brain Science Institute and the Robert Packard Center for ALS Research, used his large bank of iPS cell lines from ALS patients to identify several with the C9ORF72 mutation, then experimented with them to figure out the mechanism by which the “repeats” were causing the brain cell death characteristic of ALS.
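The scale of the expansion is easy to see in code: measuring a repeat run is just counting how many consecutive copies of the six-letter motif occur. The sequences below are toy examples, not real C9ORF72 alleles:

```python
def longest_repeat_run(sequence: str, motif: str = "GGGGCC") -> int:
    """Return the longest run of consecutive copies of `motif` in a DNA sequence."""
    best = 0
    i = 0
    while i < len(sequence):
        run = 0
        j = i
        while sequence.startswith(motif, j):  # count back-to-back motif copies
            run += 1
            j += len(motif)
        best = max(best, run)
        i += 1 if run == 0 else j - i  # skip past any run we just measured
    return best

# Toy alleles: a healthy-range allele with 10 copies,
# and an expanded allele with 1,000 copies of GGGGCC.
healthy = "AT" + "GGGGCC" * 10 + "TA"
expanded = "AT" + "GGGGCC" * 1000 + "TA"

print(longest_repeat_run(healthy))   # 10
print(longest_repeat_run(expanded))  # 1000
```

In practice, repeat expansions this long are hard to read out with standard sequencing, which is one reason the mutation went undetected for so long; the sketch only illustrates the difference in scale between normal and mutant alleles.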
In a series of experiments, Rothstein says, they discovered that in iPS neurons with the mutation, the process of using the DNA blueprint to make RNA and then produce protein is disrupted. Normally, RNA-binding proteins facilitate the production of RNA. In the iPS neurons with the C9ORF72 mutation, however, the RNA made from the repeating GGGGCC strings was bunching up, gumming up the works by acting like flypaper and grabbing hold of extremely important RNA-binding proteins, including one known as ADARB2, needed for the proper production of many other cellular RNAs. Overall, the C9ORF72 mutation caused the cells to produce abnormal amounts of many otherwise normal RNAs and made them very sensitive to stress.
To counter this effect, the researchers developed a number of chemical compounds targeting the problem. These compounds behaved like a coating that matches up to the GGGGCC repeats like Velcro, keeping the flypaper-like repeats from trapping the RNA-binding proteins and allowing those proteins to do their jobs properly.
Rothstein says Isis Pharmaceuticals helped develop many of the studied compounds and, by working closely with the Johns Hopkins teams, could begin testing them in human ALS patients with the C9ORF72 mutation within the next several years. In collaboration with the National Institutes of Health, plans are already underway to identify a group of patients with the C9ORF72 mutation for future research.
Rita Sattler, Ph.D., an assistant professor of neurology at Johns Hopkins and the co-investigator of the study, says that without iPS technology, the team would have had a difficult time studying the C9ORF72 mutation. “Typically, researchers engineer rodents with mutations that mimic the human glitches they are trying to research and then study them,” she says. “But the nature of the multiple repeats made that nearly impossible.” The iPS cells did the job as well as or even better than an animal model would, Sattler says, in part because the experiments could be done using human cells.
“An iPS cell line can be used effectively and rapidly to understand disease mechanisms and as a tool for therapy development,” Rothstein adds. “Now we need to see if our findings translate into a valuable treatment for humans.”
The researchers also analyzed brain tissue from people with the C9ORF72 mutation who died of ALS. They saw evidence of this bunching up and found that the many genes that were altered as a consequence of this mutation in the iPS cells were also abnormal in the brain tissue, thereby showing that iPS cells can be a faithful tool to study the human disease and discover effective therapies.
In the future, the scientists will look at cerebral spinal fluid from ALS patients with the C9ORF72 mutation, searching for proteins that were found both in the fluid and the iPS cells. These may pave the way to develop markers that can be studied by clinicians to see if the treatment is working once the drug therapy is moved to clinical trials.
ALS, sometimes known as Lou Gehrig’s disease, named for the Yankee baseball great who died from it, destroys nerve cells in the brain and spinal cord that control voluntary muscle movement. The nerve cells waste away or die, and can no longer send messages to muscles, eventually leading to muscle weakening, twitching and an inability to move the arms, legs and body. Onset is typically around age 50 and death often occurs within three to five years of diagnosis. Some 10 percent of cases are hereditary. There is no cure for ALS and there is only one FDA-approved drug treatment, which has just a small effect in slowing disease progression and increasing survival, Rothstein notes.
Research in mouse whiskers reveals signal pathway from touch neuron to brain

Human fingertips have several types of sensory neurons that relay touch signals to the central nervous system. Scientists have long believed these neurons follow a linear, “labeled-lines” path to the brain.
But new research on mouse whiskers from Duke University reveals a surprise — at the fine scale, the sensory system’s wiring diagram doesn’t have a set pattern. And it’s probably the case that no two people’s touch sensory systems are wired exactly the same at the detailed level, according to Fan Wang, Ph.D., an associate professor of neurobiology in the Duke Medical School.
The results, which appear online in Cell Reports, highlight a “one-to-many, many-to-one” nerve connectivity strategy. Single neurons send signals to multiple potential secondary neurons, just as signals from many neurons can converge onto a secondary neuron. Many such connections are likely formed by chance, Wang said. This connectivity scheme allows the touch system to have many possible combinations to encode a large repertoire of textures and forms.
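The “one-to-many, many-to-one” scheme can be sketched as a toy wiring diagram. This is an illustrative model, not the Duke data: each simulated sensor connects to a random subset of relay neurons, so each relay ends up receiving converging input from several sensors, and the chance-driven wiring differs from run to run (here fixed with a seed):

```python
import random

random.seed(0)  # fix the "chance" wiring so the toy example is reproducible

NUM_SENSORS = 8  # primary touch neurons (toy scale)
NUM_RELAYS = 5   # secondary relay neurons in the first brain relay station

# One-to-many: each sensor projects to a random subset of relays.
connections = {
    sensor: sorted(random.sample(range(NUM_RELAYS), k=random.randint(2, 4)))
    for sensor in range(NUM_SENSORS)
}

# Many-to-one: invert the map to see the convergence onto each relay.
inputs_to_relay = {relay: [] for relay in range(NUM_RELAYS)}
for sensor, relays in connections.items():
    for relay in relays:
        inputs_to_relay[relay].append(sensor)

for relay, sensors in inputs_to_relay.items():
    print(f"relay {relay} receives from sensors {sensors}")
```

Because the subsets are drawn at random, two runs with different seeds produce different wiring diagrams while preserving the same divergence/convergence statistics, which is the sense in which no two animals’ fine-scale wiring would match.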
"We take our sense of touch for granted," Wang said. "When you speak, you are not aware of the constant tactile feedback from your tongue and teeth. Without such feedback, you won’t be able to say the words correctly. When you write with a pen, you’re mostly unaware of the sensors telling you how to move it."
It’s not feasible to visualize the touch pathways in the human brain at high resolution. So Wang and her collaborators from the University of Tsukuba in Japan and the Friedrich Miescher Institute for Biomedical Research in Switzerland used the whiskers of laboratory mice to map how distinct sensory neurons, presumably detecting different mechanical stimuli, are connected to signal the brain. When sensory neurons are activated, they send the signal along an axon, a long, slender nerve fiber that conducts electrical impulses to the brain. The researchers traced signals running the long path from the mouse’s whiskers to the brain.
Wang’s group used a combination of genetic engineering and fluorescent tags delivered by viruses to color-code four different kinds of neurons and map their connections.
Earlier work by Wang and others had found that all of the 100 to 200 sensors associated with a single whisker project their axons to a large structure representing that whisker in the brain. Each whisker has its own neural representation structure.
"People have thought that within the large whisker-representing structure, there will be finer-scale, labeled lines," Wang said. "In other words, different touch sensors would send information through separate parallel pathways, into that large structure. But surprisingly, we did not find such organized pathways. Instead, we found a completely unorganized mosaic pattern of connections within the large structure. Information from different sensors is intermixed already at the first relay station inside the brain."
Wang said the next step will be to stimulate the labeled circuits in different ways to see how impulses travel the network.
"We want to figure out the exact functions and signals transmitted by different sensors during natural tactile behaviors and determine their exact roles on the perception of textures," she said.