The word “chaperone” refers to an adult who keeps teenagers from acting up at a dance or overnight trip. It also describes a type of protein that can guard the brain against its own troublemakers: misfolded proteins that are involved in several neurodegenerative diseases.

Researchers at Emory University School of Medicine have demonstrated that as animals age, their brains are more vulnerable to misfolded proteins, partly because of a decline in chaperone activity.
The researchers were studying a model of spinocerebellar ataxia, but the findings have implications for understanding other diseases, such as Alzheimer’s, Parkinson’s and Huntington’s. They also identified targets for potential therapies: bolstering levels of either a particular chaperone or a growth factor in brain cells can protect against the toxic effects of misfolded proteins.
The results were published this week in the journal Neuron.
Scientists led by Shihua Li, MD, and Xiao-Jiang Li, MD, PhD, devised a system in which production of a misfolding-prone protein that causes a form of spinocerebellar ataxia can be triggered artificially in mice at various ages. Both Lis are professors of human genetics at Emory University School of Medicine. The first author of the paper is BCDB graduate student Su Yang.
Spinocerebellar ataxia is an inherited neurodegenerative disease in which patients develop gait problems and a loss of coordination in mid-life, because of atrophy of the cerebellum. There are several types, each caused by a mutation in a different gene.
Most of the mutations that cause spinocerebellar ataxia involve an expansion of a “polyglutamine repeat” in a protein. Having the same protein building block (the amino acid glutamine) repeated dozens of times alters the protein’s function and makes it more likely to misfold and clump together. The misfolded proteins are toxic and interfere with the normal forms of the same protein.
Huntington’s disease is caused by a similar polyglutamine repeat. Misfolded proteins also play roles in Alzheimer’s and Parkinson’s, although their production is not driven by an inherited polyglutamine repeat in those diseases.
Li’s team was trying to distinguish between two possibilities. One was that the duration of mutant protein accumulation determines disease severity; aging might simply allow more misfolded proteins to accumulate and become toxic over time. The other was that aging itself makes neurons intrinsically more vulnerable to the misfolded protein.
Instead, the scientists observed that older animals develop disease more quickly after mutant protein production is triggered. The mutant protein accumulates more quickly in 9- and 14-month-old mice than in 3-month-old mice, suggesting that aged neurons are more vulnerable to the effects of the misfolded protein.
Chaperones are proteins whose job is to “prevent improper liaisons” between other proteins; they prevent the sticky regions of proteins from grabbing something they’re not supposed to. Li’s team identified a particular chaperone called Hsc70 whose activity declines with age in the brain, while others’ activity does not.
To confirm Hsc70’s importance, the researchers showed that boosting cells’ levels of Hsc70 can bolster their ability to cope with misfolded proteins. Injecting the cerebellum of mice with a virus that forces cells to make more Hsc70 can slow degeneration. The researchers found that the mutant protein interferes with production of a growth factor called MANF (mesencephalic astrocyte-derived neurotrophic factor) in the cerebellum and that Hsc70 can prevent this interference. Injection of a virus that forces cells to make more MANF can also slow degeneration.
Potentially, small molecules that increase Hsc70 or MANF levels could be used for treating spinocerebellar ataxia, says Xiao-Jiang Li.
Researchers led by Marta Barrachina of the Institute of Neuropathology at the Bellvitge Biomedical Research Institute (IDIBELL) have identified a new subgroup of schizophrenia patients characterized by motor disorders.

The study, which was conducted in collaboration with the research team of Mairena Martin at the University of Castilla-La Mancha in Ciudad Real and clinical researchers at the Health Park Sant Joan de Deu in Sant Boi de Llobregat, has been published in the online edition of the Journal of Psychiatric Research and was funded by the 2008 edition of the TV3 Marathon.
Schizophrenia is a serious mental illness. From a clinical point of view, it is considered a grouping of several disorders that are not well defined or characterized by biomarkers.
Barrachina’s team studies the adenosine A2A receptor, which is highly expressed in the basal ganglia of the central nervous system and is involved in the control of movement. Furthermore, this protein inhibits the activity of the dopamine D2 receptor, which is hyperactivated in schizophrenia patients and is the target of typical antipsychotics.
"We studied the post-mortem brains of patients," explains Barrachina, "and we found that 50% had very low levels of the adenosine A2A receptor. Interestingly, when comparing these data with the clinical information provided by the clinical investigators of the study, we noted that these patients had motor disorders. In addition, we identified an epigenetic mechanism associated with the decreased receptor expression."
According to the researcher, this finding makes it possible to “identify a new subset of schizophrenia patients with motor disorders.”
Proposal for combined therapy
This study opens the door to a clinical trial, based on radioimaging, which would detect the levels of this protein to identify these patients and confirm the results obtained in the post-mortem brains. Barrachina’s team proposes applying a specific combination therapy of antipsychotics and adenosine A2A receptor agonists. “Thus, the activity of the adenosine A2A receptor will be favoured, reducing the dose of antipsychotics.”
Assessing structural and functional changes in the brain may predict future memory performance in healthy children and adolescents, according to a study appearing January 29 in The Journal of Neuroscience. The findings shed new light on cognitive development and suggest MRI and other tools may one day help identify children at risk for developmental challenges earlier than current testing methods allow.

Working memory capacity — the ability to hold onto information for a short period of time — is one of the strongest predictors of future achievements in math and reading. While previous studies showed that MRI could predict current working memory performance in children, scientists were unsure if MRI could predict their future cognitive capacity.
In the current study, Henrik Ullman, Rita Almeida, PhD, and Torkel Klingberg, MD, PhD, at the Karolinska Institutet in Sweden evaluated the cognitive abilities of a group of healthy children and adolescents and measured each child’s brain structure and function using MRI. Based on the MRI data collected during this initial testing, the researchers found they could predict the children’s working memory performance two years later, a prediction that was not possible using the cognitive tests.
“Our results suggest that future cognitive development can be predicted from anatomical and functional information offered by MRI above and beyond that currently achieved by cognitive tests,” said Ullman, the lead author of the study. “This has wide implications for understanding the neural mechanisms of cognitive development.”
The scientists recruited 62 children and adolescents between the ages of 6 and 20 years to the lab, where they completed working memory and reasoning tests. They also received multiple MRI scans to assess brain structure and changes in brain activity as they performed a working memory task. Two years later, the group returned to the lab to perform the same cognitive tests.
Using a statistical model, the researchers evaluated whether MRI data obtained during the initial tests correlated with the children’s working memory performance during the follow-up visit. They found that while brain activity in the frontal cortex correlated with children’s working memory at the time of the initial tests, activity in the basal ganglia and thalamus predicted how well children scored on the working memory tests two years later.
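The kind of analysis described above, using baseline brain measures to predict follow-up test scores, can be illustrated with a toy sketch. The data and the plain least-squares model below are hypothetical stand-ins; the study's actual MRI features and statistical model are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical baseline "MRI features" (e.g., regional activity measures)
# for 62 children, matching the study's sample size.
n, p = 62, 4
X = rng.normal(size=(n, p))
# Hypothetical follow-up working-memory scores, partly driven by feature 0.
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=n)

# Leave-one-out cross-validation: predict each child's follow-up score
# from a model fit on all the other children.
preds = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    Xi = np.column_stack([np.ones(mask.sum()), X[mask]])  # add intercept
    beta, *_ = np.linalg.lstsq(Xi, y[mask], rcond=None)
    preds[i] = np.concatenate([[1.0], X[i]]) @ beta

# Correlation between predicted and observed follow-up scores.
r = np.corrcoef(preds, y)[0, 1]
print(f"LOOCV prediction-observation correlation: r = {r:.2f}")
```

The cross-validation step matters: without it, a model evaluated on the same children it was fit to would overstate how well baseline measures predict future performance.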
“This study is another contribution to the growing body of neuroimaging research that yields insights into unraveling present and predicting future cognitive capacity in development,” said Judy Illes, PhD, a neuroethicist at the University of British Columbia. “However, the appreciation of this important new knowledge is simpler than its application to everyday life. How a child performs today and tomorrow relies on multiple positive and negative life events that cannot be assessed by today’s technology alone.”

Deprivation of vision during critical periods of childhood development has long been thought to result in irreversible vision loss. Now, researchers from the Schepens Eye Research Institute/Massachusetts Eye and Ear, Harvard Medical School (HMS) and Massachusetts Institute of Technology (MIT) have challenged that theory by studying a unique population of pediatric patients who were blind during these critical periods before removal of bilateral cataracts. The researchers found improvement after sight onset in contrast sensitivity tests, which measure basic visual function and have well-understood neural underpinnings. Their results show that the human visual system can retain plasticity beyond critical periods, even after early and extended blindness. Their findings were recently published in the Proceedings of the National Academy of Sciences (PNAS) Early Edition.
A chemical that’s found in fruits and vegetables from strawberries to cucumbers appears to stop memory loss that accompanies Alzheimer’s disease in mice, scientists at the Salk Institute for Biological Studies have discovered. In experiments on mice that normally develop Alzheimer’s symptoms less than a year after birth, a daily dose of the compound—a flavonol called fisetin—prevented the progressive memory and learning impairments. The drug, however, did not alter the formation of amyloid plaques in the brain, accumulations of proteins which are commonly blamed for Alzheimer’s disease. The new finding suggests a way to treat Alzheimer’s symptoms independently of targeting amyloid plaques.

"We had already shown that in normal animals, fisetin can improve memory," says Pamela Maher, a senior staff scientist in Salk’s Cellular Neurobiology Laboratory who led the new study. "What we showed here is that it also can have an effect on animals prone to Alzheimer’s."
More than a decade ago, Maher discovered that fisetin helps protect neurons in the brain from the effects of aging. She and her colleagues have since—in both isolated cell cultures and mouse studies—probed how the compound has both antioxidant and anti-inflammatory effects on cells in the brain. Most recently, they found that fisetin turns on a cellular pathway known to be involved in memory.
"What we realized is that fisetin has a number of properties that we thought might be beneficial when it comes to Alzheimer’s," says Maher.
So Maher—who works with Dave Schubert, the head of the Cellular Neurobiology Lab—turned to a strain of mice that have mutations in two genes linked to Alzheimer’s disease. The researchers took a subset of these mice and, when they were only three months old, began adding fisetin to their food. As the mice aged, the researchers tested their memory and learning skills with water mazes. By nine months of age, mice that hadn’t received fisetin began performing more poorly in the mazes. Mice that had gotten a daily dose of the compound, however, performed as well as normal mice, at both nine months and a year old.
"Even as the disease would have been progressing, the fisetin was able to continue preventing symptoms," Maher says.
In collaboration with scientists at the University of California, San Diego, Maher’s team next tested the levels of different molecules in the brains of mice that had received doses of fisetin and those that hadn’t. In mice with Alzheimer’s symptoms, they found, pathways involved in cellular inflammation were turned on. In the animals that had taken fisetin, those pathways were dampened and anti-inflammatory molecules were present instead. One protein in particular—known as p35—was blocked from being cleaved into a shorter version when fisetin was taken. The shortened version of p35 is known to turn on and off many other molecular pathways. The results were published December 17, 2013, in the journal Aging Cell.
Studies on isolated tissue had hinted that fisetin might also decrease the number of amyloid plaques in Alzheimer’s affected brains. However, that observation didn’t hold up in the mice studies. “Fisetin didn’t affect the plaques,” says Maher. “It seems to act on other pathways that haven’t been seriously investigated in the past as therapeutic targets.”
Next, Maher’s team hopes to understand more of the molecular details on how fisetin affects memory, including whether there are targets other than p35.
"It may be that compounds like this that have more than one target are most effective at treating Alzheimer’s disease," says Maher, "because it’s a complex disease where there are a lot of things going wrong."
They also aim to develop new studies to look at how the timing of fisetin doses affects its influence on Alzheimer’s.
"The model that we used here was a preventive model," explains Maher. "We started the mice on the drugs before they had any memory loss. But obviously human patients don’t go to the doctor until they are already having memory problems." So the next step in moving the discovery toward the clinic, she says, is to test whether fisetin can reverse declines in memory once they have already appeared.
It has long been believed that a person with a concussion should stay awake or not sleep for more than a few hours at a time.

But there appears to be no medical evidence to support that idea, according to a study of the relationship between traumatic brain injury (TBI) and sleepiness conducted by scientists at Barrow Neurological Institute at Phoenix Children’s Hospital and the University of Arizona College of Medicine – Phoenix.
"This translational research study lays the foundation for understanding the immediate impact of brain injury on a person’s physiology. In this case, substantial post-traumatic sleep occurred regardless of injury timing or severity," said Jonathan Lifshitz, director of the Translational Neurotrauma Program at Barrow Neurological Institute at Phoenix Children’s Hospital and an associate professor at the UA College of Medicine – Phoenix. "These studies explore sleep as an immediate response to TBI."
Traumatic brain injury is a major cause of death and disability throughout the world, with little pharmacological treatment available for individuals who suffer lifelong problems associated with TBI. Clinical studies have provided evidence that brain injury contributes to chronic sleep disturbances as well as excessive daytime sleepiness, and clinical observations have reported excessive sleepiness immediately following traumatic brain injury. However, there is a lack of experimental evidence to support or refute the benefit of sleep following a brain injury.
"We know that some individuals become excessively sleepy after a traumatic brain injury and some cannot sleep at all. It is not well understood why this occurs, as the mechanisms and locations of injury are not always consistent across the clinical phenotypes of normal sleep, hypersomnia and insomnia," said Matthew Troester, a neurologist and sleep specialist at Phoenix Children’s Hospital and a clinical assistant professor at the UA College of Medicine – Phoenix.
Lifshitz and his associates are breaking new ground with descriptions of sleep in the acute state – immediately after injury – where little is known clinically, Troester added.
"They demonstrate that the subjects slept immediately and similarly post-injury no matter the severity of the injury or time of day the injury occurred. This tells us that the brain is reacting to the injury in a very specific manner – not something we always see clinically – and, ultimately, this may help us better understand what the role of sleep is in brain injury" such as being restorative, protective or merely a consequence of the injury, he said. "It is an exciting beginning."
This initial study is phase one of the Post-Traumatic Sleep Study. Phase two is in the works. The research will look to provide medical evidence for sleeping after a concussion.
When you learn how to play the piano, first you have to learn notes, scales and chords and only then will you be able to play a piece of music. The same principle applies to speech and to reading, where instead of scales you have to learn the alphabet and the rules of grammar.

But how do separate small elements come together to become a unique and meaningful sequence?
It has been shown that a specific area of the brain, the basal ganglia, is implicated in a mechanism called chunking, which allows the brain to efficiently organise memories and actions. Until now little was known about how this mechanism is implemented in the brain.
In an article published today (Jan 26th) in Nature Neuroscience, neuroscientist Rui Costa, and his postdoctoral fellow, Fatuel Tecuapetla, both working at the Champalimaud Neuroscience Programme (CNP) in Lisbon, Portugal, and Xin Jin, an investigator at the Salk Institute, in San Diego, USA, reveal that neurons in the basal ganglia can signal the concatenation of individual elements into a behavioural sequence.
"We trained mice to perform gradually faster sequences of lever presses, similar to a person who is learning to play a piano piece at an increasingly fast pace," explains Rui Costa. "By recording the neural activity in the basal ganglia during this task, we found neurons that seem to treat a whole sequence of actions as a single behaviour."
The basal ganglia encompass two major pathways, the direct and the indirect pathways. The authors found that although activity in these pathways was similar during the initiation of movement, it was rather different during the execution of a behavioural sequence.
"The basal ganglia and these pathways are absolutely crucial for the execution of actions. These circuits are affected in neural disorders, such as Parkinson’s or Huntington’s disease, in which learning of action sequences is impaired," adds Xin Jin.
The work published in this article “is just the beginning of the story”, says Rui Costa. The Neurobiology of Action laboratory at the CNP, a group of around 20 researchers headed by Rui Costa, will continue to study the functional organisation of the basal ganglia during learning and execution of action sequences. Earlier this year, Rui Costa was awarded a 2 million euro Consolidator Grant by the European Research Council to study the mechanisms of chunking.
From the world of nanotechnology we’ve gotten electronic skin, or e-skin, and electronic eye implants or e-eyes. Now we’re on the verge of electronic whiskers. Researchers with Berkeley Lab and the University of California (UC) Berkeley have created tactile sensors from composite films of carbon nanotubes and silver nanoparticles similar to the highly sensitive whiskers of cats and rats. These new e-whiskers respond to pressure as slight as a single Pascal, about the pressure exerted on a table surface by a dollar bill. Among their many potential applications is giving robots new abilities to “see” and “feel” their surrounding environment.

“Whiskers are hair-like tactile sensors used by certain mammals and insects to monitor wind and navigate around obstacles in tight spaces,” says the leader of this research Ali Javey, a faculty scientist in Berkeley Lab’s Materials Sciences Division and a UC Berkeley professor of electrical engineering and computer science. “Our electronic whiskers consist of high-aspect-ratio elastic fibers coated with conductive composite films of nanotubes and nanoparticles. In tests, these whiskers were 10 times more sensitive to pressure than all previously reported capacitive or resistive pressure sensors.”
Javey and his research group have been leaders in the development of e-skin and other flexible electronic devices that can interface with the environment. In this latest effort, they used a carbon nanotube paste to form an electrically conductive network matrix with excellent bendability. To this carbon nanotube matrix they loaded a thin film of silver nanoparticles that endowed the matrix with high sensitivity to mechanical strain.
“The strain sensitivity and electrical resistivity of our composite film is readily tuned by changing the composition ratio of the carbon nanotubes and the silver nanoparticles,” Javey says. “The composite can then be painted or printed onto high-aspect-ratio elastic fibers to form e-whiskers that can be integrated with different user-interactive systems.”
Javey notes that the use of elastic fibers with a small spring constant as the structural component of the whiskers provides large deflection and therefore high strain in response to the smallest applied pressures. As proof-of-concept, he and his research group successfully used their e-whiskers to demonstrate highly accurate 2D and 3D mapping of wind flow. In the future, e-whiskers could be used to mediate tactile sensing for the spatial mapping of nearby objects, and could also lead to wearable sensors for measuring heartbeat and pulse rate.
“Our e-whiskers represent a new type of highly responsive tactile sensor networks for real time monitoring of environmental effects,” Javey says. “The ease of fabrication, light weight and excellent performance of our e-whiskers should have a wide range of applications for advanced robotics, human-machine user interfaces, and biological applications.”
A paper describing this research has been published in the Proceedings of the National Academy of Sciences. The paper is titled “Highly sensitive electronic whiskers based on patterned carbon nanotube and silver nanoparticle composite films.” Javey is the corresponding author. Co-authors are Kuniharu Takei, Zhibin Yu, Maxwell Zheng, Hiroki Ota and Toshitake Takahashi.
Increased inflammation following an infection impairs the brain’s ability to form spatial memories – according to new research. The impairment results from a decrease in glucose metabolism in the brain’s memory centre, disrupting the neural circuits involved in learning and memory.
Inflammation has long been linked to disorders of memory like Alzheimer’s disease. Severe infections can also impair cognitive function in healthy elderly individuals. The new findings published in the journal Biological Psychiatry help explain why inflammation impairs memory and could spur the development of new drugs targeting the immune system to treat dementia.
In the first trial to image how inflammation impairs human memory, the team at Brighton and Sussex Medical School scanned 20 participants before and after either a benign saline injection or a typhoid vaccination, which was used to induce inflammation. Positron emission tomography (PET) was used to measure the effects of inflammation on the brain’s consumption of glucose, and after each scan participants’ spatial memory was tested with a series of tasks in a virtual reality environment.
A reduction in glucose metabolism within the brain’s memory centre, known as the Medial Temporal Lobe (MTL), was seen following inflammation. Participants also performed less well in spatial memory tasks, an effect that appeared to be directly mediated by the change in MTL metabolism.
"We have known for some time that severe infections can lead to long-term cognitive impairment in the elderly. Infections are also a common trigger for acute decline in function in patients with dementia and Alzheimer’s disease," explains Dr Neil Harrison, a Wellcome Trust Intermediate Clinical Fellow at BSMS who led the study. "This study suggests that catching a cold or the flu, which leads to inflammation in the brain, could impair our memory."
Infections are unlikely to cause long-term detrimental impact in the young and healthy but the findings are of great significance in the elderly. The team now plan to investigate the role of inflammation in dementia, including insight into how acute infections such as influenza influence the rate of progression and decline.
"Our findings suggest that the brain’s memory circuits are particularly sensitive to inflammation and help clarify the association between inflammation and decline in dementia," says Dr Harrison. "If we can control levels of inflammation, we may be able to reduce the rate of decline in patients’ cognition."
Infants born prematurely are at elevated risk for cognitive, motor, and behavioral deficits — the severity of which was, until recently, almost impossible to accurately predict in the neonatal period with conventional brain imaging technology. But physicians may now be able to identify the premature infants most at risk for deficits as well as the type of deficit, enabling them to quickly initiate early neuroprotective therapies, by using highly reliable 3-D MRI imaging techniques developed by clinician scientists at The Research Institute at Nationwide Children’s Hospital. The imaging technique also facilitates early and repeatable assessments of these therapies to help clinicians and researchers determine whether neuroprotective treatments are effective in a matter of weeks, instead of the two to five years previously required.
The researchers — experts in brain imaging and anatomy — developed a protocol for using the special imaging technique to study the development of 10 brain tracts in these tiny patients, work published online January 24 in PLOS One. Colorful 3-D images of each tract revealed the connections of the segments to different parts of the brain or the spinal cord. Each of the 10 tracts is important for certain functions and abilities, such as language, movement or vision.
“Developing a reliable and reproducible methodology for studying the premature brain was crucial in order for us to get to the next step: assessing neuroprotective therapies,” said Nehal A. Parikh, DO, principal investigator in the Center for Perinatal Research at Nationwide Children’s and senior author on the paper. “Now that we have this protocol, we can improve the standard of care and evaluate efforts to promote brain health within 8 to 12 weeks of beginning the interventions. That way, we can quickly see what really works.”
The study tested a detailed approach to measuring brain structure in extremely low birth weight infants at term-equivalent age by comparing their diffusion tensor tractography (DTT) scans to those of healthy, full-term newborns. DTT is a special MRI technique that produces 3-D images and can detect more subtle features of brain structure and injury than earlier forms of the technology.
The research team is the first to confirm differences in the fibrous structure of the 10 tracts between healthy, full-term infant brains and those of premature babies. Although the imaging technology is regularly used in adults, the tiny head size and lack of benchmark measurements in healthy infants meant that the use of DTT in premature infants was previously uncharted territory. With the detailed technique developed by Dr. Parikh’s team, the images can now be reproducibly processed and reliably interpreted.
“This protocol opens the field to far greater use of the methodology for targeting and assessing therapies in these infants,” said Dr. Parikh, who also is an associate professor of pediatrics at The Ohio State University College of Medicine. “We already have studies underway using our DTT segmentation methodology to measure the effectiveness of early neuroprotective interventions, such as the use of breast milk or skin-to-skin contact while premature babies are in intensive care.”
As imaging technology continues to be refined, the goal of targeted therapies based on the specific region of the brain with a delay or injury will become reality, Dr. Parikh predicted. For example, if an infant’s DTT scan indicates an under-developed corticospinal tract — the region of the brain controlling motor ability — physicians could immediately begin proactive physical therapies with the baby instead of waiting until the delay manifests itself. A repeat DTT scan a few months after beginning the therapy could then detect whether the therapy is effectively improving the structure of that brain tract.
“Because cognitive and behavioral deficits cannot be diagnosed until school age, there is an urgent need for robust early prognostic biomarkers,” said Dr. Parikh. “Our work is an important step in this direction and will facilitate early testing of neuroprotective interventions.”
Researchers from Massachusetts Eye and Ear, Harvard Medical School, Massachusetts Institute of Technology and Massachusetts General Hospital have demonstrated, for the first time, that aspirin intake correlates with halted growth of vestibular schwannomas (also known as acoustic neuromas), a sometimes lethal intracranial tumor that typically causes hearing loss and tinnitus.

Motivated by experiments in the Molecular Neurotology Laboratory at Mass. Eye and Ear involving human tumor specimens, the researchers performed a retrospective analysis of over 600 people diagnosed with vestibular schwannoma at Mass. Eye and Ear. Their research suggests the potential therapeutic role of aspirin in inhibiting tumor growth and motivates a clinical prospective study to assess efficacy of this well-tolerated anti-inflammatory medication in preventing growth of these intracranial tumors.
“Currently, there are no FDA-approved drug therapies to treat these tumors, which are the most common tumors of the cerebellopontine angle and the fourth most common intracranial tumors,” explains Konstantina Stankovic, M.D., Ph.D., who led the study. “Current options for management of growing vestibular schwannomas include surgery (via craniotomy) or radiation therapy, both of which are associated with potentially serious complications.”
The findings, which are described in the February issue of the journal Otology & Neurotology, were based on a retrospective series of 689 people, 347 of whom (50.3%) were followed with multiple magnetic resonance imaging (MRI) scans. The main outcome measures were patient use of aspirin and the rate of vestibular schwannoma growth, measured by changes in the largest tumor dimension on serial MRIs. A significant inverse association was found between aspirin use and vestibular schwannoma growth (odds ratio: 0.50, 95 percent confidence interval: 0.29–0.85), which was not confounded by age or gender.
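For readers unfamiliar with the statistic, an odds ratio like the one reported can be computed from a 2×2 table of aspirin use versus tumor growth. The counts below are hypothetical, chosen only to yield an odds ratio of 0.50 for illustration; they are not the study's actual data.

```python
import math

# Hypothetical 2x2 table (NOT the study's real counts):
# rows: aspirin users / non-users; columns: tumor grew / did not grow.
a, b = 10, 40    # aspirin users: grew, did not grow
c, d = 50, 100   # non-users:     grew, did not grow

# Odds ratio: odds of growth among users divided by odds among non-users.
odds_ratio = (a / b) / (c / d)  # (10/40) / (50/100) = 0.50

# Approximate 95% confidence interval on the log-odds scale (Woolf's method).
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se)

print(f"OR = {odds_ratio:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```

An odds ratio below 1 with a confidence interval that excludes 1, as in the study's result, indicates that tumor growth was significantly less likely among aspirin users in this sample.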
“Our results suggest a potential therapeutic role of aspirin in inhibiting vestibular schwannoma growth,” said Dr. Stankovic, who is an otologic surgeon and researcher at Mass. Eye and Ear, Assistant Professor of Otology and Laryngology, Harvard Medical School (HMS), and member of the faculty of Harvard’s Program in Speech and Hearing Bioscience and Technology.
Neuroscientists from the University of Leicester, in collaboration with the Department of Neurosurgery at the University of California, Los Angeles (UCLA), are to reveal details of how the brain determines the timing at which neurons in specific areas fire to create new memories.
This research exploits the unique opportunity of recording multiple single neurons in patients with medication-refractory epilepsy who are implanted with intracranial electrodes for clinical reasons.

The study, which is to be published in the academic journal Current Biology, is the result of collaboration between Professor Rodrigo Quian Quiroga and Dr Hernan Rey at the Centre for Systems Neuroscience at the University of Leicester and Professor Itzhak Fried at UCLA.
The work follows up on the group’s research into what was dubbed the ‘Jennifer Aniston neurons’ – neurons in the hippocampus and its surrounding areas within the brain that specifically fire in an ‘abstract’ manner when we see or hear a certain concept - such as a person, an animal or a landscape - that we recognise.
Professor Quian Quiroga said: “The firing of these neurons comes relatively late after the moment of seeing the picture, or hearing the person’s name, but is still very precise. These neurons also fire only when the pictures are consciously recognised and remain silent when they are not.
“Our research shows that there is a specific brain response that marks the timing of the firing of these neurons. This response shortly precedes the neuron’s firing and is only present for the consciously recognised pictures - being absent if the pictures were not recognised.
“This brain response thus reflects an activation that provides a temporal window for processing consciously perceived stimuli in the hippocampus and surrounding cortex. Given the proposed role of these neurons in memory formation, we argue that the brain response we found is a gateway for processing consciously perceived stimuli to form or recall memories.”
Dr Hernan Rey, first author of the study, added: “This time-keeping may indeed be critical for synchronizing and combining multisensory information involving different processing times. This, in turn, helps in creating a unified conceptual representation that can be used for memory functions.”
Professor Quian Quiroga’s work is specifically concerned with examining how information about the external world - what we see, hear and touch - is represented by neurons in the brain and how this leads to the creation of our own internal representations and memories.
For example, we can easily recognize a person in a fraction of a second, even when seen from different angles, with different sizes, colours, contrasts and under strikingly different conditions. But how neurons in the brain are capable of creating such an ‘abstract’ representation, disregarding basic visual details, is only beginning to be understood.
Setting the stage for possible advances in pain treatment, researchers at The Johns Hopkins University and the University of Maryland report they have pinpointed two molecules involved in perpetuating chronic pain in mice. The molecules, they say, also appear to have a role in the phenomenon that causes uninjured areas of the body to be more sensitive to pain when an area nearby has been hurt. A summary of the research will be published on Jan. 23 in the journal Neuron.

Image caption: Nerves in mouse skin that are actively responding to the painful stimulus capsaicin have been genetically engineered to glow green. Hairs appear in yellow. Credit: David Rini
"With the identification of these molecules, we have some additional targets that we can try to block to decrease chronic pain," says Xinzhong Dong, Ph.D., associate professor of neuroscience at the Johns Hopkins University School of Medicine and an early career scientist at Howard Hughes Medical Institute. "We found that persistent pain doesn’t always originate in the brain, as some had believed, which is important information for designing less addictive drugs to fight it."
Chronic pain that persists for weeks, months or years after an underlying injury or condition is resolved afflicts an estimated 20 to 25 percent of the population worldwide and about 116 million people in the U.S., costing Americans a total of $600 billion in medical interventions and lost productivity. It can be caused by everything from nerve injuries and osteoarthritis to cancer and stress.
In their new research, the scientists focused on a system of pain-sensing nerves within the faces of mice, known collectively as the trigeminal nerve. The trigeminal nerve is a large bundle of tens of thousands of nerve cells. Each cell is a long “wire” with a hub at its center; the hubs are grouped together into a larger hub. On one side of this hub, three smaller bundles of wires — V1, V2 and V3 — branch off. Each bundle contains individual pain-sensing wires that split off to cover a specific territory of the face. Signals are sent through the wires to the hubs of the cells and then travel to the spinal cord through a separate set of bundles. From the spinal cord, the signals are relayed to the brain, which interprets them as pain.
When the researchers pinched the V2 branch of the trigeminal nerve for a prolonged period of time, they found that the V2 and V3 territories were extra sensitive to additional pain. This spreading of pain to uninjured areas is typical of those experiencing chronic pain, but it can also be experienced during acute injuries, as when a thumb is hit with a hammer and the whole hand throbs with pain.
To figure out why, the researchers studied pain-sensing nerves in the skin of mouse ears. The smaller branches of the trigeminal V3 reach up into the skin of the lower ear. But an entirely different set of nerves is responsible for the skin of the upper ear. This distinction allowed the researchers to compare the responses of two unrelated groups of nerves that are in close proximity to each other.
To overcome the difficulty of monitoring nerve responses, Dong’s team inserted a gene into the DNA of mice so that the primary sensory nerve cells would glow green when activated. The pain-sensing nerves of the face are a subset of these.
When skin patches were then bathed in a dose of capsaicin — the active ingredient in hot peppers — the pain-sensing nerves lit up in both regions of the ear. But the V3 nerves in the lower ear were much brighter than those of the upper ear. The researchers concluded that pinching the connected-but-separate V2 branch of the trigeminal nerve had somehow sensitized the V3 nerves to “overreact” to the same amount of stimulus.
Applying capsaicin again to different areas, the researchers found that more nerve branches coming from a pinched V2 nerve lit up than those coming from an uninjured one. This suggests that nerves that don’t normally respond to pain can modify themselves during prolonged injury, adding to the pain signals being sent to the brain.
Knowing from previous studies that the protein TRPV1 is needed to activate pain-sensing nerve cells, the researchers next looked at its activity in the trigeminal nerve. They showed it was hyperactive in injured V2 nerve branches and in uninjured V3 branches, as well as in the branches that extended beyond the hub of the trigeminal nerve cell and into the spinal cord.
Next, University of Maryland experts in the neurological signaling molecule serotonin, aware that serotonin is involved in chronic pain, investigated its role in the TRPV1 activation study. The team, led by Feng Wei, M.D., Ph.D., blocked the production of serotonin, which is released from the brain stem into the spinal cord, and found that TRPV1 hyperactivity nearly disappeared.
Says Dong: “Chronic pain seems to cause serotonin to be released by the brain into the spinal cord. There, it acts on the trigeminal nerve at large, making TRPV1 hyperactive throughout its branches, even causing some non-pain-sensing nerve cells to start responding to pain. Hyperactive TRPV1 causes the nerves to fire more frequently, sending additional pain signals to the brain.”
TAU researcher finds that adults still think about numbers like kids

Children understand numbers differently than adults. For kids, one and two seem much further apart than 101 and 102, because two is twice as big as one, and 102 is just a little bigger than 101. It’s only after years of schooling that we’re persuaded to see the numbers in both sets as only one integer apart on a number line.
Now Dror Dotan, a doctoral student at Tel Aviv University’s School of Education and Sagol School of Neuroscience and Prof. Stanislas Dehaene of the Collège de France, a leader in the field of numerical cognition, have found new evidence that educated adults retain traces of their childhood, or innate, number sense — and that it’s more powerful than many scientists think.
"We were surprised when we saw that people never completely stop thinking about numbers as they did when they were children," said Dotan. "The innate human number sense has an impact, even on thinking about double-digit numbers." The findings, a significant step forward in understanding how people process numbers, could contribute to the development of methods to more effectively educate or treat children with learning disabilities and people with brain injuries.
Digital proof of a primal sense
Educated adults understand numbers “linearly,” based on the familiar number line from 0 to infinity. But children and uneducated adults, like tribespeople in the Amazon, understand numbers “logarithmically” — in terms of what percentage one number is of another. To analyze how educated adults process numbers in real time, Dotan and Dehaene asked the participants in their study to place numbers on a number line displayed on an iPad using a finger.
Previous studies showed that people who understand numbers linearly perform the task differently than people who understand numbers logarithmically. For example, linear thinkers place the number 20 in the middle of a number line marked from 0 to 40. But logarithmic thinkers like children may place the number 6 in the middle of the number line, because 1 is about the same percentage of 6 as 6 is of 40.
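The two mappings can be made concrete with a short sketch. The function names, and the illustrative choice of treating the 0 endpoint as 1 on the log scale (since log(0) is undefined), are our own, not the study's analysis code:

```python
import math

def linear_position(x, lo, hi):
    """Fraction along the line where x sits under the linear (schooled) mapping."""
    return (x - lo) / (hi - lo)

def log_position(x, lo, hi):
    """Ratio-based (logarithmic) mapping; the 0 endpoint is treated as 1,
    an illustrative choice since log(0) is undefined."""
    lo = max(lo, 1)
    return (math.log(x) - math.log(lo)) / (math.log(hi) - math.log(lo))

print(linear_position(20, 0, 40))        # 0.5: 20 is the linear midpoint of 0-40
print(round(log_position(6, 0, 40), 2))  # ~0.49: 6 sits near the ratio-based midpoint
```

This is why a logarithmic thinker puts 6, rather than 20, near the middle of a 0-to-40 line: in ratio terms, 6 is roughly equidistant from both ends.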
On the iPad used in the study, the participants were shown a number line marked only with “0” on one end and “40” on the other. Numbers popped up one at a time at the top of the iPad screen, and the participants dragged a finger from the middle of the screen down to the place on the number line where they thought each number belonged. Software tracked the path the finger took.
Changing course
Statistical analysis of the results showed that the participants placed the numbers on the number line in a linear way, as expected. But surprisingly — for only a few hundred milliseconds — they appeared to be influenced by their innate number sense. In the case of 20, for example, the participants drifted slightly rightward with their finger — toward where 20 would belong in a ratio-based number line — and then quickly corrected course. The results provide some of the most direct evidence to date that the innate number sense remains active, even if largely dormant, in educated adults.
"It really looks like the two systems in the brain compete with each other," said Dotan.
Significantly, the drift effect was found with two-digit as well as one-digit numbers. Many researchers believe that people can only convert two-digit numbers into quantities using the learned linear numerical system, which processes the quantity of each digit separately — for example, 34 is processed as 3 tens plus 4 ones. But Dotan and Dehaene’s research showed that the innate number sense is, in fact, capable of handling the complexity of two-digit numbers as well.
Scientists have identified a channel present in many pain detecting sensory neurons that acts as a ‘brake’, limiting spontaneous pain. It is hoped that the new research, published today [22 January] in the Journal of Neuroscience, will ultimately contribute to new pain relief treatments.
Spontaneous pain is ongoing pathological pain that occurs constantly (slow burning pain) or intermittently (sharp shooting pain) without any obvious immediate cause or trigger. The slow burning pain is the cause of much suffering and debilitation. Because the mechanisms underlying this type of slow burning pain are poorly understood, it remains very difficult to treat effectively.
Spontaneous pain of peripheral origin is pathological, and is associated with many types of disease, inflammation or damage of tissues, organs or nerves (neuropathic pain). Examples of neuropathic pain are nerve injury/crush, post-operative pain, and painful diabetic neuropathy.
Previous research has shown that this spontaneous burning pain is caused by continuous activity in small sensory nerve fibers, known as C-fiber nociceptors (pain neurons). Greater activity translates into greater pain, but what causes or limits this activity remained poorly understood.
Now, new research from the University of Bristol has identified a particular ion channel present exclusively in these C-fiber nociceptors. This ion channel, known as TREK2, is present in the membranes of these neurons, and the researchers showed that it provides a natural innate protection against this pain.
Ion channels are specialised proteins that are selectively permeable to particular ions. They form pores through the neuronal membrane. Leak potassium channels are unusual in that they are open most of the time, allowing positive potassium ions (K+) to leak out of the cell. This K+ leakage is the main cause of the negative membrane potentials in all neurons. TREK2 is one of these leak potassium channels. Importantly, the C-nociceptors that express TREK2 have much more negative membrane potentials than those that do not.
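The physics behind that observation is the textbook Nernst equation: a membrane made selectively permeable to K+ by leak channels settles near the K+ equilibrium potential, which is strongly negative at typical mammalian concentrations. The values below are standard textbook figures, not measurements from this study.

```python
import math

R = 8.314    # gas constant, J/(mol*K)
F = 96485.0  # Faraday constant, C/mol

def nernst(conc_out, conc_in, z=1, temp=310.0):
    """Equilibrium potential in volts for an ion of valence z,
    given extracellular and intracellular concentrations (same units)."""
    return (R * temp) / (z * F) * math.log(conc_out / conc_in)

# Typical mammalian K+ concentrations: ~5 mM outside, ~140 mM inside the neuron
e_k = nernst(conc_out=5.0, conc_in=140.0) * 1000  # convert V to mV
print(f"E_K ~= {e_k:.0f} mV")  # about -89 mV: a K+-leaky membrane sits well below zero
```

The more K+ leak conductance a neuron has, the closer its resting potential sits to this strongly negative equilibrium value, which is consistent with TREK2-expressing nociceptors being harder to excite.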
Researchers showed that when TREK2 was removed from the proximity of the cell membrane, the membrane potential in those neurons became less negative. In addition, when the neuron was prevented from synthesizing TREK2, the membrane potential also became less negative.
They also found that spontaneous pain associated with skin inflammation was increased by reducing the levels of TREK2 synthesis in these C-fiber neurons.
They concluded that in these C-fiber nociceptors, TREK2 keeps membrane potentials more negative, stabilizing the membrane potential, reducing firing and thus limiting the amount of spontaneous burning pain.
Professor Sally Lawson, from the School of Physiology and Pharmacology at Bristol University, explained: “It became evident that TREK2 kept the C-fiber nociceptor membrane at a more negative potential. Despite the difficulties inherent in the study of spontaneous pain, and the lack of any drugs that can selectively block or activate TREK2, we demonstrated that TREK2 in C-fiber nociceptors is important for stabilizing their membrane potential and decreasing the likelihood of firing. It became apparent that TREK2 was thus likely to act as a natural innate protection against pain. Our data supported this, indicating that in chronic pain states, TREK2 is acting as a brake on the level of spontaneous pain.”
Dr Cristian Acosta, the first author on the paper and now working at the Institute of Histology and Embryology of Mendoza in Argentina, said: “Given the role of TREK2 in protecting against spontaneous pain, it is important to advance our understanding of the regulatory mechanisms controlling its expression and trafficking in these C-fiber nociceptors. We hope that this research will enable development of methods of enhancing the actions of TREK2 that could, some years hence, provide relief for sufferers of ongoing spontaneous burning pain.”
Using a simple study of eye movements, Johns Hopkins scientists report evidence that people who are less patient tend to move their eyes with greater speed. The findings, the researchers say, suggest that the weight people give to the passage of time may be a trait consistently used throughout their brains, affecting the speed with which they make movements, as well as the way they make certain decisions.

Caption: Despite claims to the contrary, the eyes of the Mona Lisa do not make saccades. Credit: Leonardo da Vinci
In a summary of the research to be published Jan. 21 in The Journal of Neuroscience, the investigators note that a better understanding of how the human brain evaluates time when making decisions might also shed light on why malfunctions in certain areas of the brain make decision-making harder for those with neurological disorders like schizophrenia, or for those who have experienced brain injuries.
Principal investigator Reza Shadmehr, Ph.D., professor of biomedical engineering and neuroscience at The Johns Hopkins University, and his team set out to understand why some people are willing to wait and others aren’t. “When I go to the pharmacy and see a long line, how do I decide how long I’m willing to stand there?” he asks. “Are those who walk away and never enter the line also the ones who tend to talk fast and walk fast, perhaps because of the way they value time in relation to rewards?”
To address the question, the Shadmehr team used very simple eye movements, known as saccades, to stand in for other bodily movements. Saccades are the motions that our eyes make as we focus on one thing and then another. “They are probably the fastest movements of the body,” says Shadmehr. “They occur in just milliseconds.” Human saccades are fastest when we are teenagers and slow down as we age, he adds.
In earlier work, using a mathematical theory, Shadmehr and colleagues had shown that, in principle, the speed at which people move could be a reflection of the way the brain calculates the passage of time to reduce the value of a reward. In the current study, the team wanted to test the idea that differences in how subjects moved were a reflection of differences in how they evaluated time and reward.
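One common way to formalize "the brain calculates the passage of time to reduce the value of a reward" is hyperbolic temporal discounting. The sketch below uses that standard form purely as an illustration; it is not claimed to be the authors' exact model, and the parameter values are hypothetical.

```python
def discounted_value(reward, delay, k):
    """Hyperbolic temporal discounting: the subjective value of a reward
    shrinks with delay; a larger k means steeper devaluation (more impatience)."""
    return reward / (1.0 + k * delay)

# Two agents facing the same reward delayed by 2 seconds
patient = discounted_value(10.0, 2.0, k=0.2)    # ~7.14: delay costs little
impatient = discounted_value(10.0, 2.0, k=1.0)  # ~3.33: delay costs a lot
print(patient > impatient)
```

Under this kind of model, an agent with a steep discount rate has less to gain from waiting, which could show up both in faster movements and in less patient decisions.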
For the study, the team first asked healthy volunteers to look at a screen upon which dots would appear one at a time –– first on one side of the screen, then on the other, then back again. A camera recorded their saccades as they looked from one dot to the other. The researchers found a lot of variability in saccade speed among individuals but very little variation within individuals, even when tested at different times and on different days. Shadmehr and his team concluded that saccade speed appears to be an attribute that varies from person to person. “Some people simply make fast saccades,” he says.
To determine whether saccade speed correlated with decision-making and impulsivity, the volunteers were told to watch the screen again. This time, they were given visual commands to look to the right or to the left. When they responded incorrectly, a buzzer sounded.
After becoming accustomed to that part of the test, they were forewarned that during the following round of testing, if they followed the command right away, they would be wrong 25 percent of the time. In those instances, after an undetermined amount of time, the first command would be replaced by a second command to look in the opposite direction.
To pinpoint exactly how long each volunteer was willing to wait to improve his or her accuracy on that phase of the test, the researchers adjusted the interval between the two commands based on the volunteer’s previous decision. For example, if a volunteer chose to wait for the second command, the researchers lengthened the delay on each subsequent trial until they found the maximum time the volunteer was willing to wait — only 1.5 seconds for the most patient volunteer. If a volunteer chose to act immediately, the researchers shortened the delay to find the minimum time the volunteer was willing to wait to improve his or her accuracy.
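The adaptive procedure described above amounts to a simple up/down staircase on the inter-command delay. A minimal sketch, with hypothetical step size and bounds:

```python
def update_delay(delay, waited, step=0.1, floor=0.0, ceiling=3.0):
    """One step of a simple up/down staircase on the inter-command delay (seconds).

    If the volunteer waited for the second command, lengthen the delay to
    probe their limit; if they acted on the first command, shorten it.
    """
    delay += step if waited else -step
    return min(max(delay, floor), ceiling)

# Simulate a volunteer whose tolerance for waiting is about 1.5 seconds
delay = 0.5
for _ in range(40):
    waited = delay < 1.5          # simulated behaviour, not real data
    delay = update_delay(delay, waited)
print(round(delay, 2))            # hovers around the 1.5 s threshold
```

Staircases like this converge on the threshold where the participant's behaviour flips, which is what lets the researchers assign each volunteer a single "willingness to wait" value.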
When the speed of the volunteers’ saccades was compared to their impulsivity during the patience test, there was a strong correlation. “It seems that people who make quick movements, at least eye movements, tend to be less willing to wait,” says Shadmehr. “Our hypothesis is that there may be a fundamental link between the way the nervous system evaluates time and reward in controlling movements and in making decisions. After all, the decision to move is motivated by a desire to improve one’s situation, which is a strong motivating factor in more complex decision-making, too.”
A new brain-imaging technique enables people to ‘watch’ their own brain activity in real time and to control or adjust function in pre-determined brain regions. The study from the Montreal Neurological Institute and Hospital – The Neuro, McGill University and the McGill University Health Centre, published in NeuroImage, is the first to demonstrate that magnetoencephalography (MEG) can be used as a potential therapeutic tool to control and train specific targeted brain regions. This advanced brain-imaging technology has important clinical applications for numerous neurological and neuropsychiatric conditions.

MEG is a non-invasive imaging technology that measures magnetic fields generated by nerve cell circuits in the brain. MEG captures these tiny magnetic fields with remarkable accuracy and has unrivaled time resolution - a millisecond time scale across the entire brain. “This means you can observe your own brain activity as it happens,” says Dr. Sylvain Baillet, acting Director of the Brain Imaging Centre at The Neuro and lead investigator on the study. “We can use MEG for neurofeedback – a process by which people can see on-going physiological information that they aren’t usually aware of, in this case, their own brain activity, and use that information to train themselves to self-regulate. Our ultimate hope and aim is to enable patients to train specific regions of their own brain, in a way that relates to their particular condition. For example, neurofeedback can be used by people with epilepsy so that they can train to modify brain activity in order to avoid a seizure.”
In this proof of concept study, participants had nine sessions in the MEG and used neurofeedback to reach a specific target. The target was to look at a coloured disc on a display screen and find their own strategy to change the disc’s colour from dark red to bright yellow-white, and to maintain that bright colour for as long as possible. The disc colour was indexed on a very specific aspect of their ongoing brain activity: the researchers had set it up so that the experiment was accessing predefined regions of the motor cortex in the participants’ brain. The colour presented was changing according to a predefined combination of slow and faster brain activity within these regions. This was possible because the researchers combined MEG with MRI, which provides information on the brain’s structures, known as magnetic source imaging (MSI).
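As a toy illustration of how a feedback display might index a signal to colour, the sketch below linearly interpolates a normalized activity level between dark red and bright yellow. The RGB endpoints and the linear mapping are our assumptions for illustration, not the study's actual display code:

```python
def activity_to_rgb(level):
    """Map a normalized brain-activity level in [0, 1] to a disc colour,
    interpolating from dark red (at 0) to bright yellow (at 1)."""
    level = min(max(level, 0.0), 1.0)      # clamp out-of-range readings
    dark_red = (139, 0, 0)
    bright_yellow = (255, 255, 0)
    return tuple(round(a + (b - a) * level)
                 for a, b in zip(dark_red, bright_yellow))

print(activity_to_rgb(0.0))  # (139, 0, 0)
print(activity_to_rgb(1.0))  # (255, 255, 0)
```

A real-time loop would recompute the level from the MEG source signal many times per second and redraw the disc, giving the participant continuous feedback to steer against.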
“The remarkable thing is that with each training session, the participants were able to reach the target aim faster, even though we were raising the bar for the target objective in each session, the way you raise the bar each time in a high jump competition. These results showed that participants were successfully using neurofeedback to alter their pattern of brain activity according to a predefined objective in specific regions of their brain’s motor cortex, without moving any body part. This demonstrates that MEG source imaging can provide brain region-specific real time neurofeedback and that longitudinal neurofeedback training is possible with this technique.”
These findings pave the way for MEG as an innovative therapeutic approach for treating patients. To date, work with epilepsy patients has shown the most promise but there is great potential to use MEG to investigate other neurological syndromes and neuropsychiatric disorders (e.g., stroke, dementia, movement disorders, chronic depression). MEG has potential to reveal dynamics of brain activity involved in perception, cognition and behaviour: it has provided unique insight on brain functions (language, motor control, visual and auditory perception, etc.) and dysfunctions (movement disorders, tinnitus, chronic pain, dementia, etc.).
Dr. Baillet and his team are currently collaborating with Prof. Isabelle Peretz at Université de Montréal to use this technique with people who have amusia, a disorder that makes them unable to process musical pitch. It is hypothesized that amusia results from poor connectivity between the auditory cortex and prefrontal regions in the brain. In an ongoing study, the team is measuring the intensity of functional connectivity between these brain regions in amusic patients and age-matched healthy controls. Using MEG neurofeedback, they hope to take advantage of the brain’s plasticity to reinforce the functional connectivity between the target brain regions. If the approach improves pitch discrimination in participants, that would demonstrate its clinical and rehabilitative applications. The baseline measurements have already been taken, and the training sessions will take place over this year.
Scientists from the Montreal Neurological Institute and Hospital in Canada have discovered that two genes linked to hereditary Parkinson’s disease are involved in the early-stage quality control of mitochondria. The protective mechanism, which is reported in The EMBO Journal, removes damaged proteins that arise from oxidative stress from mitochondria.
“PINK1 and parkin are implicated in selectively targeting dysfunctional components of mitochondria to the lysosome under conditions of excessive oxidative damage within the organelle,” said Edward Fon, Professor at the McGill Parkinson Program at the Montreal Neurological Institute and Hospital. “Our study reveals a quality control mechanism where vesicles bud off from mitochondria and proceed to the lysosome for degradation. This method is distinct from the degradation pathway for damaged whole mitochondria, which has been known for some time. It is also an early response, proceeding on a timescale of hours instead of days.”
The deterioration of mechanisms designed to maintain the integrity and function of mitochondria throughout the lifetime of a cell has been suggested to underlie the progression of several neurodegenerative diseases, including Parkinson’s disease. When mitochondria, the “power plants” of the cell that provide energy, malfunction, they can contribute to Parkinson’s disease. If they are to survive and function, mitochondria need to degrade oxidized and damaged proteins.
In the study, immunofluorescence and confocal microscopy were used to observe how the vesicles “pinch off” from mitochondria with their damaged cargo. “Our conclusion is that the loss of this PINK1 and parkin-dependent trafficking system impairs the ability of mitochondria to selectively degrade oxidized and damaged proteins and leads, over time, to the mitochondrial dysfunction noted in hereditary Parkinson’s disease,” said Heidi McBride, Professor in the Neuromuscular Group in the Department of Neurology and Neurosurgery at the Montreal Neurological Institute and Hospital.
Both salvage pathways are operational in the cell. If the vesicular pathway, the first line of defense, is overwhelmed and the damage is irreversible then the entire organelle is targeted for degradation.
Cleveland Clinic researchers have identified a protein in the brain that plays a critical role in the memory loss seen in Alzheimer’s patients, according to a study to be published in the journal Nature Neuroscience and posted online today.
The protein – Neuroligin-1 (NLGN1) – is known to be involved in memory formation; this is the first time it’s been linked to amyloid-associated memory loss.
In Alzheimer’s disease, amyloid beta proteins accumulate in the brain and induce inflammation. This inflammation leads to certain gene modifications that interrupt the functioning of synapses in the brain, leading to memory loss.
Using animal models, Cleveland Clinic researchers have discovered that during this neuroinflammatory process, the epigenetic modification of NLGN1 disrupts the synaptic network in the brain, which is responsible for developing and maintaining memories. Destroying this network can lead to the type of memory loss seen in Alzheimer’s patients.
"Alzheimer’s is a challenging disease that researchers have been approaching from all angles," said Mohamed Naguib, M.D., the Cleveland Clinic physician who led the study. "This discovery could provide us with a new approach for preventing and treating Alzheimer’s disease."
Previous studies from this group of researchers have also identified a novel compound called MDA7, which can potentially stop the neuroinflammatory process that leads to the modification of NLGN1. Treatment with the compound restored cognition, memory and synaptic plasticity – a key neurological foundation of learning and memory – in an animal model. Significant preliminary work for the first-in-man study has been completed for MDA7 including in-vitro studies and preliminary clinical toxicology and pharmacokinetic work. The Cleveland Clinic plans to initiate Phase I human studies on the safety of this class of compounds in the near future.
Alzheimer’s disease is an irreversible, fatal brain disease that slowly destroys memory and thinking skills. About 5 million people in the United States have Alzheimer’s disease. With the aging of the population, and without successful treatment, there will be 16 million Americans and 106 million people worldwide with Alzheimer’s by 2050, according to the 2011 Alzheimer’s Disease Facts and Figures report from the Alzheimer’s Association.