While the search continues for the Fountain of Youth, researchers may have found the body’s “fountain of aging”: the brain region known as the hypothalamus. For the first time, scientists at Albert Einstein College of Medicine of Yeshiva University report that the hypothalamus of mice controls aging throughout the body. Their discovery of a specific age-related signaling pathway opens up new strategies for combating diseases of old age and extending lifespan. The paper was published today in the online edition of Nature.

“Scientists have long wondered whether aging occurs independently in the body’s various tissues or if it could be actively regulated by an organ in the body,” said senior author Dongsheng Cai, M.D., Ph.D., professor of molecular pharmacology at Einstein. “It’s clear from our study that many aspects of aging are controlled by the hypothalamus. What’s exciting is that it’s possible — at least in mice — to alter signaling within the hypothalamus to slow down the aging process and increase longevity.”
The hypothalamus, an almond-sized structure located deep within the brain, is known to have fundamental roles in growth, development, reproduction, and metabolism. Dr. Cai suspected that the hypothalamus might also play a key role in aging through the influence it exerts throughout the body.
“As people age,” he said, “you can detect inflammatory changes in various tissues. Inflammation is also involved in various age-related diseases, such as metabolic syndrome, cardiovascular disease, neurological disease and many types of cancer.” Over the past several years, Dr. Cai and his research colleagues showed that inflammatory changes in the hypothalamus can give rise to various components of metabolic syndrome (a combination of health problems that can lead to heart disease and diabetes).
To find out how the hypothalamus might affect aging, Dr. Cai decided to study hypothalamic inflammation by focusing on a protein complex called NF-κB (nuclear factor kappa-light-chain-enhancer of activated B cells). “Inflammation involves hundreds of molecules, and NF-κB sits right at the center of that regulatory map,” he said.
In the current study, Dr. Cai and his team demonstrated that activating the NF-κB pathway in the hypothalamus of mice significantly accelerated aging, as shown by various physiological, cognitive, and behavioral tests. “The mice showed a decrease in muscle strength and size, in skin thickness, and in their ability to learn — all indicators of aging. Activating this pathway promoted systemic aging that shortened the lifespan,” he said.
Conversely, Dr. Cai and his group found that blocking the NF-κB pathway in the hypothalamus of mouse brains slowed aging and increased median longevity by about 20 percent, compared to controls.
The researchers also found that activating the NF-κB pathway in the hypothalamus caused declines in levels of gonadotropin-releasing hormone (GnRH), which is synthesized in the hypothalamus. Release of GnRH into the blood is usually associated with reproduction. Suspecting that reduced release of GnRH from the brain might contribute to whole-body aging, the researchers injected the hormone into a hypothalamic ventricle (chamber) of aged mice. Strikingly, the injections protected the mice from the impaired neurogenesis (the creation of new neurons in the brain) associated with aging. When aged mice received daily GnRH injections for a prolonged period, the therapy slowed age-related cognitive decline, probably as a result of that neurogenesis.
According to Dr. Cai, preventing the hypothalamus from causing inflammation and increasing neurogenesis via GnRH therapy are two potential strategies for increasing lifespan and treating age-related diseases. This technology is available for licensing.
A small group of elusive neurons in the brain’s cortex plays a big role in ALS (amyotrophic lateral sclerosis), a swift and fatal neurodegenerative disease that paralyzes its victims. But the neurons have always been difficult to study because there are so few of them and they look so similar to other neurons in the cortex.
In a new preclinical study, a Northwestern Medicine® scientist has isolated the motor neurons in the brain that die in ALS and, for the first time, dressed them in a green fluorescent jacket. Now they’re impossible to miss and easy to study.
The cells slide on neon jackets when they are born and continue to wear them as they age and become sick. As a result, scientists will now be able to track what goes wrong in these cells to cause their deaths and be able to search for effective treatments.
"We have developed the tool to investigate what makes these cells become vulnerable and sick," said Hande Ozdinler, senior author of the study and assistant professor of neurology at Northwestern University Feinberg School of Medicine. "This was not possible before."
Ozdinler and colleagues also identified the motor neurons that don’t die, enabling scientists to study what protects them.
The study will be published in the Journal of Neuroscience on May 1.
ALS, also known as Lou Gehrig’s disease, causes the death of muscle-controlling nerve cells in the brain and spinal cord (motor neurons). It results in rapidly progressing paralysis and death usually within three to five years of the onset of symptoms.
There are about 75,000 upper motor neurons affected in ALS out of some 2 billion cells in the brain. Previously, the only way to study the upper motor neurons was to extract them through surgery, a difficult process that was beyond the scope of most scientists and still didn’t allow examination of the ailing neurons at various stages of the disease.
"You couldn’t study them at the cellular level, so the research field ignored them," Ozdinler said. She is one of the few scientists in the country who study cortical motor neurons. Most ALS research has focused on the death of motor neurons in the spinal cord.
Key puzzle piece: Why ALS moves so swiftly
But the brain’s motor neurons are a key piece of the ALS puzzle. Their disintegration explains why the disease advances more swiftly than other neurodegenerative diseases. It had previously been thought that the spinal motor neurons died first and their demise led to the secondary death of the brain’s motor neurons. But Ozdinler’s recent research showed that the motor neurons in the brain and spinal cord die simultaneously.
"The whole system collapses at once," Ozdinler said. "It’s degeneration from both ends, which is why the disease moves so swiftly."
Every voluntary movement is initiated and modulated by upper motor neurons — answering a cell phone, typing an email, walking to the store. The upper motor neurons tell the spinal motor neurons what to do. In ALS, both the directing neurons and the neurons that create the movement disintegrate at the same time.
Finding the light that never goes out
Ozdinler spent the last four years figuring out how to permanently sheath cortical motor neurons in fluorescence.
Although scientists can flag spinal cord motor neurons with fluorescence, the labeling wears off as the neuron ages because the process uses an embryonic gene. Ozdinler wanted a longer-lasting effect so scientists could study the neuron as it ages and develops ALS. She sorted through 6,000 upper motor neuron genes that are vulnerable to ALS before she found one — UCHL1 — that is expressed through adulthood.
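Conceptually, the screen described here is a filter over an expression table: keep only candidate genes whose expression persists from embryonic stages through adulthood. A toy sketch of that kind of filter, with made-up genes and illustrative values (only UCHL1's persistence comes from the article):

```python
# Hypothetical expression levels (arbitrary units) for a few candidate
# genes at three developmental stages; the values are illustrative only.
expression = {
    "UCHL1":  {"embryonic": 8.1, "juvenile": 7.9, "adult": 8.4},
    "GENE_A": {"embryonic": 9.2, "juvenile": 2.1, "adult": 0.3},
    "GENE_B": {"embryonic": 5.5, "juvenile": 4.8, "adult": 0.9},
}

THRESHOLD = 2.0  # minimum level to count as "expressed" (an assumption)

def expressed_through_adulthood(levels, threshold=THRESHOLD):
    """A gene qualifies only if it stays above threshold at every stage."""
    return all(level >= threshold for level in levels.values())

persistent = [gene for gene, levels in expression.items()
              if expressed_through_adulthood(levels)]
print(persistent)  # only UCHL1 survives the filter in this toy data
```

In this toy table, only UCHL1 stays above threshold at every stage; the genes whose expression collapses after the embryonic period are discarded, mirroring why an embryonic-gene label fades with age.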
She used that gene — which had been cloned with the fluorescence molecule — and created a mouse model whose upper motor neurons shimmer in green. Then she mated that mouse with an ALS transgenic mouse model. The result is a mouse with fluorescent diseased motor neurons in the brain.
"Now we have a model of one motor neuron population that dies and one that is resistant," Ozdinler said. "That’s the perfect experiment. You can ask what does this neuron have that makes it resistant and what does the other one have that makes it vulnerable? That’s what we will find out."
Marina Yasvoina, a graduate student, and Baris Genc, a postdoctoral fellow, both in Ozdinler’s lab, are the lead authors of the paper. Ozdinler collaborated with Gordon Shepherd, associate professor of physiology, and C.J. Heckman, professor in physiology, both at Feinberg.
"This work was possible thanks to the collaborative nature of Northwestern," Ozdinler said.
“This study represents a fusion of the leadership and neuroscience fields, and this fusion can revolutionize approaches to assessing and developing leaders,” says Hannah, the Tylee Wilson Chair in business ethics and professor of management at the Wake Forest University School of Business. Hannah is lead author of the paper in the May 2013 Journal of Applied Psychology titled, “The Psychological and Neurological Bases of Leader Self-Complexity and Effects on Adaptive Decision-Making.”
Hannah and four colleagues tested 103 young military leaders between the ranks of officer cadet and major at a U.S. Army base on the east coast. They administered psychological exams to assess the complexity of leaders’ identities, and neurological exams to assess the complexity of soldiers’ brain activity. For the brain tests, the researchers attached quantitative electroencephalogram (qEEG) electrodes to 19 areas of each soldier’s scalp.
Hannah and his fellow researchers wanted to know whether great leaders had more complex brains, as measured by electrodes that recorded which parts of the brain were firing together at the same time. A low-complexity brain shows more areas operating simultaneously at the same electrical amplitude and frequency, which suggests those areas converge to process the same task, leaving fewer brain resources for other tasks and processes. This is a state called “phase lock.”
But in high-complexity brains, the activity patterns are much more differentiated and varied, which suggests more of the brain’s resources are available at any one time to handle new situations or tasks.
“Think of it as a single core versus a multicore computer’s central processing unit (CPU),” Hannah says. “A multicore CPU can multitask because one core can process a task while the other CPU cores remain free to process new tasks. More complex brains are also more efficient in locking together only the brain resources needed to process a task and then efficiently releasing them when no longer needed.”
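The "phase lock" between two signals described above is commonly quantified as a phase-locking value (PLV): extract each signal's instantaneous phase, then measure how stable the phase difference stays over time. A minimal sketch of that standard signal-processing computation (this is the general technique, not the study's actual qEEG pipeline):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV between two signals: near 1.0 = strongly phase-locked, near 0 = unrelated.

    The Hilbert transform gives each signal an instantaneous phase;
    PLV is the magnitude of the average phase-difference vector.
    """
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
sig_a = np.sin(2 * np.pi * 10 * t)          # 10 Hz oscillation
sig_b = np.sin(2 * np.pi * 10 * t + 0.5)    # same rhythm, fixed phase lag
noise = rng.standard_normal(1000)           # unrelated activity

print(phase_locking_value(sig_a, sig_b))    # high: "phase locked"
print(phase_locking_value(sig_a, noise))    # much lower: independent
```

Two channels oscillating with a fixed phase relationship give a PLV near 1 (the "phase lock" the article describes), while unrelated channels give a value near 0.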
The study showed the high-complexity brains of the great leaders had a different “landscape.” The scans showed more differentiated activation patterns in the frontal and prefrontal lobes of leaders who demonstrated greater decisiveness, adaptive thinking and positive action orientation in the experiment.
“Further, individuals who have developed richer and more elaborate self-concepts as leaders were found to be more complex and adaptable,” Hannah says. “These findings have important implications for identifying and developing leaders who can lead effectively in today’s changing, dynamic, and often volatile organizational contexts.”
The research team suggests that once they validate neurological profiles of leaders with high complex brains, they will be able to use established techniques like neuro-feedback to enhance these leadership skills in others. Neuro-feedback has been successfully used with elite athletes, concert musicians and financial traders in their training. These profiles can also be used to assess leaders and track their development over time.
These findings have relevance to the WFU School of Business’s new student development framework, which focuses on developing practical wisdom, strategic thinking and critical thinking skills, along with the ability to embrace complexity and ambiguity.
Hannah’s co-authors include Pierre Balthazard, dean of the School of Business at Saint Bonaventure University; David A. Waldman, professor of business at Arizona State University; Peter L. Jennings, of the Center for the Army Profession and Ethic at West Point; and Robert W. Thatcher of the University of South Florida.
This research team is at the forefront of applying neuroscience to study effective leadership. The team previously published a 2012 paper in the Leadership Quarterly, which identified unique brain functioning in leaders who are seen by their followers as highly inspirational and charismatic.
Congenital amusia is a disorder characterized by impaired musical skills, which can extend to an inability to recognize very familiar tunes. The neural bases of this deficit are now being deciphered. According to a study conducted by researchers from CNRS and Inserm at the Centre de Recherche en Neurosciences de Lyon (CNRS / Inserm / Université Claude Bernard Lyon 1), amusics exhibit altered processing of musical information in two regions of the brain: the auditory cortex and the frontal cortex, particularly in the right cerebral hemisphere. These alterations seem to be linked to anatomical anomalies in these same cortices. This work, published in May in the journal Brain, adds invaluable information to our understanding of amusia and, more generally, of the “musical brain”, in other words the cerebral networks involved in the processing of music.

Congenital amusia, which affects between 2 and 4% of the population, can manifest itself in various ways: by difficulty in hearing a “wrong note”, by singing “out of tune” and sometimes by an aversion to music. For some of these individuals, music is like a foreign language or a simple noise. Amusia is not due to any auditory or psychological problem and does not seem to be linked to other neurological disorders. Research on the neural bases of this impairment only began a decade ago with the work of the Canadian neuropsychologist Isabelle Peretz.
Two teams from the Centre de Recherche en Neurosciences de Lyon (CNRS / Inserm / Université Claude Bernard Lyon 1) have studied the encoding of musical information and the short-term memorization of notes. According to previous work, amusic individuals have particular difficulty hearing the pitch of notes (low or high) and, although they remember sequences of words normally, they have difficulty memorizing sequences of notes.
In a bid to determine the regions of the brain involved in these memorization difficulties, the researchers recorded magnetoencephalograms (a technique that measures the very weak magnetic fields produced by neural activity at the surface of the head) from a group of amusics while they performed a musical task. The task consisted of listening to two tunes separated by a two-second gap. The volunteers were asked to determine whether the tunes were identical or different.
The scientists observed that, when hearing and memorizing notes, amusics exhibited altered sound processing in two regions of the brain: the auditory cortex and the frontal cortex, essentially in the right hemisphere. Compared to non-amusics, their neural activity was delayed and impaired in these specific areas when encoding musical notes. These anomalies occurred 100 milliseconds after the start of a note.
These results agree with an anatomical observation that the researchers have confirmed using MRI: amusic individuals have an excess of grey matter in the inferior frontal cortex, accompanied by a deficit in white matter, one of whose essential constituents is myelin, which surrounds and protects the axons of neurons and helps nerve signals propagate rapidly. The researchers also observed anatomical anomalies in the auditory cortex. These data lend weight to the hypothesis that amusia could be due to insufficient communication between the auditory cortex and the frontal cortex.
Amusia thus stems from impaired neural processing from the very first steps of sound processing in the auditory nervous system. This work makes it possible to envisage a program to remedy these musical difficulties, by targeting the early steps of the processing of sounds and their memorization.
Scientists have identified a gene that keeps our nerve fibers from clogging up. Researchers in Ken Miller’s laboratory at the Oklahoma Medical Research Foundation (OMRF) found that the unc-16 gene of the roundworm Caenorhabditis elegans encodes a gatekeeper that restricts flow of cellular organelles from the cell body to the axon, a long, narrow extension that neurons use for signaling. Organelles clogging the axon could interfere with neuronal signaling or cause the axon to degenerate, leading to neurodegenerative disorders. This research, published in the May 2013 Genetics Society of America’s journal GENETICS, adds an unexpected twist to our understanding of trafficking within neurons.
Proteins equivalent to UNC-16 are present in the neurons of all animals, including humans, and are known to interact with proteins associated with neurodegenerative disorders in humans (Hereditary Spastic Paraplegia) and mice (Legs at Odd Angles). However, the underlying cause of these disorders is not well understood.
"Our UNC-16 study provides the first insights into a previously unrecognized trafficking system that protects axons from invasion by organelles from the cell soma," Dr. Miller said. "A breakdown in this gatekeeper may be the underlying cause of this group of disorders," he added.
The use of the model organism C. elegans, a tiny, translucent roundworm with only 302 neurons, enabled the discovery because the researchers were able to apply complex genetic techniques and imaging methods in living organisms, which would be impossible in larger animals. Dr. Miller’s team tagged organelles with fluorescent proteins and then used time-lapse imaging to follow the movements of the organelles. In normal axons, organelles exited the cell body and entered the initial segment of the axon, but did not move beyond that. In axons of unc-16 mutants, the organelles hitched a ride on tiny motors that carried them deep into the axon, where they accumulated.
Dr. Miller acknowledges there are still a lot of unanswered questions. His lab is currently investigating how UNC-16 performs its crucial gatekeeper function by looking for other mutant worms with similar phenotypes. A Commentary on the article, also published in this issue of GENETICS, calls the work “provocative”, and highlights several important questions prompted by this pioneering study.
"This research once again shows how studies of simple model organisms can bring insight into complex neurodegenerative diseases in humans," said Mark Johnston, Editor-in-Chief of the journal GENETICS. “This kind of basic research is necessary if we are to understand diseases that can’t easily be studied in more complex animals.”
It sounds like science fiction, but researchers are gaining ground in developing mind-controlled robotic arms that could give people with paralysis or amputated limbs more independence.

The technology, known as brain-computer (or brain-machine) interface, is in its infancy as far as human use — though scientists have been studying the concept for years. But experts say that people with paralysis or amputations could be using the technology at home within the next decade.
It basically boils down to people using their thoughts to control a robot arm that then performs a desired task, like grasping and moving a cup. That’s done via tiny electrode “grids” implanted in the brain that read the movement signals firing from individual nerve cells, then translate them to the robot arm.
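In its simplest textbook form, the translation step described above, from the firing rates of many recorded neurons to a movement command, is a linear decoder: each neuron's rate is weighted and summed to estimate intended velocity. A toy sketch on synthetic data (the channel count and noise levels are assumptions; the decoder actually used in this research is not described in the article):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic session: 500 time steps of intended 2-D hand velocity,
# recorded by 96 neurons (channel count assumed for the sketch).
n_steps, n_neurons = 500, 96
t = np.linspace(0, 10, n_steps)
velocity = np.column_stack([np.sin(t), np.cos(t)])   # (steps, 2)

# Each neuron's firing rate is a noisy linear function of velocity.
tuning = rng.standard_normal((2, n_neurons))
rates = velocity @ tuning + 0.3 * rng.standard_normal((n_steps, n_neurons))

# "Training": fit decoder weights by least squares (rates -> velocity).
decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# "Use": estimate intended velocity from firing rates alone.
decoded = rates @ decoder
error = np.sqrt(np.mean((decoded - velocity) ** 2))
print(f"RMS decoding error: {error:.3f}")
```

Because many noisy neurons each carry a piece of the movement signal, the weighted sum recovers the intended trajectory far more accurately than any single channel could, which is one reason implanted grids with many electrodes outperform a handful of surface channels.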
"We have the ability to capture information from the brain and use it to control the robotic arm," said Dr. Elizabeth Tyler-Kabara, who presented her team’s latest findings on the technology Tuesday, at the annual meeting of the American Association of Neurological Surgeons, in New Orleans.
However, she stressed, “we still have a ton to learn.”
Right now, the robot arm is confined to the lab. After getting their electrodes implanted, study patients come to the lab to work with the robotic limb under the researchers’ supervision. So far, Tyler-Kabara and her colleagues at the University of Pittsburgh School of Medicine have tested the approach in one patient. Researchers at Brown University in Providence, R.I., have done it in a handful of others.
One of the big questions, Tyler-Kabara said, is “how much control is enough?” That is, how well does the mind-controlled arm need to work to bring real everyday benefits to people?
At the meeting on Tuesday, Tyler-Kabara presented an update on how her team’s patient is faring. The 53-year-old woman had long-standing quadriplegia due to a disease called spinocerebellar degeneration — where, for unknown reasons, the connections between the brain and muscles slowly deteriorate.
Tyler-Kabara performed the surgery, where two tiny electrode grids were placed in the area of the brain that would normally control the movement of the right hand and arm. The electrode points penetrate the brain’s surface by about one-sixteenth of an inch.
"The idea is pretty scary," Tyler-Kabara acknowledged. But her team’s patient had no complications from the surgery and left the hospital the next day. There’ve been no longer-term problems either, she said — though, in theory, there would be concerns about infection or bleeding over the long haul.
The surgery left the patient with two terminals that protrude through her skull. The researchers used those to connect the implanted electrodes to a computer, where they could see brain cells firing when the patient thought about moving her hand.
She was quickly able to master simple movements with the robotic arm, like high-fiving the researchers. And after six months, she was performing “10-degrees-of-freedom” movements, Tyler-Kabara reported at the meeting.
That includes not only moving the arm, but also flexing and rotating the wrist, grasping objects and affecting several different hand “postures.” She has accomplished feats like feeding herself chocolate.
The researchers initially used a computer in training sessions with the patient, but the robot arm is now directly linked to the electrodes, so there is no need for “computer assistance,” according to Tyler-Kabara.
Still, before the technology can ultimately be used at home, she said, researchers have to devise a “fully implanted” wireless system for controlling the robot arm.
Another expert talked about the new technology.
"This is one more encouraging step toward developing something practical that people can use in their daily lives," said Dr. Robert Grossman, a neurosurgeon at Methodist Neurological Institute in Houston, who was not involved in the research.
It’s hard to put a timeline on it all, Grossman said, since technological advances could change things. He also noted that several research groups are looking at different approaches to brain-computer interfaces.
One, Grossman said, is to do it noninvasively, through electrodes placed on the scalp.
Study author Tyler-Kabara said that noninvasive approach has met with success in helping people perform simple tasks, like moving a cursor on a computer screen. “But I don’t think it will ever be good enough for performing complicated tasks,” she said, noting that it can’t work as precisely as the implanted electrodes.
A next step, Tyler-Kabara said, is to develop a “two-way” electrode system that stimulates the brain to generate sensation — with the aim of helping people adjust the robot’s grip strength.
She said there is also much to learn about which people will ultimately be good candidates for the technology. There may, for example, be some brain injuries that prevent people from benefiting.
Because this study was presented at a medical meeting, the data and conclusions should be viewed as preliminary until published in a peer-reviewed journal.
Most infants respond to a game of peek-a-boo with smiles at the very least, and, for those who find the activity particularly entertaining, gales of laughter. For infants with autism spectrum disorders (ASD), however, the game can be distressing rather than pleasant, and they’ll do their best to tune out all aspects of it –– and that includes the people playing with them.

That disengagement is a hallmark of ASD, and one of the characteristics that amplifies the disorder as infants develop into children and then adults.
A study conducted by researchers at the Koegel Autism Center at UC Santa Barbara has found that replacing such games with those the infant prefers can actually lessen the severity of the infants’ ASD symptoms and, perhaps, alleviate the condition altogether. Their work is highlighted in the current issue of the Journal of Positive Behavior Interventions.
Lynn Koegel, clinical director of the center and the study’s lead author, described the game-playing protocol as a modified Pivotal Response Treatment (PRT). Developed at UCSB, PRT is based on principles of positive motivation. The researchers identified the activities that seemed to be more enjoyable to the infants and taught the respective parents to focus on those rather than on the typical games they might otherwise choose. “We had them play with their infants for short periods, and then give them some kind of social reward,” Koegel said. “Over time, we conditioned the infants to enjoy all the activities that were presented by pairing the less desired activities with the highly desired ones.” The social reward is preferable to, say, a toy, Koegel noted, because it maintains the ever-crucial personal interaction.
"The idea is to get them more interested in people," she continued, "to focus on their socialization. If they’re avoiding people and avoiding interacting, that creates a whole host of other issues. They don’t form friendships, and then they don’t get the social feedback that comes from interacting with friends."
According to Koegel, by the end of the relatively short one- to three-month intervention period, which included teaching the parents how to implement the procedures, all the infants in the study had normal reactions to stimuli. “Two of the three have no disabilities at all, and the third is very social,” she said. “The third does have a language delay, but that’s more manageable than some of the other issues.”
On a large scale, Koegel hopes to establish some benchmark for identifying social deficits in infants so parents and health care providers can intervene sooner rather than later. “We have a grant from the Autism Science Foundation to look at lots of babies and try to really figure out which signs are red flags, and which aren’t,” she said. “A number of the infants who show signs of autism will turn out to be perfectly fine; but we’re saying, let’s not take the risk if we can put an intervention in play that really works. Then we don’t have to worry about whether or not these kids would develop the full-blown symptoms of autism.”
Historically, ASD is diagnosed in children 18 months or older, and treatment generally begins around 4 years. “You can pretty reliably diagnose kids at 18 months, especially the more severe cases,” said Koegel. “The mild cases might be a little harder, especially if the child has some verbal communication. There are a few measures –– like the ones we used in our study –– that can diagnose kids pre-language, even as young as six months. But ours was the first that worked with children under 12 months and found an effective intervention.”
Given the increasing number of children being diagnosed with ASD, Koegel’s findings could be life altering –– literally. “When you consider that the recommended intervention for preschoolers with autism is 30 to 40 hours per week of one-on-one therapy, this is a fairly easy fix,” she said. “We did a single one-hour session per week for four to 12 weeks until the symptoms improved, and some of these infants were only a few months old. We saw a lot of positive change.”
Why do some children learn math more easily than others? Research from the Stanford University School of Medicine has yielded an unexpected new answer.
In a study of third-graders’ responses to math tutoring, Stanford scientists found that the size and wiring of specific brain structures predicted how much an individual child would benefit from math tutoring. However, traditional intelligence measures, such as children’s IQs and their scores on tests of mathematical ability, did not predict improvements from tutoring.

The research is the first to use brain scans to look for a link between math-learning abilities and brain structure or function, and also the first to compare neural and cognitive predictors of kids’ responses to tutoring. In addition, it provides information on the differences between how children and adults learn math, and could help researchers understand the origins of math-learning disabilities.
The study was published online April 29 in Proceedings of the National Academy of Sciences.
"What was really surprising was that intrinsic brain measures can predict change - we can actually predict how much a child is going to learn during eight weeks of math tutoring based on measures of brain structure and connectivity," said Vinod Menon, PhD, the study’s senior author and a professor of psychiatry and behavioral sciences. Menon is also a member of the Child Health Research Institute at Lucile Packard Children’s Hospital.
"The results are a significant step toward the development of targeted learning programs based on a child’s current as well as predicted learning trajectory," said the study’s lead author, Kaustubh Supekar, PhD, postdoctoral scholar in psychiatry and behavioral sciences.
Menon’s team focused on third-grade students ages 8 and 9 because these children are at a critical stage for acquiring basic arithmetic skills. The study included 24 third-graders who participated in a well-validated program of 15 to 20 hours of individualized math tutoring over eight weeks. The tutors explained new concepts to children and also got them to practice math skills with an emphasis on speed, and the sessions were tailored to each child’s level of understanding.
Before tutoring began, the children were given several standard neuropsychological assessments, including tests of IQ, working memory, reading and math-problem-solving abilities. Both before and after the eight-week tutoring period, children’s arithmetic performance was tested, and all children had structural and functional magnetic resonance imaging scans performed on their brains. To control for the effects of math instruction the children received at school (rather than during tutoring), a comparison group of 16 third-grade children who did not receive tutoring, but who had the same testing and brain scans before and after an eight-week interval, was also included in the study.
All 24 children receiving tutoring improved their arithmetic performance. Their performance efficiency, a composite measure of accuracy and speed of problem solving, improved an average of 67 percent after tutoring. But individual gains varied widely, ranging from 8 percent to 198 percent improvement. The children who did not receive tutoring did not show any change in arithmetic performance during the study.
When the researchers analyzed the children’s structural brain scans, they found that larger gray matter volume in three brain structures predicted greater ability to benefit from math tutoring. (The predictions were generated with a machine learning algorithm, the same type of data-analysis tool used to create movie recommendations for users of websites like Netflix, for example.) Of the three structures, the best predictor of improvement with tutoring was a larger hippocampus, a structure traditionally considered one of the brain’s most important memory centers. Functional connections between the hippocampus and several other brain regions, especially the prefrontal cortex and basal ganglia, also predicted ability to benefit from tutoring. These regions are important for forming long-term memories.
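The prediction scheme described above, training a model on brain measures and scoring how well it anticipates each child's learning gain, can be illustrated with a leave-one-out cross-validated linear regression on synthetic data (the study's actual features, algorithm, and values are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic cohort: 24 children, 3 structural brain measures per child
# (standing in for gray matter volumes in three regions; values made up).
n_children, n_features = 24, 3
brain_measures = rng.standard_normal((n_children, n_features))

# Assume, for the sketch, that tutoring gain depends mostly on the
# first measure (a stand-in for hippocampal volume) plus noise.
true_weights = np.array([1.5, 0.4, 0.1])
gains = brain_measures @ true_weights + 0.3 * rng.standard_normal(n_children)

# Leave-one-out cross-validation: predict each child's gain from a model
# fit on the other 23, so no child's outcome influences its own prediction.
predictions = np.empty(n_children)
for i in range(n_children):
    train = np.delete(np.arange(n_children), i)
    X = np.column_stack([brain_measures[train], np.ones(len(train))])
    w, *_ = np.linalg.lstsq(X, gains[train], rcond=None)
    predictions[i] = brain_measures[i] @ w[:-1] + w[-1]

r = np.corrcoef(predictions, gains)[0, 1]
print(f"predicted-vs-actual correlation: {r:.2f}")
```

The held-out correlation is the honest measure here: a model that merely memorized the 24 children would score well in-sample but fail on each left-out child, which is why cross-validation is standard for small cohorts like this.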
"The part of the brain that is recruited in memories for places and events also plays a pivotal role in determining how much and how well a child learns math," Supekar said.
None of the neuropsychological assessment scores, such as IQ or tests of working memory, could predict how much an individual child would benefit from tutoring.
The brain systems highlighted by this study - including the hippocampus, basal ganglia and prefrontal cortex - are different from those previously implicated in math learning in adults, the researchers noted. When solving math problems, adults rely on brain regions that are specialized for representing complex visual objects and processing spatial information.
And the findings suggest that the tutoring approach used, which was tailored to each child’s level of understanding and included lots of repetitive, high-speed arithmetic practice to help cement facts in children’s heads, works because it is compatible with the way their brains encode facts. “Memory resources provided by the hippocampal system create a scaffold for learning math in the developing brain,” Menon said. “Our findings suggest that, while conceptual knowledge about numbers is necessary for math learning, repeated, speeded practice and testing of simple number combinations is also needed to encode facts and encourage children’s reliance on retrieval - the most efficient strategy for answering simple arithmetic problems.” Once kids are able to pull up answers to basic arithmetic problems automatically from memory, their brains can tackle more complex problems.
The researchers’ next steps will include comparing brain structure and wiring in children with and without math learning disabilities, analyzing how the wiring of the brain changes in response to tutoring and examining whether lower-performing children’s brains can be exercised to help them learn math. “We’re pushing a very ecologically relevant model of learning,” Menon said. “Academic instruction should rely on validated instructional principles while incorporating individualized training to provide feedback on whether students are right or wrong, how they’re wrong and how they can improve their math skills.”
Distortions and illusions within human memory are well documented in scientific and forensic work and appear to be a basic feature of memory functioning.

Yet several studies suggest that blind individuals, especially those without any visual experience, possess superior verbal and memory skills.
The researchers from the Department of Psychology, working in collaboration with a research assistant at Queen Mary, University of London, ran memory tests on groups of congenitally blind people, people with late-onset blindness and sighted people.
Each participant was asked to listen to a series of word lists and then recall the words they heard. Past research has found that such word lists normally cause people to falsely “remember” words that are related to those heard but that were never actually presented. For example, hearing ‘chimney’, ‘cigar’ and ‘fire’ can prompt some people to produce a false memory of the word ‘smoke’ when asked to recall the list.
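The scoring of such a word-list experiment can be sketched as follows; the list items beyond the example words above, and the scoring function itself, are hypothetical illustrations rather than the study’s actual materials or procedure.

```python
# Hypothetical scoring sketch for a word-list recall test.
presented = {"chimney", "cigar", "fire", "ash", "match"}  # "ash"/"match" assumed
critical_lure = "smoke"  # related word that was never presented

def score_recall(recalled, presented, lure):
    """Count correctly recalled list words and flag a false memory of the lure."""
    recalled = {w.lower() for w in recalled}
    correct = len(recalled & presented)
    false_memory = lure in recalled
    return correct, false_memory

# A participant who recalls three list words plus the unpresented lure:
print(score_recall(["fire", "cigar", "smoke", "ash"], presented, critical_lure))
# → (3, True): three correct recalls and one false memory
```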
The researchers found that the congenitally blind participants not only remembered more words but were also less likely to create false memories of the related words. In contrast, the sighted and late-blind participants remembered fewer words and were much more likely to falsely remember related words that had never been read to them.
Dr Achille Pasqualotto, postdoctoral researcher and first author of the study, said: “We found that congenitally blind participants reported significantly more correct words than both late onset blind and sighted people. Most of the congenitally blind participants avoided unrelated words, therefore congenitally blind participants can store more items and with a higher fidelity.”
Dr Michael Proulx who led the study added: “Our results show that visual experience has a significant negative impact on both the number of items remembered and the accuracy of semantic memory and also demonstrate the importance of adaptive neural plasticity in the congenitally blind brain for enhanced memory retrieval mechanisms.
“There is an old Hebrew proverb that believes the blind were the most trustworthy sources for quotations and that certainly seems true in this case. It will be interesting to see whether congenitally blind individuals would also be better witnesses in forensic studies.”
The research is from the paper Congenital blindness improves semantic and episodic memory, published in the journal Behavioural Brain Research.
Neurons in the nose could be the key to early, fast, and accurate diagnosis, says a TAU researcher

A debilitating mental illness, schizophrenia can be difficult to diagnose. Because physiological evidence confirming the disease can only be gathered from the brain during an autopsy, mental health professionals have had to rely on a battery of psychological evaluations to diagnose their patients.
Now, Dr. Noam Shomron and Prof. Ruth Navon of Tel Aviv University’s Sackler Faculty of Medicine, together with PhD student Eyal Mor from Dr. Shomron’s lab and Prof. Akira Sawa of Johns Hopkins Hospital in Baltimore, Maryland, have discovered a method for physical diagnosis — by collecting tissue from the nose through a simple biopsy. Surprisingly, collecting and sequencing neurons from the nose may lead to “more sure-fire” diagnostic capabilities than ever before, Dr. Shomron says.
This finding, which was reported in the journal Neurobiology of Disease, could not only lead to a more accurate diagnosis, it may also permit the crucial, early detection of the disease, giving rise to vastly improved treatment overall.
From the nose to diagnosis
Until now, biomarkers for schizophrenia had only been found in the neuron cells of the brain, which can’t be collected before death. By that point it’s obviously too late to do the patient any good, says Dr. Shomron. Instead, psychiatrists depend on psychological evaluations for diagnosis, including interviews with the patient and reports by family and friends.
For a solution to this diagnostic dilemma, the researchers turned to the olfactory system, which includes neurons located on the upper part of the inner nose. Researchers at Johns Hopkins University collected samples of olfactory neurons from patients diagnosed with schizophrenia and a control group of non-affected individuals, then sent them to Dr. Shomron’s TAU lab.
Dr. Shomron and his fellow researchers applied a high-throughput technology to these samples, studying the microRNA of the olfactory neurons. Within these molecules, which help to regulate our genetic code, they were able to identify a microRNA which is highly elevated in those with schizophrenia, compared to individuals who do not have the disease.
"We were able to narrow down the microRNA to a differentially expressed set, and from there down to a specific microRNA which is elevated in individuals with the disease compared to healthy individuals," explains Dr. Shomron. Further research revealed that this particular microRNA controls genes associated with the generation of neurons.
In practice, material for biopsy could be collected through a quick and easy outpatient procedure, using a local anesthetic, says Dr. Shomron. And with microRNA profiling results ready in a matter of hours, this method could evolve into a relatively simple and accurate test to diagnose a very complicated illness.
Early detection, early intervention
Though there is much more to investigate, Dr. Shomron has high hopes for this diagnostic method. It’s important to determine whether this alteration in microRNA expression begins before schizophrenic symptoms begin to exhibit themselves, or only after the disease fully develops, he says. If this change comes near the beginning of the timeline, it could be invaluable for early diagnostics. This would mean early intervention, better treatment, and possibly even the postponement of symptoms.
If, for example, a person has a family history of schizophrenia, this test could reveal whether they too suffer from the disease. And while such advanced warning doesn’t mean a cure is on the horizon, it will help both patient and doctor identify and prepare for the challenges ahead.
The finding opens the door to presymptomatic diagnostics and genetic counselling for patients, and is a first step toward identifying the cause and developing therapies.

Researchers from the Germans Trias i Pujol Health Sciences Research Institute Foundation (IGTP), the Bellvitge Biomedical Research Institute (IDIBELL) and the Sant Joan de Déu de Martorell Hospital have identified a new subtype of ataxia, a rare, untreatable disease that causes atrophy of the cerebellum and affects around 1.5 million people worldwide. The results were published online on April 29 in the journal JAMA Neurology.
Ataxia arises from diverse genetic alterations, and for this reason it is classified into subtypes. The new subtype described by the researchers has been named SCA37. The study found this subtype in members of the same family living in Barcelona, Huelva, Madrid and Salamanca (Spain). In the medium term, the finding will allow these families, and anyone else who carries the identified genetic alteration, to receive personalized therapies and a diagnosis before the disease develops. The study was funded by the 2009 edition of La Marató de TV3 (the Catalan public television telethon), which was dedicated to rare diseases.
The cerebellum is a structure located at the back of the brain that, among other functions, coordinates the movements of the body. When it atrophies, movement disorders appear; as the ataxia progresses, patients suffer frequent falls and swallowing problems, and eventually need a wheelchair. More than 30 subtypes of ataxia have been identified to date, the first of which was described in 1993 by Dr. Antoni Matill, head of the Neurogenetics Unit at IGTP, and Dr. Victor Volpini, head of the Center for Molecular Genetic Diagnosis at IDIBELL.
The publication of this paper has been possible thanks to the collaboration of the Hospital de Sant Pau, Universitat Pompeu Fabra and the Pitie-Salpêtrière Hospital in Paris.
Particular eye movements
The first symptoms of ataxia may appear in childhood or adulthood, depending on the subtype. The SCA37 subtype, whose first cases were identified by Carme Serrano, a neurologist at the Sant Joan de Déu Hospital in Martorell (Barcelona), first manifests at an average age of 48. One of its hallmarks is difficulty with vertical eye movements. Besides the patients identified in Spain by Dr. Serrano and the Germans Trias and IDIBELL researchers, there is evidence of more people affected by this subtype of ataxia in France, the Netherlands and Britain, suggesting it is a relatively prevalent subtype in Europe.
All SCA37 patients share a genetic alteration in band 32 of the short arm of chromosome 1, a region containing around a hundred genes. Researchers are currently sequencing this region with next-generation technologies to find the specific mutation that causes the ataxia. Once it is found, accurate diagnosis will become possible for family members who have not yet developed symptoms. It will also become possible to investigate the biological mechanisms that cause the ataxia, in order to develop and apply personalized therapies based on drugs or stem cells.
Today, during the 81st American Association of Neurological Surgeons (AANS) Annual Scientific Meeting, researchers announced new findings regarding the development of methods to turn human induced pluripotent stem cells (iPSC) into microglia, which could be used for not only research but potentially in treatments for various diseases of the central nervous system (CNS).
Microglia are the resident inflammatory cells of the CNS and can modulate the outcomes of a wide range of disorders including trauma, infections, stroke, brain tumors, and various degenerative, inflammatory and psychiatric diseases. However, the effective therapeutic use of microglia demonstrated in various animal CNS disease models currently cannot be translated to patients due to the lack of methods for procuring high-purity patient-specific microglia. Developing a method for obtaining these cells would be highly valuable.
In the study Differentiation of Induced Pluripotent Stem Cells to Microglia for Treatment of CNS Diseases, mouse and human iPSCs were generated and sequentially co-cultured on various cell monolayers and in the presence of added growth factors. The microglial identity of the resulting cells was confirmed using fluorescence activated cell sorting analyses, functional assays, gene expression analyses and brain engraftment ability. The study results will be shared by presenting author John K. Park, MD, PhD, FAANS, from 3:34-3:42 p.m. on Monday, April 29. Co-authors are Michael Shen, BS; Yong Choi, PhD; and Hetal Pandya, PhD.
In the results, researchers found that mouse and human iPSCs co-cultured with OP9 cells differentiate into hematopoietic progenitor cells (HPCs). HPCs co-cultured in turn with astrocytes generate cells that express CD11b, Iba-1 and CX3CR1; secrete the cytokines IL-6, IL-1β and TNF-α; generate reactive oxygen species; and phagocytose fluorescent particles, all consistent with a microglial phenotype. Gene expression clustering using self-organizing maps indicates that iPSC-derived microglia more closely resemble normal microglia than other inflammatory cell types. The iPSC-derived microglia engraft and migrate to areas of injury within the brain. These findings have led researchers to conclude that iPSC-derived microglia may one day be useful as gene and protein delivery vehicles to the CNS.
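The study clustered expression profiles with self-organizing maps; as a much simpler stand-in for the same idea, the sketch below scores a synthetic "iPSC-derived microglia" profile against two reference profiles by Pearson correlation. CD11b, Iba-1 and CX3CR1 are the microglial markers named above; the rest of the gene panel and all expression values are invented for illustration.

```python
import numpy as np

# Synthetic expression profiles (assumed values) over a small, partly invented
# gene panel; only the first three marker names come from the study.
genes = ["CD11b", "Iba-1", "CX3CR1", "geneX", "geneY"]
microglia_ref  = np.array([8.0, 9.0, 9.5, 4.0, 1.0])
macrophage_ref = np.array([8.5, 5.0, 3.0, 9.0, 1.0])
ipsc_derived   = np.array([7.8, 8.6, 9.1, 4.5, 1.2])

def similarity(profile, reference):
    """Pearson correlation between two expression profiles."""
    return float(np.corrcoef(profile, reference)[0, 1])

r_microglia = similarity(ipsc_derived, microglia_ref)
r_macrophage = similarity(ipsc_derived, macrophage_ref)
print(f"vs microglia: {r_microglia:.2f}, vs macrophage: {r_macrophage:.2f}")
```

With these made-up numbers the iPSC-derived profile correlates far more strongly with the microglial reference than with the macrophage one, which is the qualitative conclusion the study's clustering supports.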
“The actual results of our research were not surprising to us, but the overall importance of microglia in a wide variety of brain and spinal cord diseases was surprising. Microglia likely have a role in improving or worsening diseases such as multiple sclerosis, Alzheimer’s disease, Parkinson’s disease, obsessive compulsive disorder and Rett’s syndrome, just to name a few,” said John K. Park, MD, PhD, FAANS. “Microglia are the principal immune system cells of the brain and spinal cord, and help fight infections as well as help the healing process after injuries such as trauma and strokes. They also play a poorly understood role in many neurodegenerative and psychiatric diseases. We have developed methods to turn iPSCs into microglia. Because human iPSC can easily be obtained in large numbers, we can now generate large numbers of human microglia not only for use in experiments, but also potentially for use in treatments. The ability to study normal and diseased human microglia will lead to a greater understanding of their roles in healthy brains and various diseases. Diseases that are caused or exacerbated by defective microglia or a paucity of normal microglia may potentially be treated by microglia generated from a patient’s iPSC.”
A team of American and Italian neuroscientists has identified a cellular change in the brain that accompanies obesity. The findings could explain the body’s tendency to maintain undesirable weight levels, rather than an ideal weight, and identify possible targets for pharmacological efforts to address obesity.

The findings, published in the Proceedings of the National Academy of Sciences Early Edition this week, identify a switch that occurs in neurons within the hypothalamus. The switch involves receptors that trigger or inhibit the release of the orexin A peptide, which stimulates the appetite, among other behaviors. In normal-weight mice, activation of this receptor decreases orexin A release. In obese mice, activation of this receptor stimulates orexin A release.
"The striking finding is that you have a massive shift of receptors from one set of nerve endings impinging on these neurons to another set," said Ken Mackie, professor in the Department of Psychological and Brain Sciences in the College of Arts and Sciences at IU Bloomington. "Before, activating this receptor inhibited the secretion of orexin; now it promotes it. This identifies potential targets where an intervention could influence obesity."
The work is part of a longstanding collaboration between Mackie’s team at the Gill Center for Biomolecular Science at IU Bloomington and Vincenzo Di Marzo’s team at the Institute of Biomolecular Chemistry in Pozzuoli, Italy. Both teams study the endocannabinoid system, which is composed of receptors and signaling chemicals that occur naturally in the brain and have similarities to the active ingredients in cannabis, or marijuana. This neurochemical system is involved in a variety of physiological processes, including appetite, pain, mood, stress responses and memory.
Food consumption is controlled in part by the hypothalamus, a portion of the brain that regulates many essential behaviors. Like other important body systems, food consumption is regulated by multiple neurochemical systems, including the endocannabinoid system, representing what Mackie describes as a “balance of a very fine web of regulatory networks.”
An emerging idea, Mackie said, is that this network is reset during obesity so that food consumption matches maintenance of current weight, not a person’s ideal weight. Thus, an obese individual who loses weight finds it difficult to keep the weight off, as the brain signals the body to eat more in an attempt to return to the heavier weight.
Using mice, this study found that in obesity, CB1 cannabinoid receptors become enriched on the nerve terminals that normally inhibit orexin neuron activity, and the orexin neurons produce more of the endocannabinoids to activate these receptors. Activating these CB1 receptors decreases inhibition of the orexin neurons, increasing orexin A release and food consumption.
"This study identifies a mechanism for the body’s ongoing tendency to return to the heavier weight," Mackie said.
The researchers conducted several experiments with mice to understand how this change takes place. They uncovered a role of leptin, a key hormone made by fat cells that influences metabolism, hunger and food consumption. Obesity causes leptin levels to be chronically high, making brain cells less sensitive to its actions, which contributes to the molecular switch that leads to the overproduction of orexin.
Testosterone may trigger a brain chemical process linked to schizophrenia but the same sex hormone can also improve cognitive thinking skills in men with the disorder, two new studies show.

Scientists have long suspected testosterone plays an important role in schizophrenia, which affects more men than women. Men are also more likely to develop psychosis in adolescence, previous research has shown.
A new study on lab rodents by researchers from Neuroscience Research Australia analysed the impact increased testosterone had on levels of dopamine, a brain chemical linked to psychotic symptoms of schizophrenia.
The researchers found that testosterone boosted dopamine sensitivity in adolescent male rodents.
“From these rodent studies, we hypothesise that adolescent increases in circulating testosterone may be a driver of increased dopamine activity in the brains of individuals susceptible to psychosis and schizophrenia,” said senior Neuroscience Research Australia researcher and author of the study, Dr Tertia Purves-Tyson, who is presenting her work at the International Congress on Schizophrenia Research in Florida this week.
Dr Philip Mitchell, Scientia Professor and Head of the School of Psychiatry at the University of NSW, said the research was very interesting.
“The relationship between sex steroids, such as testosterone, and psychiatric disorders has long intrigued researchers. For example, we have known for many years that schizophrenia presents earlier in males than females, but the biological mechanism for this has been poorly understood,” said Dr Mitchell, who was not involved in the study.
“The rodent study by Professor Shannon Weickert from the School of Psychiatry at UNSW and NeuRA is therefore of particular interest. This study suggests an important interplay between circulating testosterone levels and the brain’s sensitivity to dopamine – a neurochemical which has been long implicated in the cause of schizophrenia,” said Dr Mitchell.
“This study suggests that it is the interplay between testosterone and dopamine which is critical. This is an important observation which may very well throw an important light on solving the puzzle of the biological causes of schizophrenia.”
Cognitive thinking
A separate study by Dr Thomas Weickert at Neuroscience Research Australia examined the role testosterone plays in the cognitive thinking skills of men with schizophrenia.
The researchers examined testosterone levels in a group of 29 chronically ill men with schizophrenia or schizoaffective disorder, and a control group of 20 healthy men and asked both groups to take a series of cognition tests.
“Circulating testosterone levels significantly predicted performance on verbal memory, processing speed, and working memory in men with schizophrenia … such that increased normal levels of testosterone were beneficial to thought processing in men with schizophrenia but circulating sex steroid levels did not appear to be related to cognitive function in healthy men,” the researchers reported.
“The results suggest that circulating sex steroids may influence thought processes in men with schizophrenia.”
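The within-group relationship the researchers describe can be sketched as a simple regression; the hormone levels, cognitive scores and effect sizes below are synthetic stand-ins, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

def slope_and_r(testosterone, score):
    """Least-squares slope and Pearson correlation of score vs. testosterone."""
    slope, _ = np.polyfit(testosterone, score, 1)
    r = float(np.corrcoef(testosterone, score)[0, 1])
    return float(slope), r

# Assumed scenario: scores rise with testosterone in the patient group (n=29)
# but show no relationship in the healthy control group (n=20).
t_pat = rng.uniform(10, 30, 29)
score_pat = 40 + 1.5 * t_pat + rng.normal(0, 3, 29)
t_ctl = rng.uniform(10, 30, 20)
score_ctl = 70 + rng.normal(0, 3, 20)

print("patients:", slope_and_r(t_pat, score_pat))
print("controls:", slope_and_r(t_ctl, score_ctl))
```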
Dr Melanie McDowall, a researcher at the University of Adelaide’s Robinson Institute, said the study added to a large body of evidence demonstrating a link between testosterone and schizophrenia.
“This is not surprising, given the link between testosterone and dopamine,” she said, adding that symptoms of schizophrenia predominantly began after puberty.
“However, as with most endocrine and mental illnesses, schizophrenia is multifaceted (genetic, environmental etc.), hence this may not be the be all and end all.”
Neuroscientists at UB’s Hunter James Kelly Research Institute show how turning down synthesis of a protein improves nerve, muscle function in common neuropathy.

A potential new treatment strategy for patients with Charcot-Marie-Tooth disease is on the horizon, thanks to research by neuroscientists now at the University at Buffalo’s Hunter James Kelly Research Institute and their colleagues in Italy and England.
The institute is the research arm of the Hunter’s Hope Foundation, established in 1997 by Jim Kelly, Buffalo Bills Hall of Fame quarterback, and his wife, Jill, after their infant son Hunter was diagnosed with Krabbe Leukodystrophy, an inherited fatal disorder of the nervous system. Hunter died in 2005 at the age of eight. The institute conducts research on myelin and its related diseases with the goal of developing new ways of understanding and treating conditions such as Krabbe disease and other leukodystrophies.
Charcot-Marie-Tooth or CMT disease, which affects the peripheral nerves, is among the most common of hereditary neurological disorders; it is a disease of myelin and it results from misfolded proteins in cells that produce myelin.
The new findings were published online earlier this month in The Journal of Experimental Medicine.
They may have relevance for other diseases that result from misfolded proteins, including Alzheimer’s disease, Parkinson’s, multiple sclerosis, Type 1 diabetes, cancer and mad cow disease.
The paper shows that missteps in translational homeostasis, the process of regulating new protein production so that cells maintain a precise balance between lipids and proteins, may be how some genetic mutations in CMT cause neuropathy.
CMT neuropathies are common, hereditary and progressive; in severe cases, patients end up in wheelchairs. These diseases significantly affect quality of life but not longevity, taking a major toll on patients, families and society, the researchers note.
“It’s possible that our finding could lead to the development of an effective treatment not just for CMT neuropathies but also for other diseases related to misfolded proteins,” says Lawrence Wrabetz, MD, director of the institute and professor of neurology and biochemistry in UB’s School of Medicine and Biomedical Sciences and senior author on the paper. Maurizio D’Antonio, of the Division of Genetics and Cell Biology of the San Raffaele Scientific Institute in Milan is first author; Wrabetz did most of this research while he was at San Raffaele, prior to coming to UB.
The research finding centers around the synthesis of misfolded proteins in Schwann cells, which make myelin in nerves. Myelin is the crucial fatty material that wraps the axons of neurons and allows them to signal effectively. Many CMT neuropathies are associated with mutations in a gene known as P0, which glues the wraps of myelin together. Wrabetz has previously shown in experiments with transgenic mice that those mutations cause the myelin to break down, which in turn, causes degeneration of peripheral nerves and wasting of muscles.
When cells recognize that the misfolded proteins are being synthesized, cells respond by severely reducing protein production in an effort to correct the problem, Wrabetz explains. The cells commence protein synthesis again when a protein called Gadd34 gets involved.
“After cells have reacted to, and corrected, misfolding of proteins, the job of Gadd34 is to turn protein synthesis back on,” says Wrabetz. “What we have shown is that once Gadd34 is turned back on, it activates synthesis of proteins at a level that’s too high—that’s what causes more problems in myelination.
“We have provided proof of principle that Gadd34 causes a problem with translational homeostasis and that’s what causes some neuropathies,” says Wrabetz. “We’ve shown that if we just reduce Gadd34, we actually get better myelination. So, leaving protein synthesis turned partially off is better than turning it back on completely.”
In both cultures and a transgenic mouse model of CMT neuropathies, the researchers improved myelin by reducing Gadd34 with salubrinal, a small molecule research drug. While salubrinal is not appropriate for human use, Wrabetz and colleagues at UB and elsewhere are working to develop derivatives that are appropriate.
“If we can demonstrate that a new version of this molecule is safe and effective, then it could be part of a new therapeutic strategy for CMT and possibly other misfolded protein diseases as well,” says Wrabetz.
And while CMT is the focus of this particular research, the work is helping scientists at the Hunter James Kelly Research Institute enrich their understanding of myelin disorders in general.
“What we learn in one disease, such as CMT, may inform how we think about toxins for others, such as Krabbe’s,” Wrabetz says. “We’d like to build a foundation and answer basic questions about where and when toxicity in diseases begin.”
The misfolded protein diseases are an interesting and challenging group of diseases to study, he continues. “CMT, for example, is caused by mutations in more than 40 different genes,” he says. “When there are so many different genes involved and so many different mechanisms, you have to find a unifying mechanism: this problem of Gadd34 turning protein synthesis on at too high a level could be one unifying mechanism. The hope is that this proof of principle applies to more than just CMT and may lead to improved treatments for Alzheimer’s, Parkinson’s, Type 1 diabetes and the other diseases caused by misfolded proteins.”
During fetal development of the mammalian brain, the cerebral cortex undergoes a marked expansion in surface area in some species; in those with the most expanded neuron numbers and surface area, this growth is accommodated by folding of the tissue. Researchers have now identified a key regulator of this crucial process.

Different regions of the mammalian brain are devoted to the performance of specific tasks. This in turn imposes particular demands on their development and structural organization. In the vertebrate forebrain, for instance, the cerebral cortex – which is responsible for cognitive functions – is remarkably expanded and extensively folded exclusively in mammalian species. The greater the degree of folding and the more furrows present, the larger the surface area available for receiving and processing neural information. In humans, the exterior of the developing brain remains smooth until about the sixth month of gestation. Only then do superficial folds begin to appear, ultimately dominating the entire brain. Mice, by contrast, have a much smaller, smooth cerebral cortex.
“The mechanisms that control the expansion and folding of the brain during fetal development have so far been mysterious,” says Magdalena Götz, Professor at the Institute of Physiology at LMU and Director of the Institute for Stem Cell Research at the Helmholtz Center Munich. Götz and her team have now pinpointed a major player in the molecular process that drives cortical expansion in the mouse. They were able to show that a novel nuclear protein called Trnp1 triggers the enormous increase in the number of nerve cells that forces the cortex to undergo a complex series of folds. Indeed, although the normal mouse brain has a smooth appearance, dynamic regulation of Trnp1 activates all the processes necessary for the formation of a much enlarged and folded cerebral cortex.
Levels of Trnp1 control expansion and folding
“Trnp1 is critical for the expansion and folding of the cerebral cortex, and its expression level is dynamically controlled during development,” says Götz. In the early embryo, Trnp1 is locally expressed in high concentrations. This promotes the proliferation of self-renewing multipotent neural stem cells and supports tangential expansion of the cerebral cortex. The subsequent fall in levels of Trnp1 is associated with an increase in the numbers of various intermediate progenitors and basal radial glial cells. This results in the ordered formation and migration of a much enlarged number of neurons forming folds in the growing cortex.
The findings are particularly striking because they imply that the same molecule – Trnp1 – controls both the expansion and the folding of the cerebral cortex and is even sufficient to induce folding in a normally smooth cerebral cortex. Trnp1 therefore serves as an ideal starting point from which to dissect the complex network of cellular and molecular interactions that underpin the whole process. Götz and her colleagues are now embarking on the next step in this exciting journey - determination of the molecular function of this novel nuclear protein Trnp1 and how it is regulated. (Cell 2013)
Despite decades of research, relatively little is known about the identity of RNA molecules that are transported as part of the molecular process underpinning learning and memory.
Now, working together, scientists from the Florida campus of The Scripps Research Institute (TSRI), Columbia University and the University of Florida, Gainesville, have developed a novel strategy for isolating and characterizing a substantial number of RNAs transported from the cell body of a neuron (nerve cell) to the synapse, the small gap separating neurons that enables cell-to-cell communication.
Using this new method, the scientists were able to identify nearly 6,000 transcripts (RNA sequences) from the genome of Aplysia, a sea slug widely used in scientific investigation.
The scientists’ target is known as the synaptic transcriptome—roughly the complete set of RNA molecules transported from the neuronal cell body to the synapse.
In the study, published recently in the journal Proceedings of the National Academy of Sciences, the scientists focused on the RNA transport complexes that interact with the molecular motor kinesin; kinesin proteins move along filaments known as microtubules in the cell and carry various gene products during the early stage of memory storage.
While neurons use active transport mechanisms such as kinesin to deliver RNA cargos to synapses, once the cargos arrive at their synaptic destination that delivery service stops and other, more localized mechanisms take over—in much the same way that a traveler’s bags get handed off to the hotel doorman once the taxi has dropped the traveler at the entrance.
The scientists identified thousands of these unique sequences of both coding and noncoding RNAs. As it turned out, several of these RNAs play key roles in the maintenance of synaptic function and growth.
The scientists also uncovered several antisense RNAs (complementary sequences that can pair with other transcripts and inhibit gene expression), although what their function at the synapse might be remains unknown.
“Our analyses suggest that the transported RNAs are surprisingly diverse,” said Sathya Puthanveettil, a TSRI assistant professor who designed the study. “It also brings up an important question of why so many different RNAs are transported to synapses. One reason may be that they are stored there to be used later to help maintain long-term memories.”
The team’s new approach offers the advantage of avoiding the dissection of neuronal processes to identify synaptically localized RNAs by focusing on transport complexes instead, Puthanveettil said. This new approach should help in better understanding changes in localized RNAs and their role in local translation as molecular substrates, not only in memory storage, but also in a variety of other physiological conditions, including development.
“New protein synthesis is a prerequisite for maintaining long-term memory,” he said, “but you don’t need this kind of transport forever, so it raises many questions that we want to answer. What molecules need to be synthesized to maintain memory? How long is this collection of RNAs stored? What localized mechanisms come into play for memory maintenance?”
Melatonin injections delayed symptom onset and reduced mortality in a mouse model of the neurodegenerative condition amyotrophic lateral sclerosis (ALS), or Lou Gehrig’s disease, according to a new study by researchers at the University of Pittsburgh School of Medicine. In a report published online ahead of print in the journal Neurobiology of Disease, the team revealed that receptors for melatonin are found on nerve cells, a finding that could launch novel therapeutic approaches.
Annually, about 5,000 people are diagnosed with ALS, which is characterized by progressive muscle weakness and eventual death due to the failure of respiratory muscles, said senior investigator Robert Friedlander, M.D., UPMC Endowed Professor of neurosurgery and neurobiology and chair of the Department of Neurological Surgery, Pitt School of Medicine. But the causes of the condition are not well understood, thwarting development of a cure or even effective treatments.
Melatonin is a naturally occurring hormone that is best known for its role in sleep regulation. After screening more than a thousand FDA-approved drugs several years ago, the research team determined that melatonin is a powerful antioxidant that blocks the release of enzymes that activate apoptosis, or programmed cell death.
"Our experiments show for the first time that a lack of melatonin and melatonin receptor 1, or MT1, is associated with the progression of ALS," Dr. Friedlander said. "We saw similar results in a Huntington’s disease model in an earlier project, suggesting similar biochemical pathways are disrupted in these challenging neurologic diseases."
Hoping to stop neuron death in ALS just as they did in Huntington’s, the research team treated mice bred to have an ALS-like disease with injections of melatonin or with a placebo. Compared with the placebo-treated animals, the melatonin group developed symptoms later, survived longer, and had less degeneration of motor neurons in the spinal cord.
"Much more work has to be done to unravel these mechanisms before human trials of melatonin or a drug akin to it can be conducted to determine its usefulness as an ALS treatment," Dr. Friedlander said. "I suspect that a combination of agents that act on these pathways will be needed to make headway with this devastating disease."
Using the fruit fly as a model organism, neurobiologists from the Friedrich Miescher Institute for Biomedical Research have identified the L1-type CAM neuroglian as an important regulator for synapse growth, function and stability. They show that the interaction of neuroglian with ankyrin provides a regulatory module to locally control synaptic connectivity and function.

A Drosophila neuromuscular junction. Motoneuron membrane (blue), synaptic vesicles (green), postsynaptic density (red)
From its earliest beginnings until an organism’s death, the nervous system changes. Connections between nerve cells are formed, stabilized and disassembled not only during the development of the brain in the womb and in early childhood, but also in adults as they learn or form memories. In this flow of change, cell adhesion molecules (CAMs), which mediate cell-cell interactions, are thought to provide stability and guidance in a Velcro-like-manner as synapses change.
Jan Pielage and his group at the Friedrich Miescher Institute for Biomedical Research have carried out an unbiased genetic screen to identify cell adhesion molecules that control synapse maintenance and plasticity, using the fruit fly, Drosophila. As they publish in the latest issue of PLOS Biology, they identified the cell adhesion molecule called neuroglian as a key regulator for synapse stability.
Neuroglian is a transmembrane protein with a large extracellular domain and an intracellular signaling domain. Through the extracellular domain, interactions with CAMs on neighboring cells are established; these contacts stabilize the site and are a prerequisite for synapse formation. “We think that the extracellular interactions of neuroglian are essential for neurite outgrowth and axon targeting during early development,” explains Pielage.
The scientists could then show that the intracellular domain, which interacts with the adaptor molecule called ankyrin, modulates the stability of synapses. At the neuromuscular junction, where nerve cells innervate the muscle, the strength of the interaction of neuroglian with ankyrin modulates the balance between synapse growth and stability. As the binding affinity of ankyrin for neuroglian decreased, for example as a result of phosphorylation, the mobility of neuroglian within the motor neuron increased. This change in mobility destabilized existing synapses, but at the same time it allowed the formation of new synapses at other sites. “This organization permits easy regulation, and allows the fine-tuning of synaptic connectivity along one nerve cell without disrupting the neuronal network or impairing overall circuit stability,” said Pielage.
In the central nervous system, where synapses are formed between two neurons, a homophilic interaction of neuroglian is required to establish the contact between pre- and postsynaptic neurons. A differential regulation of ankyrin binding is then necessary to coordinate transsynaptic development and to enable synapse maturation and function. “Modulation of the neuroglian-ankyrin interaction might enable local and precise control of synaptic connectivity,” comments Pielage.
This comprehensive structure-function study provides a molecular basis for previous observations linking mutations in the ankyrin-binding domain of the human homologue of neuroglian, L1CAM, to neurological L1/CRASH disorders that include mental retardation.
Scientists funded by the National Institutes of Health have discovered a potential strategy for developing treatments to stem the disease process in Alzheimer’s disease. It is based on restoring the removal of toxic debris that accumulates in patients’ brains by blocking the activity of a little-known regulator protein called CD33.

“Too much CD33 activity appears to promote late-onset Alzheimer’s by preventing support cells from clearing out toxic plaques, key risk factors for the disease,” explained Rudolph Tanzi, Ph.D., of Massachusetts General Hospital and Harvard University, a grantee of the NIH’s National Institute of Mental Health (NIMH) and National Institute on Aging (NIA). “Future medications that impede CD33 activity in the brain might help prevent or treat the disorder.”
Tanzi and colleagues report on their findings April 25, 2013 in the journal Neuron.
“These results reveal a previously unknown, potentially powerful mechanism for protecting neurons from damaging toxicity and inflammation,” said NIMH Director Thomas R. Insel, M.D. “Given increasing evidence of overlap between brain disorders at the molecular level, understanding such workings in Alzheimer’s disease may also provide insights into other mental disorders.”
Variation in the CD33 gene turned up as one of four prime suspects in the largest genome-wide dragnet of Alzheimer’s-affected families, reported by Tanzi and colleagues in 2008. The gene was known to make a protein that regulates the immune system, but its function in the brain remained elusive. To discover how it might contribute to Alzheimer’s, the researchers brought to bear human genetics and biochemistry, along with experiments in human brain tissue, mice and cell cultures.
They found over-expression of CD33 in support cells, called microglia, in postmortem brains from patients who had late-onset Alzheimer’s disease, the most common form of the illness. The more CD33 protein on the cell surface of microglia, the more beta-amyloid protein and plaques – damaging debris – had accumulated in their brains. Moreover, the researchers discovered that brains of people who had inherited a version of the CD33 gene that protected them from Alzheimer’s showed conspicuously reduced amounts of CD33 on the surface of microglia and less beta-amyloid.
Brain levels of beta-amyloid and plaques were also markedly reduced in mice engineered to under-express or lack CD33. Microglia cells in these animals were more efficient at clearing out the debris, which the researchers traced to levels of CD33 on the cell surface.
Evidence also suggested that CD33 works in league with another Alzheimer’s risk gene in microglia to regulate inflammation in the brain.
The study results – and those of a recent rat study that replicated many features of the human illness – add support to the prevailing theory that the accumulation of beta-amyloid plaques is a hallmark of Alzheimer’s pathology. They come at a time of ferment in the field, spurred by other recent contradictory evidence suggesting that these presumed culprits might instead play a protective role.
Since increased CD33 activity in microglia impaired beta-amyloid clearance in late onset Alzheimer’s, Tanzi and colleagues are now searching for agents that can cross the blood-brain barrier and block it.
Researcher Johan Jakobsson and his colleagues have now published their results in Nature Communications.
“At present, researchers know very little about exactly how microglia work. At the same time, there is a lot of curiosity and high hopes among brain researchers that greater understanding of microglia could lead to entirely new drug development strategies for various brain diseases”, says Johan Jakobsson, research group leader at the Division of Molecular Neurogenetics at Lund University.
What the researchers have now succeeded in identifying is a distinguishing characteristic of microglia cells, which makes it possible to visualise them and study their behaviour. By inserting a luminescent protein controlled by a small regulatory RNA molecule, microRNA-9, the researchers can now distinguish the microglia and monitor their function over time in the brains of rats and mice.
It has long been known that microglia form the first line of defence of the immune system in diseases of the brain. They move quickly to the affected area and release an arsenal of molecules that protect the nerve cells and clear away damaged tissue.
New research also suggests that microglia not only guard the nerve cells but also play an important role in their basic function.
“This represents a real step forward in technological development. Now we can view microglia in a way that has not been possible before. We and our colleagues now hope to be able to use this technique to study the role of the cells in different disease models, for example Parkinson’s disease and stroke, in which microglia are believed to play an important role”, explains Johan Jakobsson.
An Autistica consultation published this month found that 24% of children with autism were non-verbal or minimally verbal, and it is known that these problems can persist into adulthood. Professionals have long attempted to support the development of language in these children, but with mixed outcomes. An estimated 600,000 people in the UK and 70 million worldwide have autism, a lifelong neurodevelopmental condition.
Today, scientists at the University of Birmingham publish a paper in Frontiers in Neuroscience showing that while not all of the current interventions used are effective, there is real hope for progress by using interventions based on understanding natural language development and the role of motor and “motor mirroring” behaviour in toddlers.
The researchers, led by Dr Joe McCleery, who is supported by autism research charity Autistica, examined over 200 published papers and more than 60 different intervention studies.
With the support of Autistica, the UK’s leading autism research charity, Dr McCleery’s team have now embarked on new work which builds on these findings to design interventions which specifically target the aspects of development where there are deficits in non-verbal autistic children.
Dr McCleery says: “We feel that the field is approaching a turning point, with potentially dramatic breakthroughs to come in both our understanding of communication difficulties in people with autism, and the potential ways we can intervene to make a real difference for those children who are having difficulties learning to speak.”
Christine Swabey, CEO of Autistica, says: “80% of the parents in our recent consultation wanted interventions straight after diagnosis. Dr McCleery’s work shows how critical it is for all intervention to be evidence-based, and that the best approaches are based on a real understanding of the development of difficulties in autism. We are proud to be supporting the next steps in this vital research which will improve the quality of life for people with autism.”
Alison Hardy, whose son Alfie is six, says: “As a parent of an autistic child, who is non-verbal, I feel quite vulnerable. People are always saying ‘try this, it worked wonders for us’. But you can’t try everything. We need a proper, scientific evidence base for what works and what does not. Then we can focus our time and our effort, with some confidence that we have a chance of helping our children. The publication of this research is an exciting step in giving us that confidence; it is great that Autistica is supporting this vital work.”
Scientists at the Nencki Institute of Experimental Biology of the Polish Academy of Sciences in Warsaw are investigating mice with a very precisely modified genome. Because it is possible to turn off the Dicer gene in adult mice, the animals can be used to investigate processes related to cognitive functions such as learning and memory. Nencki scientists have also just shown that the new transgenic mouse is suitable for studying metabolic dysfunctions resulting in obesity.

Studies on the Dicer gene and its impact on the cognitive and metabolic processes are currently carried out at the Nencki Institute’s Laboratory of Animal Models, a core facility in the newly established Neurobiology Center. The Center has been built on Campus Ochota in Warsaw as part of a large European project called the Centre for Preclinical Research and Technology (CePT). This project, financed from the Operational Programme Innovative Economy, brings together 10 research institutions from Warsaw.
“No one needs convincing that knowledge about the function of individual human genes is absolutely fundamental in biology as well as medicine”, says Dr Witold Konopka, head of the Laboratory of Animal Models. “But how do we determine a gene’s function, if no genetic modifications in humans are allowed? The only method is to create an animal, for example a mouse with genes turned on or off to model the studied illness. This is easy to say, but difficult to do, especially when the involved genes are really important for each cell”.
For several years Dr Konopka has been involved in research on the Dicer gene in mice. This gene, an analogue of which is also found in the human genome, codes for a protein that cuts RNA molecules into short, roughly 20-nucleotide fragments, which are important in regulating the activity of other genes. The Dicer gene needs to be active for the proper functioning of the cell. It cannot simply be turned off in the zygote, because the resulting defect would make proper development impossible.
Preparing a transgenic mouse in which the Dicer gene can be blocked in adulthood takes a year and a half. The process starts with flanking the Dicer gene on the DNA strand with two sequences known as loxP. This is done in stem cells, which are then injected into an embryo. Since the Dicer gene remains active, the embryo develops normally. At the same time, a zygote of the opposite sex is injected with a gene coding for a protein known as recombinase Cre-ERT2. Molecules of this protein consist of a part containing the Cre enzyme and a fragment that responds to a chemical compound called tamoxifen; until tamoxifen binds, this fragment prevents the recombinase from entering the cell nucleus.
Adult mice of both types are then crossbred to obtain progeny that inherit both the loxP-flanked Dicer gene and the gene coding for the recombinase from their parents. A mouse of this type was created thanks to a joint effort of research groups from world research centres such as the German Cancer Research Center (DKFZ) in Germany and Imperial College London in the United Kingdom.
In order to turn off the Dicer gene in such adult mice, it is enough to administer tamoxifen for a few days; the compound accumulates in neurons and allows the recombinase to enter the cell nucleus. There, the Cre enzyme recognises the loxP sequences and excises the Dicer coding fragment between them.
“I received the first mice, in which the Dicer gene could be switched off at any time, a few years ago during my postdoctoral fellowship at the German Cancer Research Center in Heidelberg. We now also breed such mice at the Nencki Institute’s Laboratory of Animal Models. But breeding the animals is only part of the task: if we want to use them for research, they have to be appropriately characterized”, explains Dr Konopka.
Traits of mice used for scientific research have to be well known. Without such knowledge researchers cannot determine whether a change observed in the appearance or behaviour of an animal is related to turning off the gene. “Two years ago we characterized the cognitive processes of these new mice. We determined that after turning off the Dicer gene the animals showed better memory than the controls”, says Dr Konopka. But about five months after deletion of the Dicer gene from the brain, the mice scored below the control group in tests of cognitive ability, which could be related to the death of neurons devoid of the Dicer gene. The scientists have now finished analysing the changes occurring in the metabolic processes of the new mice: for 3-4 weeks after the Dicer gene is turned off, they eat more and gain weight faster; their appetite then returns to normal, but the extra body weight remains.
“We have already established, with the required accuracy, how our mice learn and remember. Now we are certain that the same mice can be used to investigate obesity, and we plan to do that soon. But in our new lab we will not only conduct studies on disease models; we would also like to generate new transgenic animals for other research centres”, emphasizes Dr Konopka.
BRAIN initiative aims to improve tools for studying neurons to answer questions about human thought and behavior
The images appearing on the computer screen were almost too detailed and fast-moving to take in, Misha B. Ahrens remembers. He and colleague Philipp J. Keller were recording the activity of about 80,000 neurons in a live zebrafish brain, the first time something on this scale had been done. Cross-sectional pictures of the young fish’s head flew by, dotted with splotches of light.
The Howard Hughes Medical Institute (HHMI) neuroscientists were using a zebrafish larva with a fluorescent protein inserted in its neurons, and the protein was lighting up every time the cells fired. Their custom-built microscope imaged and recorded the resulting lightning storm in the fish’s brain in real time.
Ahrens commemorated the milestone experiment—which took place nearly seven months ago in a lab at the institute’s Janelia Farm Research Campus outside Washington, D.C.—by filming it with his iPhone. “It was mind-blowing to see the entire brain flash past our eyes,” he remembers.
Keller sat in awe at the computer, repeatedly pulling up and admiring slices of data the high-speed apparatus was collecting. The translucent zebrafish, immobilized in a glass tube filled with gel and nestled among the microscope’s optics, was completely unaware that its neural processing was causing such a stir.
Up until that point, scientists had been able to record simultaneous activity from only about 2 to 3% of the 100,000 neurons in a young zebrafish’s head, Keller says. He and Ahrens managed to capture 80%—a giant leap for fishkind.
On March 18, the duo reported their brain-imaging feat online in Nature Methods. Just 15 days later, President Barack Obama announced a large-scale neuroscience initiative to study the dynamics of brain circuits (C&EN, April 8, page 9).
Unlike the Human Connectome Project—a federal program that strives to uncover a static map of the brain’s circuits—this new initiative aims to uncover those circuits’ activity and interplay. BRAIN (Brain Research through Advancing Innovative Neurotechnologies), as the project is called, will get $100 million in federal support if Obama’s request is granted (see page 25), and it will get a similar amount from private foundations such as HHMI in 2014.
“It was a coincidence,” Keller says of the timing of the proposal. He and Ahrens weren’t involved in developing BRAIN, but their goal—to record all the activity from all the neurons in a simple organism’s brain at once—falls directly in line with the initiative.

Eighty thousand neurons is a lot. But it’s nothing compared with the 85 billion nerve cells that humans have in their brains, or even the 75 million that mice have. To make the leap to measuring large swaths of the brain circuits of rodents or even humans, BRAIN researchers will need to develop new methods of measuring neuronal activity. They are already working on molecular tags to more accurately indicate nerve cell firing in real time. And scientists are developing miniaturized probes to monitor brain cells without disturbing the organ itself, as well as faster techniques for analyzing the flood of data generated by such a huge number of neurons.
Some imaging methods that monitor multitudes of neurons, like that of Ahrens and Keller, already exist. As do techniques for probing scads of nerve cells with tiny electrodes. BRAIN will likely build on these technologies, experts say. But it will also shoot to build “dream” technologies such as implantable nanomaterials that transmit the activity of individual neurons from inside the head.
At the moment, however, no one knows the exact scope of BRAIN. The National Institutes of Health has already appointed a team of neuroscientists to draw up a blueprint for what should be a multiyear initiative. Other federal agencies involved—the National Science Foundation and the Defense Advanced Research Projects Agency—have yet to announce their strategies.
“Neuroscience is getting to the point where researchers cannot take the next big step to understand neural circuits armed with traditional technology,” says Rafael Yuste, a neuron-imaging expert at Columbia University.
And taking that step, he argues, is vital to understanding human thought. “We have a suspicion that the brain is an emergent system,” Yuste says. In other words, how the brain produces memories or actions involves the interactions of all its neurons, rather than just one or even 1,000. It’s like watching television, Yuste adds. “You need to see all the pixels, or at least most of them, to figure out what’s playing.”
Along with five other scientists, Yuste made the original pitch for a public-private project to map the brain’s dynamics in a 2012 article in Neuron. The group argued that not only could this approach help reveal how the human mind works, but it might also offer some insight into what happens when the brain malfunctions. Knowing how the brain’s circuits are supposed to function, Yuste says, could help pinpoint what’s going wrong in conditions such as schizophrenia, which likely involve faulty circuitry.
BRAIN proponents also say areas outside of science and medicine could profit from the initiative. If successful, they claim, BRAIN could yield economic benefits similar to those of the Human Genome Project, a program launched in 1990 to sequence all the base pairs in a person’s DNA. “Every dollar we spent to map the human genome has returned $140 to our economy,” President Obama noted when he announced BRAIN.
As was the case for the Human Genome Project, BRAIN has been criticized by many scientists. In an already-tight fiscal climate, some researchers have voiced worries that paying for the initiative will mean losing their own funds. And others have expressed reservations that the project is going after too many neurons to yield interpretable, useful results.
But no one seems to dispute that building better tools to record activity from nerve cells is a worthwhile goal. “There’s definitely room to grow in many of the techniques we use to record brain activity,” says Mark J. Schnitzer, a neuroscientist at Stanford University. So far, he says, progress has been made mainly by individual labs doing their own thing. But to get to the next level more rapidly, a coordinated effort like BRAIN—centers and labs of neuroscientists, chemists, and researchers in other disciplines working together—might be the ticket.
Until recently, the number of neurons being recorded simultaneously in experiments was doubling every seven years, according to a 2011 review in Nature Neuroscience. But the Janelia team blew this trend out of the water with its high-speed camera and microscope, which rapidly illuminates and images slices of the brain.
The Janelia experiment worked primarily because zebrafish larvae are transparent to light and can be easily immobilized without negative consequences to their brain activity. But moving to mice, which have more neurons and a light-impenetrable skull, will require some more serious innovation, Keller adds.

Some researchers have designed implantable prisms and fiber-optic probes to direct light into the depths of the mouse brain. But those optical tricks are still limited to measuring a few hundred neurons at once. Plus, the mouse has to be tethered to the fibers or prevented from moving altogether.
Stanford’s Schnitzer has overcome the mobility issue with a miniaturized microscope that he and his team designed to fit onto a mouse’s head. Standing three-quarters of an inch tall, the lightweight device, which contains its own light source and camera, gets implanted into the rodent’s brain, enabling researchers to track the freely moving animal’s nerve cell activity.
Early this year, Schnitzer’s group used the setup to follow the dynamics of roughly 1,000 neurons in a mouse’s brain for more than a month (Nat. Neurosci., DOI: 10.1038/nn.3329). The team learned that neurons in one part of the mouse’s brain fired in similar patterns whenever the mouse returned to a familiar spot in its enclosure.
Still, such optical techniques are invasive. “The most elegant experiment would be done from the outside, without mechanical disturbance to the brain,” Columbia’s Yuste says. He’d like to see BRAIN help develop new light sources that can penetrate farther into brain tissue than a few millimeters.
Also on Yuste’s neuron-imaging wish list is a better way to indicate cell firing. As in the Janelia experiment and Schnitzer’s microscope study, the imaging of neuronal activity is typically carried out with calcium indicators: either dye molecules that diffuse into neurons or proteins engineered to reside there, both designed to fluoresce when they bind calcium ions.
As a nerve cell fires, its ion channels open, allowing calcium ions to trickle inside and trigger the indicators.
However, “calcium imaging is flawed,” Yuste says. “It’s an indirect method of tracking neuronal firing.” The indicators can’t tell scientists whether a nerve cell fired a little or a lot, he argues. And they don’t track the cells’ electrical activity in real time because calcium diffusion and binding are comparatively slow.
So Yuste and others are working to develop dyes or nanomaterials, called voltage indicators, that bind within a neuron’s membrane and optically signal the cell’s electrical status. Progress is slow-going, however, because a cell’s membrane can hold only so many indicators on its surface and the resulting signal is low.
Another way neuroscientists are more directly measuring nerve cells’ electrical activity is with miniaturized electrodes and nanowires. These probes measure, at submillisecond speeds, the electrical current emitted by a neuron when it fires.

“But anytime you plunge anything into the brain, you have to worry about tissue damage,” says Sotiris Masmanidis, a neurobiologist at the University of California, Los Angeles. “The concern is, how much are you perturbing the system you’re studying?”
To minimize tissue disturbance, Masmanidis and others are lithographically fabricating arrays of microelectrodes that can record nerve cells’ electrical signals from 50 to 100 µm away. So far, the UCLA researcher says, electrode arrays are capable of measuring, at most, 100 to 1,000 neurons at a time.
Determining what types of nerve cells an arrayed microelectrode is measuring, however, is not exactly straightforward, given that it blindly measures any neuron in its vicinity, Masmanidis says. To figure it out, scientists have to take extra steps and monitor the cells’ reaction to drugs or other modulators.
But what good is measuring the dynamics of a slew of nerve cells without having any idea why they’re firing? BRAIN supporters think one way of getting an answer to which environmental cues or perceptions trigger certain neuronal activity patterns is a technique called optogenetics.

Hailed by Nature Methods as the “method of the year” in 2010, optogenetics enables scientists to activate particular nerve cells in the brains of animals with light. The researchers first engineer light-activated proteins into a mouse’s neurons and then trigger the macromolecules via fiber-optic arrays implanted in the rodent’s brain.
Once researchers have measured a firing pattern from an animal’s nerve cells, they can later play it back to see what happens, says Edward S. Boyden, an optogenetics pioneer and neurobiologist at Massachusetts Institute of Technology. “Once we ‘dial’ an activity pattern into the brain,” he says, “if we see that it’s enough to drive some behavior, that could be quite powerful for understanding which parts of the brain drive specific functions.”
Researchers have already been optogenetically stimulating clusters of a few hundred cells in mice, investigating the rodents’ decision-making abilities and aggressive tendencies.
But a brain is more than just electrical activity, says Anne M. Andrews, a psychiatry professor at UCLA. It also uses at least 100 types of neurotransmitters that are involved in triggering neuronal activity at cell junctions, or synapses. “If we want to understand how information is encoded in neuronal signaling, we have to study chemical neurotransmission at the level of synapses,” Andrews says.
And what better way to do that than with nanotechnology? asks Paul S. Weiss, a chemist and nanoscience expert, also at UCLA. After all, the junctions between neurons are just 10 nm wide, he adds.
Andrews and Weiss are hoping BRAIN will support the development of nanoscale sensors to measure the chemical activity at synapses. And they’re already in talks with UCLA’s Masmanidis to functionalize channels on his microelectrodes with molecules that could sense neurotransmitters.
No matter what BRAIN ends up encompassing, one thing is clear: Advances in the numbers of neurons monitored will necessitate improvements in data analysis and storage.
Take, for instance, the experiment done at Janelia. That single session of recording from a zebrafish brain generated 1 terabyte of data. “So you can fit two or three experiments on a computer hard drive,” Ahrens says. “It’s not a bottleneck yet, but when we start creating faster microscopes, computational power might become a problem.”
He and Keller also have just scratched the surface when it comes to analyzing the data they obtained from their initial experiments. As they reported in their Nature Methods paper, the pair found a circuit in the fish’s hindbrain functionally coupled to a specific part of its spinal cord. But determining what that means and what the rest of the brain is doing will require more study and help from computational neuroscientists.
“It’s apparent that to really understand what the brain is doing, you need to have as complete information as you can,” Ahrens says. “It’s a good goal to have, to measure as many neurons as possible.” But it’s a challenging one.

A multicenter study led by scientists at the University of Pittsburgh School of Medicine shows that mild traumatic brain injury after blast exposure produces inflammation, oxidative stress and gene activation patterns akin to disorders of memory processing such as Alzheimer’s disease. Their findings were recently reported in the online version of the Journal of Neurotrauma.
Blast-induced traumatic brain injury (TBI) has become an important issue in combat casualty care, said senior investigator Patrick Kochanek, M.D., professor and vice chair of critical care medicine and director of the Safar Center for Resuscitation Research at Pitt. In many cases of mild TBI, MRI scans and other conventional imaging technology do not show overt damage to the brain.
“Our research reveals that despite the lack of a lot of obvious neuronal death, there is a lot of molecular madness going on in the brain after a blast exposure,” Dr. Kochanek said. “Even subtle injuries resulted in significant alterations of brain chemistry.”
The research team developed a rat model to examine whether mild blast exposure in a device called a shock tube caused any changes in the brain even if there was no indication of direct cell death, such as bleeding. Brain tissues of rats exposed to blast and to a sham procedure were tested two and 24 hours after the injury.
Gene activity patterns, which shifted over time, resembled patterns seen in neurodegenerative diseases, particularly Alzheimer’s, Dr. Kochanek noted. Markers of inflammation and oxidative stress, which reflects disruptions of cell signaling, were elevated, but there was no indication of energy failure that would be seen with poor tissue oxygenation.
“It appears that although the neurons don’t die after a mild injury, they do sustain damage,” he said. “It remains to be seen what multiple exposures, meaning repeat concussions, do to the brain over the long term.”