Posts tagged science

New study shows what happens in the brain to make music rewarding
A new study reveals what happens in our brain when we decide to purchase a piece of music when we hear it for the first time. The study, conducted at the Montreal Neurological Institute and Hospital – The Neuro, McGill University and published in the journal Science on April 12, pinpoints the specific brain activity that makes new music rewarding and predicts the decision to purchase music.
Participants in the study listened to 60 previously unheard music excerpts while undergoing functional magnetic resonance imaging (fMRI), indicating how much they were willing to spend on each excerpt in an auction paradigm. “When people listen to a piece of music they have never heard before, activity in one brain region can reliably and consistently predict whether they will like it enough to buy it; this region is the nucleus accumbens, which is involved in forming expectations that may be rewarding,” says lead investigator Dr. Valorie Salimpoor, who conducted the research in Dr. Robert Zatorre’s lab at The Neuro and is now at Baycrest Health Sciences’ Rotman Research Institute. “What makes music so emotionally powerful is the creation of expectations. Activity in the nucleus accumbens is an indicator that expectations were met or surpassed, and in our study we found that the more activity we see in this brain area while people are listening to music, the more money they are willing to spend.”
The second important finding is that the nucleus accumbens doesn’t work alone, but interacts with the auditory cortex, an area of the brain that stores information about the sounds and music we have been exposed to. The more rewarding a given piece was, the greater the cross-talk between these regions. Similar interactions were also seen between the nucleus accumbens and brain areas involved in high-level sequencing and complex pattern recognition, as well as areas involved in assigning emotional and reward value to stimuli.
In other words, the brain assigns value to music through the interaction of ancient dopaminergic reward circuitry, involved in reinforcing behaviours that are absolutely necessary for our survival such as eating and sex, with some of the most evolved regions of the brain, involved in advanced cognitive processes that are unique to humans.
“This is interesting because music consists of a series of sounds that when considered alone have no inherent value, but when arranged together through patterns over time can act as a reward,” says Dr. Robert Zatorre, researcher at The Neuro and co-director of the International Laboratory for Brain, Music and Sound Research. “The integrated activity of brain circuits involved in pattern recognition, prediction, and emotion allows us to experience music as an aesthetic or intellectual reward.”
“The brain activity in each participant was the same when they were listening to music that they ended up purchasing, although the pieces they chose to buy were all different,” adds Dr. Salimpoor. “These results help us to see why people like different music – each person has their own uniquely shaped auditory cortex, which is formed based on all the sounds and music heard throughout our lives. Also, the sound templates we store are likely to have previous emotional associations.”
An innovative aspect of this study is how closely it mimics real-life music-listening experiences: the researchers used an interface and prices similar to those of iTunes. To replicate a real-life scenario as closely as possible and to assess reward value objectively, individuals purchased music with their own money, as an indication that they wanted to hear it again. Since musical preferences are influenced by past associations, only novel excerpts were selected, chosen with music recommendation software (such as Pandora or Last.fm) to reflect individual preferences while minimizing explicit predictions.
The interactions between the nucleus accumbens and the auditory cortex suggest that we create expectations of how musical sounds should unfold based on what is learned and stored in our auditory cortex, and that our emotions result from the violation or fulfillment of these expectations. We are constantly making reward-related predictions to survive, and this study provides neurobiological evidence that we also make predictions when listening to an abstract stimulus, music, even if we have never heard it before. Through pattern recognition and prediction, an otherwise simple set of stimuli, arranged together over time, becomes powerful enough to make us happy or bring us to tears, and to communicate some of the most intense, complex emotions and thoughts.
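The study’s central quantitative relationship, that more nucleus accumbens activity predicts a higher bid, can be sketched with a toy least-squares fit. This is a hypothetical illustration with invented numbers, not the authors’ actual analysis pipeline (which used fMRI activity estimates within a formal auction paradigm):

```python
# Hypothetical sketch: relate per-excerpt nucleus accumbens (NAcc)
# activity estimates to auction bids with a simple least-squares line.
# All values below are invented for illustration only.

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x (single predictor)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Invented NAcc activity (arbitrary units) and bids in dollars
# ($0, $0.99, $1.29, $2 were the study's bid levels).
nacc = [0.1, 0.4, 0.5, 0.9, 1.2, 1.5]
bids = [0.00, 0.00, 0.99, 0.99, 1.29, 2.00]

a, b = fit_line(nacc, bids)
print(f"bid = {a:.2f} + {b:.2f} * NAcc activity")
```

A positive slope `b` corresponds to the reported finding: the more activity in this region while listening, the more money participants were willing to spend.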
(Image: Peter Finnie and Ben Beheshti)
Getting a grip on hand function: Discovering key spinal cord circuits
Professor and neurosurgeon Dr. Rob Brownstone and postdoctoral fellow Dr. Tuan Bui have identified the spinal cord circuit that controls the hands’ ability to grasp.
The world’s leading neuroscience journal, Neuron, published the breakthrough finding in its latest issue.
The researchers have found that a certain population of neurons in the spinal cord — called the dI3 interneurons — assess information from sensory neurons in the hands and then send the appropriate signals to motor neurons in the spinal cord, and hence to the muscles, to control the hands’ grip.
Importance of hand-grip control
“This circuit allows us to subtly and unconsciously adjust our grasp so we apply the right amount of force to whatever we’re holding,” says Dr. Brownstone, a professor in the Department of Medical Neurosciences and the Division of Neurosurgery. “This mechanism is disrupted in spinal cord injuries, which can completely eliminate the ability to grasp, and in neurodegenerative diseases like Alzheimer’s disease, which can lead to an uncontrollable reflexive grasp such that people grab and can’t let go of what they touch.”
Impaired hand function has a devastating effect on people’s independence and ability to function in daily life. As Dr. Brownstone points out, people with quadriplegia ranked hand function as their number-one priority, when asked in a 2004 survey which function they would most want to recover if they could. They rated hand function well above trunk stability, walking, sexual function, bladder and bowel control, and normal sensation.
An unexpected finding
Drs. Brownstone and Bui were testing a spinal cord circuit for its role in the rhythmic pattern of walking, when they found it controlled hand grip instead. “The mice with this circuit disrupted were walking just fine, but I found it was unusually easy to remove them from their cages,” recounts Dr. Bui. “Mice will usually grab onto the cage wires when you go take them out, so this really got us thinking.”
While Dr. Bui was pondering the meaning of this unexpected observation in the lab, Dr. Brownstone was in his neurosurgery clinic, assessing a patient who was unable to control her grasp. “When she took my hand, she was unable to let go,” he recalls. “I had to peel her fingers off one by one to release my hand.”
As they compared notes, Drs. Brownstone and Bui quickly realized they had come across the circuit that controls hand grasp. Struck by the implications of their observations, they embarked on a series of experiments — with collaborators, including Dr. Tom Jessell at Columbia University in New York City — which validated the finding.
A path to future treatments
Now that the researchers have identified the specific spinal cord circuit that controls hand grip, they can go on to find targets for potential treatments for impaired hand function. “It’s possible that a neurotransmitter or other agent could be delivered to the spinal cord to correct the faulty circuit,” notes Dr. Brownstone. “It could be a complex strategy, but understanding is always the first step.”
Dr. Brownstone is a Tier 1 Canada Research Chair in spinal cord circuits. His research is also supported through grants from the Canadian Institutes of Health Research. Dr. Bui is a key member of Dr. Brownstone’s research team in the Motor Control Lab at Dalhousie University, where they are identifying the neural circuits that control our ability to walk and move in coordinated ways. Their ultimate goal is to identify targets for therapies to restore lost motor function and control in people with spinal cord injuries and other neurological diseases.
The hippocampus in schizophrenia is characterized by both hypermetabolism and reduced size. It remains unknown whether these abnormalities are mechanistically linked. Here we addressed this question by using MRI tools that can map hippocampal metabolism and structure in patients and mouse models. In at-risk patients, hypermetabolism was found to begin in CA1 and spread to the subiculum after psychosis onset. CA1 hypermetabolism at baseline predicted hippocampal atrophy, which occurred during progression to psychosis, most prominently in similar regions. Next, we used ketamine to model conditions of acute psychosis in mice. Acute ketamine reproduced a similar regional pattern of hypermetabolism, while repeated exposure shifted the hippocampus to a hypermetabolic basal state with concurrent atrophy and pathology in parvalbumin-expressing interneurons. Parallel in vivo experiments using the glutamate-reducing drug LY379268 and direct measurements of extracellular glutamate showed that glutamate drives both neuroimaging abnormalities. These findings show that hippocampal hypermetabolism leads to atrophy in psychotic disorder and suggest glutamate as a pathogenic driver.
Scientists at The Scripps Research Institute (TSRI) have shed light on one of the major toxic mechanisms of Alzheimer’s disease. The discoveries could lead to a much better understanding of the Alzheimer’s process and how to prevent it.
The findings, reported in the April 10, 2013 issue of the journal Neuron, show that brain damage in Alzheimer’s disease is linked to the overactivation of an enzyme called AMPK. When the scientists blocked this enzyme in mouse models of the disease, neurons were protected from loss of synapses—neuron-to-neuron connection points—typical of the early phase of Alzheimer’s disease.
“These findings open up many new avenues of investigation, including the possibility of developing therapies that target the upstream mechanisms leading to AMPK overactivation in the brain,” said TSRI Professor Franck Polleux, who led the new study.
Alzheimer’s disease, a fatal neurodegenerative disorder afflicting more than 25 million people worldwide, currently has no cure or even disease-delaying therapy.
In addition to having implications for Alzheimer’s drug discovery, Polleux noted, the findings suggest the need for further safety studies on an existing drug, metformin. Metformin, a popular treatment for type 2 diabetes, causes AMPK activation.
Tantalizing Clues to Alzheimer’s
Researchers have known for years that people in the earliest stages of Alzheimer’s disease begin to lose synapses in certain memory-related brain areas. Small aggregates of the protein amyloid beta can cause this loss of synapses, but how they do so has been a mystery.
Until recently, Polleux’s laboratory had focused not on Alzheimer’s research but on the normal development and growth of neurons. In 2011, he and his colleagues reported that AMPK overactivation by metformin, among other compounds, in animal models impaired the ability of neurons to grow output stalks, or axons.
Around the same time, separate research groups found clues that AMPK might also have a role in Alzheimer’s disease. One group reported that AMPK can be activated in neurons by amyloid beta, which in turn can cause a modification of the protein tau in a process known as phosphorylation. Tangles of tau with multiple phosphorylations (“hyperphosphorylated” tau) are known to accumulate in neurons in affected brain areas in Alzheimer’s. These results, published two years ago, reported abnormally high levels of activated AMPK in these tangle-ridden neurons.
Polleux decided to investigate further, to determine whether the reported interactions of AMPK with amyloid beta and tau can in fact cause the damage seen in the brains of Alzheimer’s patients. “Very little was known about the function of this AMPK pathway in neurons, and we happened to have all the tools needed to study it,” he said.
In Search of Answers
Georges Mairet-Coello, a postdoctoral research associate in the Polleux lab, performed most of the experiments for the new study. He began by confirming that amyloid beta, in the small-aggregate (“oligomer”) form that is toxic to synapses, does indeed strongly activate AMPK; amyloid beta oligomers stimulate certain neuronal receptors, which in turn causes an influx of calcium ions into the neurons. He found that this calcium influx triggers the activation of an enzyme called CAMKK2, which appears to be the main activator of AMPK in neurons.
The team then showed that this AMPK overactivation in neurons is the essential reason for amyloid beta’s synapse-harming effect. Normally, the addition of amyloid beta oligomers to a culture of neurons causes the swift disappearance of many of the neurons’ dendritic spines—the rootlike, synapse-bearing input stalks that receive signals from other neurons. With a variety of tests, the scientists showed that amyloid beta oligomers can’t cause this dendritic spine loss unless AMPK overactivation occurs—and indeed AMPK overactivation on its own can cause the spine loss.
For a key experiment the team used J20 mice, which are genetically engineered to overproduce mutant amyloid beta, and eventually develop an Alzheimer’s-like condition. “When J20 mice are only three months old, they already show a strong decrease in dendritic spine density, in a set of memory-related neurons that are also affected early in human Alzheimer’s,” Mairet-Coello said. “But when we blocked the activity of CAMKK2 or AMPK in these neurons, we completely prevented the spine loss.”
Next Mairet-Coello investigated the role of the tau protein. Ordinarily it serves as a structural element in neuronal axons, but in Alzheimer’s it somehow becomes hyperphosphorylated and drifts into other neuronal areas, including dendrites where its presence is associated with spine loss. Recent studies have shown that amyloid beta’s toxicity to dendritic spines depends largely on the presence of tau, but just how the two Alzheimer’s proteins interact has been unclear.
The team took a cue from a 2004 study of Drosophila fruit flies, in which an AMPK-like enzyme’s phosphorylation of specific sites on the tau protein led to a cascade of further phosphorylations and the degeneration of nerve cells. The scientists confirmed that one of these sites, S262, is indeed phosphorylated by AMPK. They then showed that this specific phosphorylation of tau accounts to a significant extent for amyloid beta’s synapse toxicity. “Blocking the phosphorylation at S262, by using a mutant form of tau that can’t be phosphorylated at that site, prevented amyloid beta’s toxic effect on spine density,” Mairet-Coello said.
The result suggests that amyloid beta contributes to Alzheimer’s via AMPK, mostly as an enabler of tau’s toxicity.
More Studies Ahead
Mairet-Coello, Polleux and their colleagues are now following up with further experiments to determine what other toxic processes, such as excessive autophagy, are promoted by AMPK overactivation and might also contribute to the long-term aspects of Alzheimer’s disease progression. They are also interested in the long-term effects of blocking AMPK overactivation in the J20 mouse model as well as in other mouse models of Alzheimer’s disease, which normally develop cognitive deficits at later stages. “We already have contacts within the pharmaceuticals industry who are potentially interested in targeting either CAMKK2 or AMPK,” says Polleux.
The other contributors to the study, “The CAMKK2-AMPK kinase pathway mediates the synaptotoxic effects of amyloid beta oligomers through tau phosphorylation,” were Julien Courchet, Simon Pieraut, Virginie Courchet and Anton Maximov, all of TSRI.
(Source: scripps.edu)

Flies reveal that a sense of smell, like a melody, depends upon timing
The sense of smell remains a mystery in many respects. Fragrance companies, for instance, know it is crucial that chemical compounds in perfumes reach nostrils at different rates to create the desired sensory experience, but it has been unclear why. Yale researchers decided to interrogate the common fruit fly for answers.
The team of Yale scientist Thierry Emonet, his postdoctoral associate Carlotta Martelli, and his colleague John Carlson systematically recorded both the stimulus reaching the fly and the responses of individual neurons over time. They found that the timing of neuronal response was independent of the concentration of the odor in the air, which in theory might help flies track fluctuating odor stimuli. However, the timing of neuronal response did depend on the identity of the odor.
Different odors elicited tiny delays in neural response. Such odor-dependent delays could be useful to the brain processing complex scents, say the scientists. The research also shows that specific interactions between odors and surfaces can affect the timing of the stimulus and therefore neural response.
Emonet says the findings suggest the world of smell is like music, in which chemical compounds of the scent act as notes and enable recognition of specific odors depending upon when they are played, or processed. For more information on the research, see the April 9 issue of the journal Neuroscience.
Do the brains of different people listening to the same piece of music actually respond in the same way? An imaging study by Stanford University School of Medicine scientists says the answer is yes, which may in part explain why music plays such a big role in our social existence.

(Image: Anthony Ellis)
The investigators used functional magnetic resonance imaging to identify a distributed network of several brain structures whose activity levels waxed and waned in a strikingly similar pattern among study participants as they listened to classical music they’d never heard before. The results will be published online April 11 in the European Journal of Neuroscience.
"We spend a lot of time listening to music — often in groups, and often in conjunction with synchronized movement and dance," said Vinod Menon, PhD, a professor of psychiatry and behavioral sciences and the study’s senior author. "Here, we’ve shown for the first time that despite our individual differences in musical experiences and preferences, classical music elicits a highly consistent pattern of activity across individuals in several brain structures including those involved in movement planning, memory and attention."
The notion that healthy subjects respond to complex sounds in the same way, Menon said, could provide novel insights into how individuals with language and speech disorders might listen to and track information differently from the rest of us.
The new study is one in a series of collaborations between Menon and co-author Daniel Levitin, PhD, a psychology professor at McGill University in Montreal, dating back to when Levitin was a visiting scholar at Stanford several years ago.
To make sure it was music, not language, that study participants’ brains would be processing, Menon’s group used music that had no lyrics. Anything participants had heard before was also excluded, to eliminate the confound of some participants knowing a selection while others heard it for the first time. Using obscure pieces of music also avoided tripping off memories, such as where participants were the first time they heard a selection.
The researchers settled on complete classical symphonic musical pieces by 18th-century English composer William Boyce, known to musical cognoscenti as “the English Bach” because his late-baroque compositions in some respects resembled those of the famed German composer. Boyce’s works fit well into the canon of Western music but are little known to modern Americans.
Next, Menon’s group recruited 17 right-handed participants (nine men and eight women) between the ages of 19 and 27 with little or no musical training and no previous knowledge of Boyce’s works. (Conventional maps of brain anatomy are based on studies of right-handed people. Left-handed people’s brains tend to deviate from that map.)
While participants listened to Boyce’s music through headphones with their heads maintained in a fixed position inside an fMRI chamber, their brains were imaged for more than nine minutes. During this imaging session, participants also heard two types of “pseudo-musical” stimuli containing one or another attribute of music but lacking in others. In one case, all of the timing information in the music was obliterated, including the rhythm, with an effect akin to a harmonized hissing sound. The other pseudo-musical input involved maintaining the same rhythmic structure as in the Boyce piece but with each tone transformed by a mathematical algorithm to another tone so that the melodic and harmonic aspects were drastically altered.
The team identified a hierarchical network stretching from low-level auditory relay stations in the midbrain to high-level cortical brain structures related to working memory and attention, and beyond that to movement-planning areas in the cortex. These regions track structural elements of a musical stimulus over time periods lasting up to several seconds, with each region processing information according to its own time scale.
Activity levels in several different places in the brain responded similarly from one individual to the next to music, but less so or not at all to pseudo-music. While these brain structures have been implicated individually in musical processing, their identifications had been obtained by probing with artificial laboratory stimuli, not real music. Nor had their coordination with one another been previously observed.
Notably, subcortical auditory structures in the midbrain and thalamus showed significantly greater synchronization in response to musical stimuli. These structures have been thought to passively relay auditory information to higher brain centers, Menon said. “But if they were just passive relay stations, their responses to both types of pseudo-music would have been just as closely synchronized between individuals as to real music.” The study demonstrated, for the first time, that those structures’ activity levels respond preferentially to music rather than to pseudo-music, suggesting that higher-level centers in the cortex direct these relay stations to closely heed sounds that are specifically musical in nature.
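The intersubject synchronization described here is, in essence, a correlation between different listeners’ activity time courses for the same brain region. A toy version of that computation, with invented signals rather than the study’s data, might look like this:

```python
# Hypothetical sketch of intersubject correlation: how similarly two
# listeners' regional activity rises and falls over the same piece.
# The time courses below are invented for illustration only.
import math

def pearson(x, y):
    """Pearson correlation between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented "regional activity" time courses for two listeners
# hearing the same musical excerpt.
subj1 = [0.2, 0.5, 0.9, 0.6, 0.3, 0.7]
subj2 = [0.1, 0.6, 0.8, 0.5, 0.4, 0.8]

r = pearson(subj1, subj2)
print(f"intersubject correlation: r = {r:.2f}")
```

In the study’s terms, a region counts as synchronized when such correlations are reliably high across listener pairs for real music but not for the pseudo-musical control stimuli.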
The fronto-parietal cortex, which anchors high-level cognitive functions including attention and working memory, also manifested intersubject synchronization — but only in response to music and only in the right hemisphere.
Interestingly, the structures involved included the right-brain counterparts of two important structures in the brain’s left hemisphere, Broca’s and Geschwind’s areas, known to be crucial for speech and language interpretation.
"These right-hemisphere brain areas track non-linguistic stimuli such as music in the same way that the left hemisphere tracks linguistic sequences," said Menon.
In any single individual listening to music, each cluster of music-responsive areas appeared to be tracking music on its own time scale. For example, midbrain auditory processing centers worked more or less in real time, while the right-brain analogs of the Broca’s and Geschwind’s areas appeared to chew on longer stretches of music. These structures may be necessary for holding musical phrases and passages in mind as part of making sense of a piece of music’s long-term structure.
"A novelty of our work is that we identified brain structures that track the temporal evolution of the music over extended periods of time, similar to our everyday experience of music listening," said postdoctoral scholar Daniel Abrams, PhD, the study’s first author.
The preferential activation of motor-planning centers in response to music, compared with pseudo-music, suggests that our brains respond naturally to musical stimulation by foreshadowing movements that typically accompany music listening: clapping, dancing, marching, singing or head-bobbing. The apparently similar activation patterns among normal individuals make it more likely our movements will be socially coordinated.
"Our method can be extended to a number of research domains that involve interpersonal communication. We are particularly interested in language and social communication in autism," Menon said. "Do children with autism listen to speech the same way as typically developing children? If not, how are they processing information differently? Which brain regions are out of sync?"
(Source: eurekalert.org)
Lights, Chemistry, Action: New Method for Mapping Brain Activity
Building on their history of innovative brain-imaging techniques, scientists at the U.S. Department of Energy’s Brookhaven National Laboratory and collaborators have developed a new way to use light and chemistry to map brain activity in fully-awake, moving animals. The technique employs light-activated proteins to stimulate particular brain cells and positron emission tomography (PET) scans to trace the effects of that site-specific stimulation throughout the entire brain. As described in a paper published online today in the Journal of Neuroscience, the method will allow researchers to map exactly which downstream neurological pathways are activated or deactivated by stimulation of targeted brain regions, and how that brain activity correlates with particular behaviors and/or disease conditions.
"This technique gives us a new way to look at the function of specific brain cells and map which brain circuits are active in a wide range of neuropsychiatric diseases — from depression to Parkinson’s disease, neurodegenerative disorders, and drug addiction — and also to monitor the effects of various treatments," said the paper’s lead author, Panayotis (Peter) Thanos, a neuroscientist and director of the Behavioral Neuropharmacology and Neuroimaging Section — part of the National Institute on Alcohol Abuse and Alcoholism (NIAAA) Laboratory of Neuroimaging at Brookhaven Lab — and a professor at Stony Brook University. "Because the animals are awake and able to move during stimulation, we can also directly study how their behavior correlates with brain activity," he said.
The new brain-mapping method combines very recent advances in a field known as “optogenetics” — the use of optics (light activation) and genetics (genetically coded light-sensitive proteins) to control the activity of individual neurons, or nerve cells — and Brookhaven’s historical development of radioactively labeled chemical tracers to track biological activity with PET scanners.
The scientists used a modified virus to deliver a light-sensitive protein to particular brain cells in rats. Genetic coding can deliver the protein to specifically targeted brain-cell receptors. Then, after stimulating those proteins with light shone through an optical fiber inserted through a tiny tube called a cannula, they monitored overall brain activity using a radiotracer known as 18FDG, which serves as a stand-in for glucose, the body’s (and brain’s) main source of energy.
The unique chemistry of 18FDG causes it to be temporarily “trapped” inside cells that are hungry for glucose — those activated by the brain stimulation — and remain there long enough for the detectors of a PET scanner to pick up the radioactive signal, even after the animals are anesthetized to ensure they stay still for scanning. But because the animals were awake and moving when the tracer was injected and the brain cells were being stimulated, the scans reveal what parts of the brain were activated (or deactivated) under those conditions, giving scientists important information about how those brain circuits function and correlate with the animals’ behaviors.
"In this paper, we wanted to stimulate the nucleus accumbens, a key part of the brain involved in reward that is very important to understanding drug addiction," Thanos said. "We wanted to activate the cells in that area and see which brain circuits were activated and deactivated in response."
The scientists used the technique to trace activation and deactivation in a number of key pathways, and confirmed their results with other analysis techniques.
The method can reveal even more precise effects.
"If we want to know more about the role played by specific types of receptors — say the dopamine D1 or D2 receptors involved in processing reward — we could tailor the light-sensitive protein probe to specifically stimulate one or the other to tease out those effects," he said.
Another important aspect is that the technique does not require the scientists to identify in advance the regions of the brain they want to investigate, but instead provides candidate brain regions involved anywhere in the brain – even regions not well understood.
"We look at the whole brain," Thanos said. "We take the PET images and co-register them with anatomical maps produced with magnetic resonance imaging (MRI), and use statistical techniques to do comparisons voxel by voxel. That allows us to identify which areas are more or less activated under the conditions we are exploring without any prior bias about what regions should be showing effects."
After they see a statistically significant effect, they use the MRI maps to identify the locations of those particular voxels to see what brain regions they are in.
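The voxel-by-voxel comparison Thanos describes can be sketched as a per-voxel paired test between baseline and stimulation scans. This is a simplified illustration with invented uptake values and an assumed significance threshold, not Brookhaven’s actual statistical pipeline:

```python
# Hypothetical sketch of voxel-wise comparison: for each voxel, compare
# 18FDG uptake across animals between a baseline scan and an
# optogenetic-stimulation scan using a paired t statistic.
# All values are invented for illustration only.
import math

def paired_t(baseline, stim):
    """Paired t statistic across animals for one voxel."""
    diffs = [s - b for b, s in zip(baseline, stim)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)

# Invented uptake values for 5 animals at two voxels.
voxels = {
    "voxel_A": ([1.00, 1.10, 0.90, 1.00, 1.05],
                [1.40, 1.50, 1.30, 1.45, 1.50]),
    "voxel_B": ([1.00, 1.20, 0.80, 1.10, 0.90],
                [1.00, 1.10, 0.90, 1.20, 0.80]),
}

THRESHOLD = 2.78  # assumed two-tailed t critical value, df=4, p < 0.05
for name, (base, stim) in voxels.items():
    t = paired_t(base, stim)
    flag = "activated" if abs(t) > THRESHOLD else "n.s."
    print(f"{name}: t = {t:.2f} ({flag})")
```

Voxels whose statistic survives the threshold are then located on the co-registered MRI maps, which is the unbiased whole-brain search the quote describes.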
"This opens it up to seeing an effect in any region in the brain — even parts where you would not expect or think to look — which could be a key to new discoveries," he said.
See-through brains clarify connections
Technique to make tissue transparent offers three-dimensional view of neural networks.
A chemical treatment that turns whole organs transparent offers a big boost to the field of ‘connectomics’ — the push to map the brain’s fiendishly complicated wiring. Scientists could use the technique to view large networks of neurons with unprecedented ease and accuracy. The technology also opens up new research avenues for old brains that were saved from patients and healthy donors.
“This is probably one of the most important advances for doing neuroanatomy in decades,” says Thomas Insel, director of the US National Institute of Mental Health in Bethesda, Maryland, which funded part of the work. Existing technology allows scientists to see neurons and their connections in microscopic detail — but only across tiny slivers of tissue. Researchers must reconstruct three-dimensional data from images of these thin slices. Aligning hundreds or even thousands of these snapshots to map long-range projections of nerve cells is laborious and error-prone, rendering fine-grain analysis of whole brains practically impossible.
The new method instead allows researchers to see directly into optically transparent whole brains or thick blocks of brain tissue. Called CLARITY, it was devised by Karl Deisseroth and his team at Stanford University in California. “You can get right down to the fine structure of the system while not losing the big picture,” says Deisseroth, who adds that his group is in the process of rendering an entire human brain transparent.
The technique, published online in Nature on 10 April, turns the brain transparent using the detergent SDS, which strips away lipids that normally block the passage of light. Other groups have tried to clarify brains in the past, but many lipid-extraction techniques dissolve proteins and thus make it harder to identify different types of neurons. Deisseroth’s group solved this problem by first infusing the brain with acrylamide, which binds proteins, nucleic acids and other biomolecules. When the acrylamide is heated, it polymerizes and forms a tissue-wide mesh that secures the molecules. The resulting brain–hydrogel hybrid showed only 8% protein loss after lipid extraction, compared to 41% with existing methods.
Applying CLARITY to whole mouse brains, the researchers viewed fluorescently labelled neurons in areas ranging from outer layers of the cortex to deep structures such as the thalamus. They also traced individual nerve fibres through 0.5-millimetre-thick slabs of formalin-preserved autopsied human brain — orders of magnitude thicker than slices currently imaged.
“The work is spectacular. The results are unlike anything else in the field,” says Van Wedeen, a neuroscientist at the Massachusetts General Hospital in Boston and a lead investigator on the US National Institutes of Health’s Human Connectome Project (HCP), which aims to chart the brain’s neuronal communication networks. The new technique, he says, could reveal important cellular details that would complement data on large-scale neuronal pathways that he and his colleagues are mapping in the HCP’s 1,200 healthy participants using magnetic resonance imaging.
Francine Benes, director of the Harvard Brain Tissue Resource Center at McLean Hospital in Belmont, Massachusetts, says that more tests are needed to assess whether the lipid-clearing treatment alters or damages the fundamental structure of brain tissue. But she and others predict that CLARITY will pave the way for studies on healthy brain wiring, and on brain disorders and ageing.
Researchers could, for example, compare circuitry in banked tissue from people with neurological diseases and from controls whose brains were healthy. Such studies in living people are impossible, because most neuron-tracing methods require genetic engineering or injection of dye in living animals. Scientists might also revisit the many specimens in repositories that have been difficult to analyse because human brains are so large.
The hydrogel–tissue hybrid formed by CLARITY — stiffer and more chemically stable than untreated tissue — might also turn delicate and rare disease specimens into reusable resources, Deisseroth says. One could, in effect, create a library of brains that different researchers check out, study and then return.
Subconscious mental categories help brain sort through everyday experiences
Your brain knows it’s time to cook when the stove is on, and the food and pots are out. When you rush away to calm a crying child, though, cooking is over and it’s time to be a parent. Your brain processes and responds to these occurrences as distinct, unrelated events.
But it remains unclear exactly how the brain breaks such experiences into “events,” or the related groups that help us mentally organize the day’s many situations. A dominant account of event perception, known as prediction error, holds that the brain draws a line between the end of one event and the start of another when things take an unexpected turn (such as a suddenly distraught child).
Challenging that idea, Princeton University researchers suggest that the brain may actually work from subconscious mental categories it creates based on how it considers people, objects and actions to be related. Specifically, these details are sorted by temporal relationship, meaning the brain recognizes that they tend to — or tend not to — pop up near one another at specific times, the researchers report in the journal Nature Neuroscience.
So, a series of experiences that usually occur together (temporally related) form an event until a non-temporally related experience occurs and marks the start of a new event. In the example above, pots and food usually make an appearance during cooking; a crying child does not. Therein lies the partition between two events, so says the brain.
This dynamic, which the researchers call “shared temporal context,” works much like the categories our minds use to organize objects, explained lead author Anna Schapiro, a doctoral student in Princeton’s Department of Psychology.
"We’re providing an account of how you come to treat a sequence of experiences as a coherent, meaningful event," Schapiro said. "Events are like object categories. We associate robins and canaries because they share many attributes: They can fly, have feathers, and so on. These associations help us build a ‘bird’ category in our minds. Events are the same, except the attributes that help us form associations are temporal relationships."
Supporting this idea, the researchers captured brain activity showing that abstract symbols and patterns with no obvious similarity nonetheless excited overlapping groups of neurons when presented to study participants as a related group. From this, the researchers constructed a computer model that can predict and outline the neural pathways through which people process situations, and can reveal whether those situations are considered part of the same event.
The parallels drawn between event details are based on personal experience, Schapiro said. People need to have an existing understanding of the various factors that, when combined, correlate with a single experience.
"Everyone agrees that ‘having a meeting’ or ‘chopping vegetables’ is a coherent chunk of temporal structure, but it’s actually not so obvious why that is if you’ve never had a meeting or chopped vegetables before," Schapiro said.
"You have to have experience with the shared temporal structure of the components of the events in order for the event to hold together in your mind," she said. "And the way the brain implements this is to learn to use overlapping neural populations to represent components of the same event."
During a series of experiments, the researchers presented human participants with sequences of abstract symbols and patterns. Without the participants’ knowledge, the symbols were grouped into three “communities” of five symbols each, with shapes in the same community tending to appear near one another in the sequence.
After watching these sequences for roughly half an hour, participants were asked to segment the sequences into events in a way that felt natural to them. They tended to break the sequences into events that coincided with the communities the researchers had prearranged, which shows that the brain quickly learns the temporal relationships between the symbols, Schapiro said.
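The stimulus design and the segmentation behavior can be caricatured in a few lines of code. This is a minimal sketch, not the study’s actual protocol: the 15-symbol, three-community layout comes from the article, but the random-walk probabilities and the uniform choice of next symbol are illustrative assumptions.

```python
import random

# Three "communities" of five abstract symbols each (15 total, as in the study).
COMMUNITIES = [list(range(0, 5)), list(range(5, 10)), list(range(10, 15))]

def community_of(symbol):
    """Index of the community a symbol belongs to."""
    return symbol // 5

def generate_sequence(length, p_within=0.8, seed=0):
    """Random walk over the symbols: usually step to another symbol in the
    current community, occasionally jump to a different community.
    (p_within and the uniform choices are assumptions for illustration.)"""
    rng = random.Random(seed)
    seq = [rng.randrange(15)]
    for _ in range(length - 1):
        cur = seq[-1]
        if rng.random() < p_within:
            nxt = rng.choice([s for s in COMMUNITIES[community_of(cur)] if s != cur])
        else:
            other = rng.choice([c for c in range(3) if c != community_of(cur)])
            nxt = rng.choice(COMMUNITIES[other])
        seq.append(nxt)
    return seq

def event_boundaries(seq):
    """Positions where the walk crosses between communities -- the points
    participants tended to mark as breaks between events."""
    return [i for i in range(1, len(seq))
            if community_of(seq[i]) != community_of(seq[i - 1])]

seq = generate_sequence(200)
print(f"{len(event_boundaries(seq))} community transitions in 200 steps")
```

Segmenting at community transitions reproduces, in caricature, the behavior reported: boundaries fall where the underlying community changes, even though no single step in the sequence is surprising on its own.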
The researchers then used functional magnetic resonance imaging to observe brain activity as participants viewed the symbol sequences. Images in the same community produced similar activity in neuron groups at the border of the brain’s frontal and temporal lobes, a region involved in processing meaning.
The researchers interpreted this activity as the brain associating the images with one another, and therefore as one event. At the same time, different neural groups activated when a symbol from a different community appeared, which was interpreted as a new event.
The researchers fashioned these data into a computational neural-network model that revealed the neural connection between what is being experienced and what has been learned. When a simulated stimulus is entered, the model can predict the next burst of neural activity throughout the network, from first observation to processing.
"The model allows us to articulate an explicit hypothesis about what kind of learning may be going on in the brain," Schapiro said. "It’s one thing to show a neural response and say that the brain must have changed to arrive at that state. To have a specific idea of how that change may have occurred could allow a deeper understanding of the mechanisms involved."
Michael Frank, a Brown University associate professor of cognitive, linguistic and psychological sciences, said that the Princeton researchers uniquely apply existing concepts of “similarity structure” used in such fields as semantics and artificial intelligence to provide evidence for their account of event perception. These concepts pertain to the ability to identify within large groups of data those subsets that share specific commonalities, said Frank, who is familiar with the research but had no role in it.
"The work capitalizes on well-grounded computational models of similarity structure and applies it to understanding how events and their boundaries are detected and represented," Frank said. "The authors noticed that the ability to represent items within an event as similar to each other — and thus different than those in ensuing events — might rely on similar machinery as that applied to detect clustering in community structures."
The model “naturally” lays out the process of shared temporal context in a way that is validated by work in other fields, yet distinct in relation to event perception, Frank said.
"The same types of models have been applied to understanding language — for example, how the meaning of words in a sentence can be contextualized by earlier words or concepts," Frank said. "Thus the model and experiments identify a common and previously unappreciated mechanism that can be applied to both language and event parsing, which are otherwise seemingly unrelated domains."

Spring cleaning in your brain: U-M stem cell research shows how important it is
Deep inside your brain, a legion of stem cells lies ready to turn into new brain and nerve cells whenever and wherever you need them most. While they wait, they keep themselves in a state of perpetual readiness – poised to become any type of nerve cell you might need as your cells age or get damaged.
Now, new research from scientists at the University of Michigan Medical School reveals a key way they do this: through a type of internal “spring cleaning” that both clears out garbage within the cells and keeps them in their stem-cell state.
In a paper published online in Nature Neuroscience, the U-M team shows that a particular protein, called FIP200, governs this cleaning process in neural stem cells in mice. Without FIP200, these crucial stem cells suffer damage from their own waste products — and their ability to turn into other types of cells diminishes.
It is the first time that this cellular self-cleaning process, called autophagy, has been shown to be important to neural stem cells.
The findings may help explain why aging brains and nervous systems are more prone to disease or permanent damage, as a slowing rate of self-cleaning autophagy hampers the body’s ability to deploy stem cells to replace damaged or diseased cells. If the findings translate from mice to humans, the research could open up new avenues to prevention or treatment of neurological conditions.
In a related review article just published online in the journal Autophagy, the lead U-M scientist and colleagues from around the world discuss the growing evidence that autophagy is crucial to many types of tissue stem cells and embryonic stem cells as well as cancer stem cells.
As stem cell-based treatments continue to develop, the authors say, it will be increasingly important to understand the role of autophagy in preserving stem cells’ health and ability to become different types of cells.
“The process of generating new neurons from neural stem cells, and the importance of that process, is pretty well understood, but the mechanism at the molecular level has not been clear,” says Jun-Lin Guan, Ph.D., the senior author of the FIP200 paper and the organizing author of the autophagy and stem cells review article. “Here, we show that autophagy is crucial for maintenance of neural stem cells and differentiation, and show the mechanism by which it happens.”
Through autophagy, he says, neural stem cells can regulate levels of reactive oxygen species (ROS) – sometimes known as free radicals – that can build up in the low-oxygen environment of the brain regions where neural stem cells reside. Abnormally high levels of ROS can cause neural stem cells to start differentiating.
Guan is a professor in the Molecular Medicine & Genetics division of the U-M Department of Internal Medicine, and in the Department of Cell & Developmental Biology.
A long path to discovery
The new discovery, made after 15 years of research with funding from the National Institutes of Health, shows the importance of investment in lab science – and the role of serendipity in research.
Guan has been studying the role of FIP200 – whose full name is focal adhesion kinase family interacting protein of 200 kD – in cellular biology for more than a decade. Though he and his team knew it was important to cellular activity, they didn’t have a particular disease connection in mind. Together with colleagues in Japan, they did demonstrate its importance to autophagy – a process whose relevance to disease research continues to grow as scientists learn more about it.
Several years ago, Guan’s team stumbled upon clues that FIP200 might be important in neural stem cells when studying an entirely different phenomenon. They were using FIP200-less mice as comparisons in a study, when an observant postdoctoral fellow noticed that the mice experienced rapid shrinkage of the brain regions where neural stem cells reside.
“That effect was more interesting than what we were actually intending to study,” says Guan, as it suggested that without FIP200, something was causing damage to the home of neural stem cells that normally replace nerve cells during injury or aging.
In 2010, they worked with other U-M scientists to show FIP200’s importance to another type of stem cell: those that generate blood cells. In that case, deleting the gene that encodes FIP200 led to increased proliferation and, ultimately, depletion of such cells, called hematopoietic stem cells.
But with neural stem cells, they report in the new paper, deleting the FIP200 gene caused the cells to die and ROS levels to rise. Only by giving the mice the antioxidant N-acetylcysteine could the scientists counteract the effects.
“It’s clear that autophagy is going to be important in various types of stem cells,” says Guan, pointing to the new paper in Autophagy that lays out what’s currently known about the process in hematopoietic, neural, cancer, cardiac and mesenchymal (bone and connective tissue) stem cells.
Guan’s own research is now exploring the downstream effects of defects in neural stem cell autophagy – for instance, how communication between neural stem cells and their niches suffers. The team is also looking at the role of autophagy in breast cancer stem cells, because of intriguing findings about the impact of FIP200 deletion on the activity of the p53 tumor suppressor gene, which is important in breast and other types of cancer. In addition, they will study the importance of p53 and p62, another key protein component for autophagy, to neural stem cell self-renewal and differentiation, in relation to FIP200.