Posts tagged CLARITY

Seeing the inner workings of the brain made easier by new technique
Last year Karl Deisseroth, a Stanford professor of bioengineering and of psychiatry and behavioral sciences, announced a new way of peering into a brain – removed from the body – that provided spectacular fly-through views of its inner connections. Since then laboratories around the world have begun using the technique, called CLARITY, with some success, to better understand the brain’s wiring.
However, Deisseroth said that with two technological fixes CLARITY could be even more broadly adopted. The first problem was that laboratories were not set up to reliably carry out the CLARITY process. Second, the most commonly available microscopy methods were not designed to image the whole transparent brain. “There have been a number of remarkable results described using CLARITY,” Deisseroth said, “but we needed to address these two distinct challenges to make the technology easier to use.”
In a Nature Protocols paper published June 19, Deisseroth presented solutions to both of those bottlenecks. “These transform CLARITY, making the overall process much easier and the data collection much faster,” he said. He and his co-authors, including postdoctoral fellows Raju Tomer and Li Ye and graduate student Brian Hsueh, anticipate that even more scientists will now be able to take advantage of the technique to better understand the brain at a fundamental level, and also to probe the origins of brain diseases.
This paper may be the first to be published with support of the White House BRAIN Initiative, announced last year with the ambitious goal of mapping the brain’s trillions of nerve connections and understanding how signals zip through those interconnected cells to control our thoughts, memories, movement and everything else that makes us us.
"This work shares the spirit of the BRAIN Initiative goal of building new technologies to understand the brain – including the human brain," said Deisseroth, who is also a Stanford Bio-X affiliated faculty member.
Eliminating fat
When you look at the brain, what you see is the fatty outer covering of the nerve cells within, which blocks microscopes from taking images of the intricate connections between deep brain cells. The idea behind CLARITY was to eliminate that fatty covering while keeping the brain intact, complete with all its intricate inner wiring.
The way Deisseroth and his team eliminated the fat was to build a gel within the intact brain that held all the structures and proteins in place. They then used an electric field to pull out the fat layer that had been dissolved in an electrically charged detergent, leaving behind all the brain’s structures embedded in the firm water-based gel, or hydrogel. This is called electrophoretic CLARITY.
The electric field aspect was a challenge for some labs. “About half the people who tried it got it working right away,” Deisseroth said, “but others had problems with the voltage damaging tissue.” Deisseroth said that this kind of challenge is normal when introducing new technologies. When he first introduced optogenetics, which allows scientists to control individual nerves using light, a similar proportion of labs were not initially set up to easily implement the new technology, and ran into challenges.
To help expand the use of CLARITY, the team devised an alternate way of pulling out the fat from the hydrogel-embedded brain – a technique they call passive CLARITY. It takes a little longer, but still removes all the fat, is much easier and does not pose a risk to the tissue. “Electrophoretic CLARITY is important for cases where speed is critical, and for some tissues,” said Deisseroth, who is also the D.H. Chen Professor. “But passive CLARITY is a crucial advance for the community, especially for neuroscience.” Passive CLARITY requires nothing more than some chemicals, a warm bath and time.
Many groups have begun to apply CLARITY to probe brains donated from people who had diseases like epilepsy or autism, which might have left clues in the brain to help scientists understand and eventually treat the disease. But scientists, including Deisseroth, had been wary of trying electrophoretic CLARITY on these valuable clinical samples, given even a very low risk of damage. “It’s a rare and precious donated sample; you don’t want to have a chance of damage or error,” Deisseroth said. “Now the risk issue is addressed, and on top of that you can get the data very rapidly.”
Fast CLARITY imaging in color
The second advance had to do with this rapidity of data collection. In studying any cells, scientists often make use of probes that will go into the cell or tissue, latch onto a particular molecule, then glow green, blue, yellow or other colors in response to particular wavelengths of light. This is what produces the colorful cellular images that are so common in biology research. Using CLARITY, these colorful structures become visible throughout the entire brain, since no fat remains to block the light.
But here’s the hitch. Those probes stop working, or get bleached, after they’ve been exposed to too much light. That’s fine if a scientist is just taking a picture of a small cellular structure, which takes little time. But to get a high-resolution image of an entire brain, the whole tissue is bathed in light throughout the time it takes to image it point by point. This approach bleaches out the probes before the entire brain can be imaged at high resolution.
The second advance of the new paper addresses this issue, making it easier to image the entire brain without bleaching the probes. “We can now scan an entire plane at one time instead of a point,” Deisseroth said. “That buys you a couple orders of magnitude of time, and also efficiently delivers light only to where the imaging is happening.” The technique is called light sheet microscopy and has been around for a while, but previously didn’t have high enough resolution to see the fine details of cellular structures. “We advanced traditional light sheet microscopy for CLARITY, and can now see fine wiring structures deep within an intact adult brain,” Deisseroth said. His lab built their own microscope, but the procedures are described in the paper, and the key components are commercially available. Additionally, Deisseroth’s lab provides free training courses in CLARITY, modeled after his optogenetics courses, to help disseminate the techniques.
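The claim that plane-at-a-time imaging "buys you a couple orders of magnitude of time" can be illustrated with rough arithmetic. The sketch below is a minimal back-of-the-envelope comparison; the voxel counts, dwell time and camera exposure are hypothetical values chosen only to show the scaling, not measurements from the paper.

```python
# Back-of-the-envelope comparison of point-scanning vs. light sheet imaging.
# All numbers are illustrative assumptions, not figures from the paper.

voxels_x, voxels_y, voxels_z = 2000, 2000, 1000  # hypothetical whole-brain volume
dwell_time_s = 1e-6                              # hypothetical per-voxel dwell time

# Point scanning exposes and records one voxel at a time,
# so total time scales with the full voxel count.
point_scan_s = voxels_x * voxels_y * voxels_z * dwell_time_s

# Light sheet microscopy illuminates and captures a whole x-y plane per
# camera exposure, so acquisition scales with the number of planes instead.
exposure_s = 0.02                                # hypothetical exposure per plane
light_sheet_s = voxels_z * exposure_s

print(f"point scan:  {point_scan_s / 3600:.1f} h")
print(f"light sheet: {light_sheet_s / 60:.1f} min")
print(f"speedup:     ~{point_scan_s / light_sheet_s:.0f}x")
```

Under these assumed numbers the plane-wise approach is a few hundred times faster, consistent with the "couple orders of magnitude" quoted above; the exact factor depends entirely on the chosen dwell and exposure times.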
Brain imaging to help soldiers
The BRAIN Initiative is being funded through several government agencies including the Defense Advanced Research Projects Agency (DARPA), which funded Deisseroth’s work through its new Neuro-FAST program. Deisseroth said that like the National Institute of Mental Health (NIMH, another major funder of the new paper), DARPA “is interested in deepening our understanding of brain circuits in intact and injured brains to inform the development of better therapies.” The new methods Deisseroth and his team developed will accelerate both human- and animal-model CLARITY; as CLARITY becomes more widely used, it will continue to help reveal how those inner circuits are structured in normal and diseased brains, and perhaps point to possible therapies.
Other arms of the BRAIN Initiative are funded through the National Science Foundation (NSF) and the National Institutes of Health (NIH). A working group for the NIH arm was co-led by William Newsome, professor of neurobiology and director of the Stanford Neurosciences Institute, and also included Deisseroth and Mark Schnitzer, associate professor of biology and of applied physics. That group recently recommended a $4.5 billion investment in the BRAIN Initiative over the next 12 years, which NIH Director Francis Collins approved earlier this month.
In addition to funding by DARPA and NIMH, the work was funded by the NSF, the National Institute on Drug Abuse, the Simons Foundation and the Wiegers Family Fund.
This week over 150 neuroscientists were invited to meet in Arlington, Virginia, to discuss the finer points of President Obama’s recently announced BRAIN Initiative. Rather than discuss funding particulars, each participant was given the chance to broadly declare what they thought needed to be done in neuroscience. At least 75 of the participants initially responded to a request for a short white paper outlining the major obstacles currently impeding neuroscience research. A live webcast of some of the key talks was available, although many of the smaller workshops were held in private. Fortunately, updates on the content discussed at these workshops were posted live to Twitter under the handle @openconnectome. This precipitated lively discussion, primarily under the hashtags #nsfBRAINmtg and #braini, and provided a way for a larger audience to be involved.
The working title of this inaugural NSF meeting was Physical and Mathematical Principles of Brain Structure and Function. In actuality, there was little discussion of any such principles, and for good reason: none have been shown to exist. Even more concerning, only a few have ever been proposed. Simplistic scaling laws dealing with connectivity, particularly within sensory systems or the cortex, have been suggested in the past. Generally they seek to account for only one or two structural parameters at a time, such as axon diameter and branching order. Typically, the chosen parameters are considered only in the context of optimizing a single physical variable, such as electrotonic function. While these efforts are a start, they usually do not garner much attention from the larger neuroscience community.
The early days of neuroscience were marked by the assertion of many principles and laws. They served well to focus ideas, but over time they lost much of their original perceived generality. For example, concepts like one transmitter type per neuron, and no new neurons in adult brains, later proved to have significant exceptions. The early breakthrough days of neuroscience have now given way to a grant system that stifles imagination and, by its competitiveness, encourages fraud. Many of the speakers at the BRAIN Initiative meeting called for new tools and theories, but in most cases they offered little. Instead of expanding the range of acceptable pursuits, their vision appears to have imploded inward, with calls for increased rigor, statistical power, diversity of animal models, experimental falsifiability, and most of all, data, on an increasingly limited range of ideas.
Much discussion was devoted to the resolution at which connectivity and activity maps should be detailed. Similar points were made about the need to develop electrode arrays of higher density and durability to more accurately record function. The ample discussion of an ideal animal model was punctuated by the notable advances made this year in whole-brain recordings from zebrafish, and by the large-scale connectivity mapping now possible in small mammals with the new CLARITY transparent-brain techniques. The general lack of agreement on a clear path forward as to which organisms among many are ideal was noted by representatives from several funding bodies who spoke at the meeting. Highlighting points made earlier in a talk by George Whitesides, they stressed the need to come forward with a concrete plan that is comprehensible not only to the funding organizations but to the larger public as well.
Many discussions focused on brain mechanisms, such as how many neurons might contribute to a particular function. One participant, David Kleinfeld, called for a study of how many neurons are involved in communication at different scales. He also stressed the importance of looking at basic systems involving feedback, such as the brain stem and spinal cord, and their dynamic interaction with muscle. Michael Stryker observed that the goal should not be recording from the most neurons and storing the most data, but rather finding the right neurons.
While it was not explicitly stated, much of the talk pointed to the conclusion that the questions we have will not be answered with animal studies. Knowing what a neuron does is itself an ill-posed question. In worms and flies, where the inputs and outputs of single neurons can be mapped to static sensory and motor functions in the real world, we might know what a neuron does. In larger, human brains, however, we can ask an even better question: what does the neuron feel like? In most cases that answer will likely be nothing.
If, however, in a given human brain, a single neuron critically poised within that brain’s structural hierarchy can be stimulated to observable effect, some measure of its function has been gained. That effect might be a simple itch or twitch. Less plausibly, perhaps, it could be seeing a picture of a face undergo a change, sensing fear, or even imagining your grandmother. If that turns out not to be possible for most single neurons, we already know that we can find some minimal group of neurons whose stimulation has uniquely perceivable effects.
While understanding the brain on different scales is important, the most rewarding endeavors likely exist where functionality can be correlated across those scales. Behavior at the scale of the organism within a given environment is readily observable. At the next scale down, the behavior of neurons, witnessed through their spikes and structural alterations, is only partly observable at present. Below the scale of the neuron, mitochondria and other organelles move with a purpose, and in a relation to the activity of the neuron, that has only been imagined but is experimentally addressable.
Several speakers also mentioned the idea of a neural code. Spikes are a convenient metric for assessing brain activity, and we should seek to correlate their occurrence with behaviors on the various scales mentioned above. They are a universal and non-local currency, among others in the brain, that inflates rapidly with stimulation and arousal. Unfortunately, the most logical conclusion for us must be that there is no code for spikes. Anyone attempting to observe and record a code for one neuron would probably find that it has, in short order, become unrecognizable, particularly in the context of the next. There are, however, constraints on spikes and on neurons, and while the word was mentioned repeatedly at the meeting, none were detailed in depth.
To formulate constraints on a system at a level we don’t understand, we might look at constraints on other systems we know something about. Neurons are neither wholly like ants nor like trees, but share some aspects of both. Similarly, brains are like neither ant colonies nor forests, but share some features of both. The most obvious constraint that comes to mind, and one that applies to these systems at every level, is energy. A subtle refinement of that is the concept of entropy generation. One key idea is that entropy generation at different scales, while proceeding according to as-yet-undetermined laws, need not maximize entropy at each point in time, but rather along paths through time.
A voice heard throughout the conference was that of Bill Bialek, who observed that attempts to apply the laws of statistical mechanics to aspects of brain function are not very productive because the brain is not at equilibrium. That would perhaps have been a good sentence to begin the conference with rather than end it. Hopefully, the next NSF meeting will be a little more transparent to the public than the first. A more thorough webcast, uploaded to a media channel, would be welcomed by the many who would like to participate, as would a path for two-way communication on the issues. Mention should also be made of the efforts of a few neuroscientists peripheral to the BRAIN Initiative who have been maintaining important blog discussions and metablog publication lists to track the progress made over the last few months. This morning, NIH announced that a new website has been set up to provide additional public feedback.
(Source: medicalxpress.com)
See-through brains clarify connections
Technique to make tissue transparent offers three-dimensional view of neural networks.
A chemical treatment that turns whole organs transparent offers a big boost to the field of ‘connectomics’ — the push to map the brain’s fiendishly complicated wiring. Scientists could use the technique to view large networks of neurons with unprecedented ease and accuracy. The technology also opens up new research avenues for old brains that were saved from patients and healthy donors.
“This is probably one of the most important advances for doing neuroanatomy in decades,” says Thomas Insel, director of the US National Institute of Mental Health in Bethesda, Maryland, which funded part of the work. Existing technology allows scientists to see neurons and their connections in microscopic detail — but only across tiny slivers of tissue. Researchers must reconstruct three-dimensional data from images of these thin slices. Aligning hundreds or even thousands of these snapshots to map long-range projections of nerve cells is laborious and error-prone, rendering fine-grain analysis of whole brains practically impossible.
The new method instead allows researchers to see directly into optically transparent whole brains or thick blocks of brain tissue. Called CLARITY, it was devised by Karl Deisseroth and his team at Stanford University in California. “You can get right down to the fine structure of the system while not losing the big picture,” says Deisseroth, who adds that his group is in the process of rendering an entire human brain transparent.
The technique, published online in Nature on 10 April, turns the brain transparent using the detergent SDS, which strips away lipids that normally block the passage of light. Other groups have tried to clarify brains in the past, but many lipid-extraction techniques dissolve proteins and thus make it harder to identify different types of neurons. Deisseroth’s group solved this problem by first infusing the brain with acrylamide, which binds proteins, nucleic acids and other biomolecules. When the acrylamide is heated, it polymerizes and forms a tissue-wide mesh that secures the molecules. The resulting brain–hydrogel hybrid showed only 8% protein loss after lipid extraction, compared to 41% with existing methods.
Applying CLARITY to whole mouse brains, the researchers viewed fluorescently labelled neurons in areas ranging from outer layers of the cortex to deep structures such as the thalamus. They also traced individual nerve fibres through 0.5-millimetre-thick slabs of formalin-preserved autopsied human brain — orders of magnitude thicker than slices currently imaged.
“The work is spectacular. The results are unlike anything else in the field,” says Van Wedeen, a neuroscientist at the Massachusetts General Hospital in Boston and a lead investigator on the US National Institutes of Health’s Human Connectome Project (HCP), which aims to chart the brain’s neuronal communication networks. The new technique, he says, could reveal important cellular details that would complement data on large-scale neuronal pathways that he and his colleagues are mapping in the HCP’s 1,200 healthy participants using magnetic resonance imaging.
Francine Benes, director of the Harvard Brain Tissue Resource Center at McLean Hospital in Belmont, Massachusetts, says that more tests are needed to assess whether the lipid-clearing treatment alters or damages the fundamental structure of brain tissue. But she and others predict that CLARITY will pave the way for studies on healthy brain wiring, and on brain disorders and ageing.
Researchers could, for example, compare circuitry in banked tissue from people with neurological diseases and from controls whose brains were healthy. Such studies in living people are impossible, because most neuron-tracing methods require genetic engineering or injection of dye in living animals. Scientists might also revisit the many specimens in repositories that have been difficult to analyse because human brains are so large.
The hydrogel–tissue hybrid formed by CLARITY — stiffer and more chemically stable than untreated tissue — might also turn delicate and rare disease specimens into reusable resources, Deisseroth says. One could, in effect, create a library of brains that different researchers check out, study and then return.