Posts tagged brain mapping

UI study documents the illness’s effect on brain tissue
It’s hard to fully understand a mental disease like schizophrenia without peering into the human brain. Now, a study by University of Iowa psychiatry professor Nancy Andreasen uses brain scans to document how schizophrenia impacts brain tissue as well as the effects of anti-psychotic drugs on those who have relapses.
Andreasen’s study, published in the American Journal of Psychiatry, documented brain changes seen in MRI scans from more than 200 patients beginning with their first episode and continuing with scans at regular intervals for up to 15 years. The study is considered the largest longitudinal, brain-scan data set ever compiled, Andreasen says.
Schizophrenia affects roughly 3.5 million people, or about one percent of the U.S. population, according to the National Institutes of Health. Globally, some 24 million are affected, according to the World Health Organization.
The scans showed that people at their first episode had less brain tissue than healthy individuals. The findings suggest that those who have schizophrenia are being affected by something before they show outward signs of the disease.

“There are several studies, mine included, that show people with schizophrenia have smaller-than-average cranial size,” explains Andreasen, whose appointment is in the Carver College of Medicine. “Since cranial development is completed within the first few years of life, there may be some aspect of earliest development—perhaps things such as pregnancy complications or exposure to viruses—that on average, affected people with schizophrenia.”
Andreasen’s team learned from the brain scans that those affected with schizophrenia suffered the most brain tissue loss in the two years after the first episode, but then the damage curiously plateaued—to the group’s surprise. The finding may help doctors identify the most effective time periods to prevent tissue loss and other negative effects of the illness, Andreasen says.
The researchers also analyzed the effect of medication on the brain tissue. Although results were not the same for every patient, the group found that in general, the higher the anti-psychotic medication doses, the greater the loss of brain tissue.
“This was a very upsetting finding,” Andreasen says. “We spent a couple of years analyzing the data more or less hoping we had made a mistake. But in the end, it was a solid finding that wasn’t going to go away, so we decided to go ahead and publish it. The impact is painful because psychiatrists, patients, and family members don’t know how to interpret this finding. ‘Should we stop using antipsychotic medication? Should we be using less?’”
The group also examined how relapses could affect brain tissue, including whether long periods of psychosis could be toxic to the brain. The results suggest that longer relapses were associated with brain tissue loss.
The insight could change how physicians use anti-psychotic drugs to treat schizophrenia, with the view that those with the disorder can lead productive lives with the right balance of care.
“We used to have hundreds of thousands of people chronically hospitalized. Now, most are living in the community, and this is thanks to the medications we have,” Andreasen notes. “But antipsychotic treatment has a negative impact on the brain, so … we must get the word out that they should be used with great care, because even though they have fewer side effects than some of the other medications we use, they are certainly not trouble free and can have lifelong consequences for the health and happiness of the people and families we serve.”
(Source: now.uiowa.edu)
Do glial connectomes and activity maps make any sense?
"If all you have is a hammer, everything looks like a nail." This so-called "law of the instrument" has shaped neuroscience to core. It can be rephrased as, if all you have a fancy voltmeter, everything looks like a transient electrical event. No one in the field understands this more Douglass Fields, an NIH researcher who has re-written every neuroscience dogma he has turned his scrupulous eye to. In a paper published yesterday in Nature, Fields questions the conventional wisdom that informs recent efforts to map the brain’s connectivity, and ultimately, its electrical activity. In particular, he questions the value of making detailed maps of neurons, while at the same time neglecting the more abundant, and equally complex “maps” that exist for glia.
When first discovered, the “action potential” generated by a neuron was a rich and multiphysical event. It has since degenerated into a sterile, directionally-rectified electrical blip, whose only interesting parameter is a millisecond-scrutinized timestamp. In the last two years alone, Fields has re-generalized the spike. Having highlighted many of the fine scale physical events that accompany a neuron’s firing, like temperature and volume changes, optical effects, displacement, and myriad nonsynaptic effects, Fields demonstrated the intimate knitting of reverse propagating spikes into the behavior and function of neuronal networks. He also showed how spikes directly control non-neuronal events, in particular, myelination.
The Eyewire project at MIT is a fantastic effort to create detailed neuronal maps—it expands neuroscience to the larger community, and generates much worthwhile scientific spin-off. It is also completely absurd. To have so much talk about brain maps without drawing a clear distinction between the value of white matter maps and grey matter maps is telling. Maps of the white matter will be indispensable to understanding our own brains. They are highly personal, yet at the same time will be one of the most valuable things we might soon come to share. For the moment, we can liken them to the subway or transportation map of a complex city.
To try to map the grey matter, at least in our foreseeable era, is to attempt to record the comings and goings of all the people entering and exiting the doors of the trains of our subway system. Not only is the task infinitely harder; pound for pound, it is also less valuable, and impermanent. Looked at another way, if we imagine some hyper-detailed ecologist mapping the different trees in a forest, one valuable piece of information to have would be the tree species or type. Their age, size, density and distribution would similarly be worthwhile parameters. Some detail about their finer structure might also predict what kinds of animal species live and move about their arbors. Eyewire, on the other hand, is mapping every twig down to its finest termination as a leaf. The problem is that leaves are shed and regenerated anew each year, and while Eyewire might map a few neurons in that time, synapses morph to a faster drum.
The point of Fields’ article is that glial trees have exactly the same level of detail and importance as neural trees, yet they are ignored in the aspirations of the connectomists. In fact, if neurons are like deciduous trees, with long, unpredictable, idiosyncratic and interlinked branches, then glial cells, particularly astrocytes, are very much like conifers—they rigidly span nonoverlapping domains in the grey matter, in prototypical, scaffolded form, and with frequently symmetric, repeatable structure. If we accept the results of neuroanatomy at face value here, grey matter might be imagined more like an astrocytic Christmas tree farm superimposed on a neural rainforest. Stepping back, if given a choice between a grey matter connectome and a white matter myelome, the latter is undoubtedly where the focus should be for now.
It may be a misstep in our study of glial cells to narrow-mindedly attempt to define for them only that which has already been defined for neurons. The literature consists largely of a reattribution of transmitter or other chemical mechanisms of neurons to glia. The qualifier here is that the speed of these processes—their electricality, directionality and extreme spatial aspect—is not a general feature of glial cells. For glial cells, new mechanisms need to be explored, and perhaps the most obvious among them is that many of them, particularly the microglial cells, like to move.
It is increasingly appreciated nowadays that much of the 10 or so watts attributed to the brain for its power budget is purposed for things other than sending spikes and maintaining static electrical potentials. In the home, we can save on energy by dimming the lights, but to really make a dent, we need to turn off the things that move—things like fans, or the pumps in the HVAC systems. Much of the actual flow and motion inside the cerebral hive is transduced through glial cells. Undoubtedly axons drag diluent down their extent as they transport organelles across improbable expanses, and expel pressurized boluses of irritant (there may in fact be much to be said for an analogy with leaves powering fluid conduction in trees through local evaporation). It is, however, the glial cells that seem to be the heavy lifters involved in flow. Transducing hand-picked intracellular flow, and bulk extracellular flow sourced from the vasculature to neurons, they complete the so-called glymphatic circuit.
To be strict, perhaps we need to refigure this estimate of 10 watts, expanding it to include non-chemical sources, like the input of hydraulic power into the brain via the heart. If, for example, the brain consumes 20% of the flow from the heart, it also dissipates around 20% of the 100 or more watts of power generated by the heart. That should in fact be a significant contribution. By some estimates, we may have around 100,000 miles of myelinated axons in our brains, all surrounded by glial cells. Similarly, we may have the same amount, 100,000 miles, of capillary in the brain, all surrounded by astrocytic endfeet. Considering the scale of these numbers, it may be useful to start to look at the brain as more of a fluid-transporting machine, as opposed to mainly an electrical device.
The evidence is fairly clear that at the sensory and motor levels, spikes conduct much of the information about a stimulus or movement, particularly the short time scale components of that information. In moving more centrally from both sensory and motor ends, spikes tend to unhinge from real world metrics. If we are not careful to consider what neurons might actually be doing at a more global, physiologic level when they generate and propagate spikes, we may find that while we believe we are recording signals, we are actually just recording the noise of the pumps.

Brain’s flexible hub network helps humans adapt
Switching stations route processing of novel cognitive tasks
One thing that sets humans apart from other animals is our ability to intelligently and rapidly adapt to a wide variety of new challenges — using skills learned in much different contexts to inform and guide the handling of any new task at hand.
Now, research from Washington University in St. Louis offers new and compelling evidence that a well-connected core brain network based in the lateral prefrontal cortex and the posterior parietal cortex — parts of the brain most changed evolutionarily since our common ancestor with chimpanzees — contains “flexible hubs” that coordinate the brain’s responses to novel cognitive challenges.
Acting as a central switching station for cognitive processing, this fronto-parietal brain network funnels incoming task instructions to those brain regions most adept at handling the cognitive task at hand, coordinating the transfer of information among processing brain regions to facilitate the rapid learning of new skills, the study finds.
“Flexible hubs are brain regions that coordinate activity throughout the brain to implement tasks — like a large Internet traffic router,” suggests Michael Cole, PhD, a postdoctoral research associate in psychology at Washington University and lead author of the study published July 29 in the journal Nature Neuroscience.
“Like an Internet router, flexible hubs shift which networks they communicate with based on instructions for the task at hand and can do so even for tasks never performed before,” he adds.
Decades of brain research have built a consensus understanding of the brain as an interconnected network of as many as 300 distinct regional brain structures, each with its own specialized cognitive functions.
Binding these processing areas together is a web of about a dozen major networks, each serving as the brain’s means for implementing distinct task functions — such as auditory, visual, tactile, memory, attention and motor processes.
It was already known that fronto-parietal brain regions form a network that is most active during novel or non-routine tasks, but it was unknown how this network’s activity might help implement tasks.
This study proposes and provides strong evidence for a “flexible hub” theory of brain function in which the fronto-parietal network is composed of flexible hubs that help to organize and coordinate processing among the other specialized networks.
This study provides strong support for the flexible hub theory in two key areas.
First, the study yielded new evidence that when novel tasks are processed, flexible hubs within the fronto-parietal network make multiple, rapidly shifting connections with specialized processing areas scattered throughout the brain.
Second, by closely analyzing activity patterns as the flexible hubs connect with various brain regions during the processing of specific tasks, researchers determined that these connection patterns include telltale characteristics that can be decoded and used to identify which specific task is being implemented by the brain.
These unique patterns of connection — like the distinct strand patterns of a spider web — appear to be the brain’s mechanism for the coding and transfer of specific processing skills, the study suggests.
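To make the decoding idea concrete, here is a minimal, hypothetical sketch (not the authors' actual analysis): each task is given its own simulated connectivity "fingerprint," trials are noisy samples around it, and a plain classifier is asked to recover which task produced each pattern. The array sizes, noise level, and use of scikit-learn are assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_tasks, n_trials, n_connections = 4, 50, 120   # hypothetical sizes

# Each task gets a characteristic connectivity "fingerprint";
# individual trials are noisy samples around that fingerprint.
fingerprints = rng.normal(size=(n_tasks, n_connections))
X = np.vstack([fp + 0.8 * rng.normal(size=(n_trials, n_connections))
               for fp in fingerprints])
y = np.repeat(np.arange(n_tasks), n_trials)

# If connectivity patterns really carry task information, a simple classifier
# should identify the task well above the 25% chance level.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print("decoding accuracy: %.2f (chance = %.2f)" % (scores.mean(), 1 / n_tasks))
```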
By tracking where and when these unique connection patterns occur in the brain, researchers were able to document flexible hubs’ role in shifting previously learned and practiced problem-solving skills and protocols to novel task performance. Known as compositional coding, the process allows skills learned in one context to be re-packaged and re-used in other applications, thus shortening the learning curve for novel tasks.
What’s more, by tracking the testing performance of individual study participants, the team demonstrated that the transfer of these processing skills helped participants speed their mastery of novel tasks, essentially using previously practiced processing tricks to get up to speed much more quickly for similar challenges in a novel setting.
“The flexible hub theory suggests this is possible because flexible hubs build up a repertoire of task component connectivity patterns that are highly practiced and can be reused in novel combinations in situations requiring high adaptivity,” Cole explains.
“It’s as if a conductor practiced short sound sequences with each section of an orchestra separately, then on the day of the performance began gesturing to some sections to play back what they learned, creating a new song that has never been played or heard before.”
By improving our understanding of cognitive processes behind the brain’s handling of novel situations, the flexible hub theory may one day help us improve the way we respond to the challenges of everyday life, such as when learning to use new technology, Cole suggests.
“Additionally, there is evidence building that flexible hubs in the fronto-parietal network are compromised for individuals suffering from a variety of mental disorders, reducing the ability to effectively self-regulate and therefore exacerbating symptoms,” he says.
Future research may provide the means to enhance flexible hubs in ways that would allow people to increase self-regulation and reduce symptoms in a variety of mental disorders, such as depression, schizophrenia and obsessive-compulsive disorder.
NIH-funded scientists show new genetically engineered proteins may be important tool for the President’s BRAIN Initiative

Scientists used fruit flies to show for the first time that a new class of genetically engineered proteins can be used to watch electrical activity in individual brain cells in live brains. The results, published in Cell, suggest these proteins may be a promising new tool for mapping brain cell activity in multiple animals and for studying how neurological disorders disrupt normal nerve cell signaling. Understanding brain cell activity is a high priority of the President’s Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative.
Brain cells use electricity to control thoughts, movements and senses. Ever since the late eighteenth century, when Dr. Luigi Galvani induced frog legs to move with electric shocks, scientists have been trying to watch nerve cell electricity to understand how it is involved in these actions. Usually they directly monitor electricity with cumbersome electrodes or toxic voltage-sensitive dyes, or indirectly with calcium detectors. This study, led by Michael Nitabach, Ph.D., J.D., and Vincent Pieribone, Ph.D., at the Yale School of Medicine, New Haven, CT, shows that a class of proteins, called genetically encoded fluorescent voltage indicators (GEVIs), may allow researchers to watch nerve cell electricity in a live animal.
Dr. Pieribone and his colleagues helped develop ArcLight, the protein used in this study. ArcLight fluoresces, or glows, as a nerve cell’s voltage changes and enables researchers to watch, in real time, the cell’s electrical activity. In this study, Dr. Nitabach and his colleagues engineered fruit flies to express ArcLight in brain cells that control the fly’s sleeping cycle or sense of smell. Initial experiments in which the researchers simultaneously watched brain cell electricity with a microscope and recorded voltage with electrodes showed that ArcLight can accurately monitor electricity in a living brain. Further experiments showed that ArcLight illuminated electricity in parts of the brain that were previously inaccessible using other techniques. Finally, ArcLight allowed the researchers to watch brain cells spark and fire while the flies were awakening and smelling. These results suggest that in the future neuroscientists may be able to use ArcLight and similar GEVIs in a variety of ways to map brain cell circuit activity during normal and disease states.
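As a rough, purely illustrative sketch of the validation logic described above (comparing a simultaneous electrode recording against the optical readout), the following Python snippet simulates an idealized spike train, derives a slower and noisier ArcLight-like fluorescence trace from it, and reports the correlation between the two. The kinetics, noise level, and sampling rate are invented for the example; this is not the study's analysis code.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 2.0, 0.001)                  # 2 s of data at 1 kHz
voltage = np.zeros_like(t)
spike_idx = rng.choice(t.size, size=20, replace=False)
voltage[spike_idx] = 1.0                      # idealized electrode-detected spikes

# ArcLight-like readout: fluorescence follows the voltage but is slower and
# noisier (a made-up exponential kernel plus Gaussian noise).
kernel = np.exp(-np.arange(0, 0.05, 0.001) / 0.01)
fluorescence = np.convolve(voltage, kernel)[:t.size]
fluorescence += 0.05 * rng.normal(size=t.size)

# Correlating the optical trace with the electrode trace quantifies how
# faithfully the indicator tracks the electrical events.
r = np.corrcoef(voltage, fluorescence)[0, 1]
print("correlation between electrode and optical trace: %.2f" % r)
```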
(Source: ninds.nih.gov)
Artificial Intelligence Is the Most Important Technology of the Future
Artificial Intelligence is a set of tools that are driving forward key parts of the futurist agenda, sometimes at a rapid clip. The last few years have seen a slew of surprising advances: the IBM supercomputer Watson, which beat two champions of Jeopardy!; self-driving cars that have logged over 300,000 accident-free miles and are officially legal in three states; and statistical learning techniques that conduct pattern recognition on complex data sets, from consumer interests to trillions of images. In this post, I’ll bring you up to speed on what is happening in AI today, and talk about potential future applications.
Any brief overview of AI will be necessarily incomplete, but I’ll be describing a few of the most exciting items.
The key applications of Artificial Intelligence are in any area that involves more data than humans can handle on their own, but which involves decisions simple enough that an AI can get somewhere with them. Big data, lots of little rote operations that add up to something useful. An example is image recognition: by doing rigorous, repetitive, low-level calculations on image features, we now have services like Google Goggles, where you take an image of something, say a landmark, and Google tries to recognize what it is. Services like these are the first stirrings of Augmented Reality (AR).
It’s easy to see how this kind of image recognition can be applied to repetitive tasks in biological research. One such difficult task is in brain mapping, an area that underlies dozens of transhumanist goals. The leader in this area is Sebastian Seung at MIT, who develops software to automatically determine the shape of neurons and locate synapses. Seung developed a fundamentally new kind of computer vision for automating work towards building connectomes, which detail the connections between all neurons. These are a key step to building computers that simulate the human brain.
As an example of how difficult it is to build a connectome without AI, consider the case of the roundworm C. elegans, the only completed connectome to date. Although electron microscopy was used to exhaustively map the brain of this worm in the 1970s and 80s, it took more than a decade of work to piece this data into a full map of the worm’s brain. This is despite that brain containing just 7,000 connections between 302 neurons. By comparison, the human brain contains 100 trillion connections between 100 billion neurons. Without sophisticated AI, mapping it will be hopeless.
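A quick back-of-the-envelope calculation, using the round numbers quoted above, makes the scale problem explicit; the ten-year figure for manual tracing is taken from the paragraph itself and everything else is simple arithmetic.

```python
# Back-of-the-envelope arithmetic with the round numbers quoted above.
worm_connections = 7_000
human_connections = 100e12                  # 100 trillion

print("human/worm connection ratio: %.1e" % (human_connections / worm_connections))

# Manual tracing of ~7,000 connections took on the order of a decade; the same
# per-connection pace applied to a human brain is plainly hopeless, which is
# the argument for automated, AI-assisted reconstruction.
years_per_connection = 10 / worm_connections
print("naive extrapolation: %.1e years" % (years_per_connection * human_connections))
```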
There’s another closely related area that depends on AI to make progress: cognitive prostheses. These are brain implants that can perform the role of a part of the brain that has been damaged. Imagine a prosthesis that restores crucial memories to Alzheimer’s patients. The feasibility of a prosthesis of the hippocampus, part of the brain responsible for memory, was proven recently by Theodore Berger at the University of Southern California. A rat with its hippocampus chemically disabled was able to form new memories with the aid of an implant.
The way these implants are built is by carefully recording the neural signals of the brain and making a device that mimics the way they work. The device itself uses an artificial neural network, which Berger calls a High-density Hippocampal Neuron Network Processor. Painstaking observation of the brain region in question is needed to build a model detailed enough to stand in for the original. Without neural network techniques (a subcategory of AI) and abundant computing power, this approach would never work.
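The toy example below only illustrates the general idea of fitting an input-to-output mapping from recordings; it is not Berger's actual device or model. A small scikit-learn network is trained on made-up data in which a simulated output population responds nonlinearly to a simulated input population, standing in for the transformation the intact circuit performs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n_samples, n_inputs, n_outputs = 500, 32, 16   # made-up recording sizes

# Simulated "recordings": the output population's activity is a noisy,
# nonlinear function of the input population's activity.
X = rng.random((n_samples, n_inputs))
W = rng.normal(size=(n_inputs, n_outputs))
Y = np.tanh(X @ W) + 0.05 * rng.normal(size=(n_samples, n_outputs))

# A small neural network stands in for the transformation the intact
# circuit performs between its input and output populations.
model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=3000, random_state=0)
model.fit(X[:400], Y[:400])
print("held-out R^2:", round(model.score(X[400:], Y[400:]), 2))
```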
Bringing the overview back to more everyday tech, consider all the AI that will be required to make the vision of Augmented Reality mature. AR, as exemplified by Google Glass, uses computer glasses to overlay graphics on the real world. For the tech to work, it needs to quickly analyze what the viewer is seeing and generate graphics that provide useful information. To be useful, the glasses have to be able to identify complex objects from any direction, under any lighting conditions, no matter the weather. To be useful to a driver, for instance, the glasses would need to identify roads and landmarks faster and more effectively than is enabled by any current technology. AR is not there yet, but probably will be within the next ten years. All of this falls into the category of advances in computer vision, part of AI.
Finally, let’s consider some of the recent advances in building AI scientists. In 2009, “Adam” became the first robot to discover new scientific knowledge, having to do with the genetics of yeast. The robot, which consists of a small room filled with experimental equipment connected to a computer, came up with its own hypothesis and tested it. Though the context and the experiment were simple, this milestone points to a new world of robotic possibilities. This is where the intersection between AI and other transhumanist areas, such as life extension research, could become profound.
Many experiments in life science and biochemistry require a great deal of trial and error. Certain experiments are already automated with robotics, but what about computers that formulate and test their own hypotheses? Making this feasible would require the computer to understand a great deal of common sense knowledge, as well as specialized knowledge about the subject area. Consider a robot scientist like Adam with the object-level knowledge of the Jeopardy!-winning Watson supercomputer. This could be built today in theory, but it will probably be a few years before anything like it is built in practice. Once it is, it’s difficult to say what the scientific returns could be, but they could be substantial. We’ll just have to build it and find out.
That concludes this brief overview. There are many other interesting trends in AI, but machine vision, cognitive prostheses, and robotic scientists are among the most interesting, and relevant to futurist goals.
A boost in the speed of brain scans is unveiling new insights into how brain regions work with each other in cooperative groups called networks.
Scientists at Washington University School of Medicine in St. Louis and the Institute of Technology and Advanced Biomedical Imaging at the University of Chieti, Italy, used the quicker scans to track brain activity in volunteers at rest and while they watched a movie.
“Brain activity occurs in waves that repeat as slowly as once every 10 seconds or as rapidly as once every 50 milliseconds,” said senior researcher Maurizio Corbetta, MD, the Norman J. Stupp Professor of Neurology. “This is our first look at these networks where we could sample activity every 50 milliseconds, as well as track slower activity fluctuations that are more similar to those observed with functional magnetic resonance imaging (fMRI). This analysis performed at rest and while watching a movie provides some interesting and novel insights into how these networks are configured in resting and active brains.”
Understanding how brain networks function is important for better diagnosis and treatment of brain injuries, according to Corbetta.
The study appears online in Neuron.
Researchers know of several resting-state brain networks, which are groups of different brain regions whose activity levels rise and fall in sync when the brain is at rest. Scientists used fMRI to locate and characterize these networks, but the relative slowness of this approach limited their observations to activity that changes every 10 seconds or so. A surprising result from fMRI was that the spatial pattern of activity (or topography) of these brain networks is similar at rest and during tasks.
In contrast, a faster technology called magnetoencephalography (MEG) can detect activity at the millisecond level, letting scientists examine waves of activity in frequencies from slow (0.1-4 cycles per second) to fast (greater than 50 cycles per second).
“Interestingly, even when we looked at much higher temporal resolution, brain networks appear to fluctuate on a relatively slow time scale,” said first author Viviana Betti, PhD, a postdoctoral researcher at Chieti. “However, when the subjects went from resting to watching a movie, the networks appeared to shift the frequency channels in which they operate, suggesting that the brain uses different frequencies for rest and task, much like a radio.”
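To give a concrete sense of what working in separate frequency channels looks like, here is an illustrative Python sketch (not the study's pipeline) that splits a simulated signal into a slow band and a fast band with standard filters. The sampling rate, cutoffs, and test signal are assumptions chosen to match the frequency ranges mentioned above.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1000                                       # 1 kHz sampling, i.e. 1 ms resolution
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)
signal = (np.sin(2 * np.pi * 1.0 * t)           # slow 1 Hz component
          + 0.5 * np.sin(2 * np.pi * 60 * t)    # fast 60 Hz component
          + 0.2 * rng.normal(size=t.size))      # measurement noise

def bandpass(x, low, high, order=4):
    """Zero-phase band-pass filter between low and high (in Hz)."""
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

slow = bandpass(signal, 0.1, 4)                 # the slow (0.1-4 Hz) band
fast = bandpass(signal, 50, 120)                # the fast (>50 Hz) band
print("variance in slow band: %.3f" % np.var(slow))
print("variance in fast band: %.3f" % np.var(fast))
```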
In the study, the scientists asked one group of volunteers to either rest or watch the movie during brain scans. A second group was asked to watch the movie and look for event boundaries, moments when the plot or characters or other elements of the story changed. They pushed a button when they noticed these changes.
As in previous studies, most subjects recognized similar event boundaries in the movie. The MEG scans showed that communication between brain regions was altered near the movie boundaries, especially in networks in the visual cortex.
“This gives us a hint of how cognitive activity dynamically changes the resting-state networks,” Corbetta said. “Activity locks and unlocks in these networks depending on how the task unfolds. Future studies will need to track resting-state networks in different tasks to see how correlated activity is dynamically coordinated across the brain.”
(Source: news.wustl.edu)
Neural Simulations Hint at the Origin of Brain Waves
At EPFL’s Blue Brain facilities, computer models of individual neurons are being assembled into neural circuits that produce electrical signals akin to brain waves. The results, published in the journal Neuron, are helping solve the mystery of how and why these signals arise in the brain.
For almost a century, scientists have been studying brain waves to learn about mental health and the way we think. Yet the way billions of interconnected neurons work together to produce brain waves remains unknown. Now, scientists from EPFL’s Blue Brain Project in Switzerland, at the core of the European Human Brain Project, and the Allen Institute for Brain Science in the United States, show in the July 24th edition of the journal Neuron how a complex computer model is providing a new tool to solve the mystery.
The brain is composed of many different types of neurons, each of which carries electrical signals. Electrodes placed on the head or directly in brain tissue allow scientists to monitor the cumulative effect of this electrical activity as electroencephalography (EEG) signals. But what is it about the structure and function of each and every neuron, and the way they network together, that gives rise to these electrical signals measured in a mammalian brain?
Modeling Brain Circuitry
The Blue Brain Project is working to model a complete human brain. For the moment, Blue Brain scientists study rodent brain tissue and characterize different types of neurons in excruciating detail, recording their electrical properties, shapes, sizes, and how they connect.
To answer the question of brain-wave origin, researchers at EPFL’s Blue Brain Project and the Allen Institute joined forces with the help of the Blue Brain modeling facilities. Their work is based on a computer model of a neural circuit the likes of which have never been seen before, encompassing an unprecedented amount of detail and simulating 12,000 neurons.
“It is the first time that a model of this complexity has been used to study the underlying properties of brain waves,” says EPFL scientist Sean Hill.
In observing their model, the researchers noticed that the electrical activity swirling through the entire system was reminiscent of brain waves measured in rodents. Because the computer model uses an overwhelming amount of physical, chemical and biological data, the supercomputer simulation allows scientists to analyze brain waves at a level of detail simply unattainable with traditional monitoring of live brain tissue.
“We need a computer model because it is impossible to relate the electrical activity of potentially billions of individual neurons and the resulting brain waves at the same time,” says Hill. “Through this view, we’re able to provide an interpretation, at the single-neuron level, of brain waves that are measured when tissue is actually probed in the lab.”
Finding brain wave analogs
Neurons are somewhat like tiny batteries, needing to be charged in order to fire off an electrical impulse known as a “spike”. It is through these “spikes” that neurons communicate with each other to produce thought and perception. To “recharge” a neuron, charged particles called ions must travel through minuscule ionic channels. These channels are like gates that regulate electrical current. Ultimately, the accumulation of multiple electrical signals throughout the entire circuit of neurons produces brain waves.
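The toy simulation below is far cruder than the Blue Brain model, but it illustrates the basic point of the paragraph above: many simple model neurons receiving a shared, slowly varying drive produce an aggregate activity trace that waxes and wanes, a stand-in for the way individual electrical signals sum into a measurable wave. Every parameter is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n_neurons, n_steps, dt = 200, 2000, 0.001     # 200 cells, 2 s at 1 ms steps
tau, v_thresh, v_reset = 0.02, 1.0, 0.0       # leaky integrate-and-fire parameters

v = np.zeros(n_neurons)
activity = np.zeros(n_steps)                  # population activity per time step

for step in range(n_steps):
    # A shared, slowly varying drive plus private noise for each neuron.
    drive = 1.2 + 0.5 * np.sin(2 * np.pi * 3 * step * dt)   # 3 Hz modulation
    current = drive + 0.5 * rng.normal(size=n_neurons)
    v += dt / tau * (-v + current)            # leaky integration toward the input
    spikes = v >= v_thresh
    v[spikes] = v_reset                       # fire and reset
    activity[step] = spikes.sum()             # summed spiking as a crude aggregate signal

# The summed activity should wax and wane at roughly the 3 Hz drive frequency.
spectrum = np.abs(np.fft.rfft(activity - activity.mean()))
freqs = np.fft.rfftfreq(n_steps, dt)
print("dominant frequency: %.1f Hz" % freqs[spectrum.argmax()])
```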
The challenge for scientists in this study was to incorporate into the simulation the thousands of parameters, per neuron, that describe these electrical properties. Once they did that, they saw that the overall electrical activity in their model of 12,000 neurons was akin to observations of brain activity in rodents, hinting at the origin of brain waves.
“Our model is still incomplete, but the electrical signals produced by the computer simulation and what was actually measured in the rat brain have some striking similarities,” says Allen Institute scientist Costas Anastassiou.
Hill adds, “For the first time, we show that the complex behavior of ion channels on the branches of the neurons contributes to the shape of brain waves.”
There is still much work to be done in order to arrive at a complete simulation. While the model’s electrical signals are analogous to in vivo measurements, researchers warn that there are still many open questions as well as room to improve the model. For instance, the simulation is modeled on neurons that control the hind-limb, while in vivo data represent brain waves coming from neurons that have a similar function but control whiskers instead.
“Even so, the computer model we used allowed us to characterize, and more importantly quantify, key features of how neurons produce these signals,” says Anastassiou.
The scientists are currently studying similar brain wave phenomena in larger and more realistic neural circuits.
This computer model is drawing cellular biophysics and cognitive neuroscience closer together, in order to achieve the same goal: understanding the brain. But the two disciplines share neither the methods nor the scientific language. By simulating electrical brain activity and relating the behavior of single neurons to brain waves, the researchers aim to bridge this gap, opening the way to better tools for diagnosing mental disorders, and on a deeper level, offering a better understanding of ourselves.
Ultrasensitive Calcium Sensors Shine New Light on Neuron Activity
A new protein engineered by scientists at the Janelia Farm Research Campus fluoresces brightly each time it senses calcium, giving the scientists a way to visualize neuronal activity. The new protein is the most sensitive calcium sensor ever developed and the first to allow the detection of every neural impulse.
Every time you say a word, take a step, or read a sentence, a collection of neurons sends a speedy relay of messages throughout your brain to process the information. Now, researchers have a new way of watching those messages in action, by watching each cell in the chain light up when it fires.
When a neuron receives a signal from one of its neighbors, the impulse sets off a sudden series of electrochemical events geared toward passing the message along. Among the first events: calcium ions rush into the neurons when a set of channels opens. Scientists at the Howard Hughes Medical Institute’s Janelia Farm Research Campus have engineered a new protein that brightly fluoresces each time it senses these calcium waves, giving the scientists a way to visualize the activity of every neuron throughout the brain. The new protein is the most sensitive calcium sensor ever developed and the first to allow the detection of every neural impulse, rather than just a portion. The results are reported in the July 18, 2013 issue of the journal Nature.
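As a hypothetical illustration of what detecting impulses from a fluorescence trace involves on the analysis side, the sketch below simulates a GCaMP-style dF/F signal with spike-triggered transients and finds them by simple thresholding. It is not Janelia's code, and the kinetics, frame rate, and noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
fs, duration = 30, 60                         # 30 Hz imaging for 60 s
n = fs * duration
dff = 0.02 * rng.normal(size=n)               # baseline dF/F noise

# Each action potential adds a fast-rising, slowly decaying fluorescence bump
# (the amplitude and decay constant here are invented, not GCaMP6's values).
kernel = 0.15 * np.exp(-np.arange(60) / (0.6 * fs))
spike_frames = np.sort(rng.choice(n - 60, size=10, replace=False))
for s in spike_frames:
    dff[s:s + 60] += kernel

# Simple event detection: upward crossings of a threshold set well above an
# estimate of the noise floor.
noise = np.std(dff[dff < np.median(dff)])
onsets = np.flatnonzero((dff[1:] > 5 * noise) & (dff[:-1] <= 5 * noise))
print("simulated spikes: %d, detected onsets: %d" % (len(spike_frames), len(onsets)))
```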
“You can think of the brain as an orchestra with each different neuron type playing a different part,” says Janelia lab head Karel Svoboda, a neurobiologist and member of the team that developed the new sensor. “Previous methods only let us hear a tiny fraction of the melodies. Now we can hear more of the symphony at once. Improving the molecule and imaging methods in the future could allow us to hear the entire symphony.”
Detecting which neurons in the brain are firing, and when, is a key step in learning which areas of the brain are linked to particular activities or disorders, how memories are formed, how behaviors are learned, and basic questions about how the brain organizes neurons and stores information in this organization.
Two decades ago, scientists who wanted to use calcium to pinpoint neural activity relied on synthetic calcium-indicator dyes, first developed by HHMI Investigator Roger Tsien. The dyes lit up when neurons fired, but were difficult to inject and highly toxic—an animal’s brain could only be imaged once using the dyes.
In 1997, researchers led by Tsien developed the first genetically encoded calcium indicator (GECI). GECIs were made by combining a gene for a calcium sensor with the gene for a fluorescent protein in a way that made the calcium sensor fluoresce when it bound calcium. The GECI genes could be integrated into the genomes of model organisms like mice or flies so that no dye injection was necessary. The animals’ own brain cells would produce the proteins throughout their lives, and brain activity could be studied again and again in any one animal, allowing long-term studies of processes like learning and development. But GECIs weren’t as accurate as the cumbersome dyes had been, and improving them was a slow process.
“New versions were developed in a very piecemeal way,” says Svoboda, explaining that after chemists developed the sensors, it might be years before biologists had an opportunity to test them in the brains of living animals. “It was a very slow process of getting feedback.”
Svoboda, along with Janelia lab heads Loren Looger, Vivek Jayaraman and Rex Kerr formed the Genetically Encoded Neural Indicator and Effector (GENIE) project at Janelia to speed up the innovation. The GENIE project, led by Douglas Kim, an HHMI program scientist, is one of several collaborative team projects ongoing at Janelia. The project developed a higher-throughput and more accurate way of testing new variants of the best-working GECI, called GCaMP. Steps included simple tests that could easily be performed on many proteins at once, like measuring how much fluorescence the protein gave off when exposed to calcium in a cuvette, as well as early tests of function in different types of neurons and final experiments in genetically engineered mice, flies, and zebrafish.
“When people developed previous GECIs, they would test somewhere between ten and twenty variants very carefully. We were able to screen a thousand in a highly quantitative neuronal assay,” Looger says. “And when you can look at that many constructs, you’re going to make better and more interesting observations on what makes the ideal sensor.”
The team made successive rounds of tweaks to the structure of the GCaMP so that it accurately sensed calcium, shone brightly in response, and worked in model organisms. After that work they settled upon a version of the sensor that performed better in all aspects than previous GECIs. Their new sensor, dubbed GCaMP6, produced signals seven times stronger than past versions. Surprisingly, its sensitivity even outperformed synthetic dyes.
“People had assumed that the synthetic dyes were letting us see every event in neurons,” says Looger. “But we’ve now shown that not only are these dyes hard to load and quite toxic, but they weren’t even recording every event.”
GCaMP6 will be a boon to researchers at Janelia, and around the world, who want to get a full picture of the activity of every neuron in the brain. Meanwhile, the team plans to continue to improve it, developing entirely new versions for specific uses. For example, they hope to make a GECI that gives off red fluorescence rather than green, because red is easier to see in deeper tissues.
“One of the stated goals of Janelia Farm is to develop an atlas of every neuron in the Drosophila brain,” says Looger. “The most practical way I can think of to assign functions to such an atlas is with calcium sensors. With this new sensor, I think people will feel much more comfortable that they’re really getting all the information they can.”
A fundamental problem for brain mapping
Recent findings force scientists to rethink the rules of neuroimaging
Is there a brain area for mind-wandering? For religious experience? For reorienting attention? A recent study casts serious doubt on the evidence for these ideas, and rewrites the rules for neuroimaging.
Brain mapping experiments attempt to identify the cognitive functions associated with discrete cortical regions. They generally rely on a method known as “cognitive subtraction.” However, recent research reveals that a basic assumption underlying this approach—that brain activation is due to the additional processes triggered by the experimental task—is wrong.
“It is such a basic assumption that few researchers have even thought to question it,” said Anthony Jack, assistant professor of cognitive science at Case Western Reserve University. “Yet study after study has produced evidence it is false.”
Brain mapping experiments all share a basic logic. In the simplest type of experiment, researchers compare brain activity while participants perform an experimental task and a control task. The experimental task might involve showing participants a noun, such as the word “cake,” and asking them to say aloud a verb that goes with that noun, for instance “eat.” The control task might involve asking participants to simply say the word they see aloud.
“The idea here is that the control task involves some of the same cognitive processes as the experimental task, in this case perceptual and articulatory processes,” Jack explained. “But there is at least one process that is different—the act of selecting a semantically appropriate word from a different lexical category.”
By subtracting activity recorded during the control task from the experimental task, researchers try to isolate distinct cognitive processes and map them onto specific brain areas.
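In code, the subtraction itself is almost trivially simple, which is part of why its hidden assumption matters. The sketch below uses made-up voxel data; a real analysis involves statistical maps, modeling, and corrections rather than a single difference image.

```python
import numpy as np

rng = np.random.default_rng(6)
n_voxels, n_trials = 1000, 40                 # made-up sizes

# Simulated activity: most voxels respond equally to both tasks; the first
# 50 voxels carry extra signal during the experimental (verb-generation) task.
baseline = rng.normal(1.0, 0.1, size=n_voxels)
control = baseline + 0.05 * rng.normal(size=(n_trials, n_voxels))
experimental = baseline + 0.05 * rng.normal(size=(n_trials, n_voxels))
experimental[:, :50] += 0.2

# The subtraction: mean experimental activity minus mean control activity.
contrast = experimental.mean(axis=0) - control.mean(axis=0)
active_voxels = np.flatnonzero(contrast > 0.1)
print("voxels surviving the subtraction:", len(active_voxels))

# The article's caveat: a positive contrast can also arise because the control
# task suppressed a region more, not because the experimental task added a
# cognitive process there; the subtraction alone cannot tell these apart.
```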
Jack and former Case Western Reserve student Benjamin Kubit, now at the University of California Davis, challenge a key assumption of the subtraction method and several tenets of Ventral Attention Network theory, one of the longest-established theories in cognitive neuroscience, which relies on cognitive subtraction. In a paper published today in Frontiers in Human Neuroscience, they highlight a new and additional problem that casts doubt on papers from well-established laboratories published in top journals.
Jack’s previous research shows that two opposing networks in the brain prevent people from being empathetic and analytic at the same time. If participants are engaged in a non-social task, they suppress activity in a network known as the default mode network, or DMN. The moment that task is over, activity in the DMN bounces back up again. On the other hand, if participants are engaged in a social task, they suppress brain activity in a second network, known as the task positive network, or TPN. The moment that task is over, activity in the TPN bounces back up again.
Work by another group even shows activity in a network bounces higher the more it has been suppressed, rather like releasing a compressed spring.
“It’s clear these increases in activity are not due to additional task-related processes,” Jack said. “Instead of cognitive subtraction, what we are seeing here is cognitive addition—parts of the brain do more the less the task demands.”
Kubit and Jack caution that researchers must consider whether an increase in activity in a suppressed region is due to task-related processing, or the release of suppression, if they want to accurately interpret their data. In the paper, they lay out data from other studies, meta-analysis and resting connectivity that all suggest activation of a particular brain area, the right temporoparietal junction (rTPJ), in attention reorienting tasks can be most simply explained by the release of suppression.
Based on that, “We haven’t shown that Ventral Attention Network theory is false,” Jack said, “but we have raised a big question mark over the theory and the evidence that has been taken to support it.”
The working hypothesis for more than a decade has been that the basic function of the rTPJ is attention reorienting. But, upon considering the possibility of cognitive addition as well as cognitive subtraction, the evidence supporting this view looks slim, the researchers assert. “The evidence is compelling that there are two distinct areas near the rTPJ: regions which are not only involved in distinct functions but which also tend to suppress each other,” Jack said. “There is no easy way to square this with the Ventral Attention Network account of rTPJ.”
A number of broad challenges to brain imaging have been raised in the past by psychologists and philosophers, and in the recent book Brainwashed: The Seductive Appeal of Mindless Neuroscience, by Sally Satel and Scott Lilienfeld. One of the most popular objections has been to liken brain mapping to phrenology.
“There was some truth to that, particularly in the early days,” Jack said. Brain mapping can run afoul when the psychological categories it assigns to a region don’t represent its basic functions.
For instance, the claim that there is a “God spot” in the brain doesn’t reflect a mature understanding of the science, he continued. Researchers recognize that individual brain regions have more general functions, and that specific cognitive processes, like religious experiences, are realized by interactions between distributed networks of regions.
“Just because a brain region is involved in a cognitive process, for example that the rTPJ is involved in out-of-body experiences, doesn’t mean that out-of-body experiences are the basic function of the rTPJ,” Jack explained. “You need to look at all the cognitive processes that engage a region to get a truer idea of its basic function.”
Kubit and Jack go beyond the existing critiques that apply to naïve brain mapping. The researchers point out that, even when an experimental task creates more activity in a brain region than a control task, it still isn’t safe to assume that the brain area is involved in the additional cognitive processes engaged by the experimental task. “Another possibility is that the control task was suppressing the region more than the experimental task,” Jack said.
For example, Malia Mason et al.’s widely cited 2007 publication in the journal Science used the logic of cognitive subtraction to reach the conclusion that the function of a large area of cortex, known as the default mode network (DMN), is mind-wandering or spontaneous cognition.
“At this point, we can safely rule out that interpretation,” Jack said. “The DMN is activated above resting levels for social tasks that engage empathy. So, unless tasks that engage empathetic social cognition involve more mind-wandering than—well—being at rest and letting your mind wander, then that interpretation can’t possibly be right. The right way to interpret those findings is that tasks that engage analytic thinking positively suppress empathy. Unsurprisingly, when your mind wanders from those tasks, you get less suppression.”
The pair believes one reason researchers have felt safe with the assumptions underlying cognitive subtraction is that they have assumed the brain will not expend any more energy than is needed to perform the task at hand.
“Yet the brain clearly does expend more energy than is needed to guide ongoing behavior,” Jack said. “The influential neurologist Marcus Raichle has shown that task-related activity represents the tip of the iceberg, in terms of neural and metabolic activity. The brain is constantly active and restless, even when the person is entirely ‘at rest’ —that is, even when they aren’t given any task to do.”
Jack said their critique won’t hurt brain imaging as a discipline. “Quite the reverse, understanding the full implications of the suppressive relationship between brain networks will move the discipline forward.”
“One of the best known theories in psychology is dual-process theory,” he continued. “But the opposing-networks findings suggest a quite different picture from the account favored by psychologists.”
Dual process theory is outlined in the recent book Thinking Fast and Slow by the Nobel prize-winner Daniel Kahneman. Classic dual-process theory postulates a fight between deliberate reasoning and primitive automatic processes. But the fight that is most obvious in the brain is between two types of deliberate and evolutionarily advanced reasoning – one for empathetic, the other for analytic thought, the researchers say.
The two theories are compatible. “But, it looks like a number of phenomena will be better explained by the opposing networks research,” Jack said.
Jack warned that to conclude this critique of cognitive subtraction and Ventral Attention Network theory shows that brain imaging is fundamentally flawed would be like claiming that critiques of Darwin’s theory show evolution is false.
Brain mapping, Jack believes, was just the first phase of this science. “What we are talking about here is refining the science,” he said. “It should be no surprise that that journey involves some course corrections. The key point is that we are moving from brain mapping to identifying neural constraints on cognition that behavioral psychologists have missed.”
(Image: Saad Faruque, Flickr)
How the brain creates the ‘buzz’ that helps ideas spread
How do ideas spread? What messages will go viral on social media, and can this be predicted?
UCLA psychologists have taken a significant step toward answering these questions, identifying for the first time the brain regions associated with the successful spread of ideas, often called “buzz.”
The research has a broad range of implications, the study authors say, and could lead to more effective public health campaigns, more persuasive advertisements and better ways for teachers to communicate with students.
"Our study suggests that people are regularly attuned to how the things they’re seeing will be useful and interesting, not just to themselves but to other people," said the study’s senior author, Matthew Lieberman, a UCLA professor of psychology and of psychiatry and biobehavioral sciences and author of the forthcoming book "Social: Why Our Brains Are Wired to Connect." "We always seem to be on the lookout for who else will find this helpful, amusing or interesting, and our brain data are showing evidence of that. At the first encounter with information, people are already using the brain network involved in thinking about how this can be interesting to other people. We’re wired to want to share information with other people. I think that is a profound statement about the social nature of our minds."
The study findings are published in the online edition of the journal Psychological Science, with print publication to follow later this summer.
"Before this study, we didn’t know what brain regions were associated with ideas that become contagious, and we didn’t know what regions were associated with being an effective communicator of ideas," said lead author Emily Falk, who conducted the research as a UCLA doctoral student in Lieberman’s lab and is currently a faculty member at the University of Pennsylvania’s Annenberg School for Communication. "Now we have mapped the brain regions associated with ideas that are likely to be contagious and are associated with being a good ‘idea salesperson.’ In the future, we would like to be able to use these brain maps to forecast what ideas are likely to be successful and who is likely to be effective at spreading them."
In the first part of the study, 19 UCLA students (average age 21) underwent functional magnetic resonance imaging (fMRI) brain scans at UCLA’s Ahmanson–Lovelace Brain Mapping Center as they saw and heard information about 24 potential television pilot ideas. Among the fictitious pilots — which were presented by a separate group of students — were a show about former beauty-queen mothers who want their daughters to follow in their footsteps; a Spanish soap opera about a young woman and her relationships; a reality show in which contestants travel to countries with harsh environments; a program about teenage vampires and werewolves; and a show about best friends and rivals in a crime family.
The students exposed to these TV pilot ideas were asked to envision themselves as television studio interns who would decide whether or not they would recommend each idea to their “producers.” These students made videotaped assessments of each pilot.
Another group of 79 UCLA undergraduates (average age 21) was asked to act as the “producers.” These students watched the interns’ videotaped assessments of the pilots and then made their own ratings of the pilot ideas based on those assessments.
Lieberman and Falk wanted to learn which brain regions were activated when the interns were first exposed to information they would later pass on to others.
"We’re constantly being exposed to information on Facebook, Twitter and so on," said Lieberman. "Some of it we pass on, and a lot of it we don’t. Is there something that happens in the moment we first see it — maybe before we even realize we might pass it on — that is different for those things that we will pass on successfully versus those that we won’t?"
It turns out, there is. The psychologists found that the interns who were especially good at persuading the producers showed significantly more activation in a brain region known as the temporoparietal junction, or TPJ, at the time they were first exposed to the pilot ideas they would later recommend. They had more activation in this region than the interns who were less persuasive and more activation than they themselves had when exposed to pilot ideas they didn’t like. The psychologists call this the “salesperson effect.”
"It was the only region in the brain that showed this effect," Lieberman said. One might have thought brain regions associated with memory would show more activation, but that was not the case, he said.
"We wanted to explore what differentiates ideas that bomb from ideas that go viral," Falk said. "We found that increased activity in the TPJ was associated with an increased ability to convince others to get on board with their favorite ideas. Nobody had looked before at which brain regions are associated with the successful spread of ideas. You might expect people to be most enthusiastic and opinionated about ideas that they themselves are excited about, but our research suggests that’s not the whole story. Thinking about what appeals to others may be even more important."
The TPJ, located on the outer surface of the brain, is part of what is known as the brain’s “mentalizing network,” which is involved in thinking about what other people think and feel. The network also includes the dorsomedial prefrontal cortex, located in the middle of the brain.
"When we read fiction or watch a movie, we’re entering the minds of the characters — that’s mentalizing," Lieberman said. "As soon as you hear a good joke, you think, ‘Who can I tell this to and who can’t I tell?’ Making this judgment will activate these two brain regions. If we’re playing poker and I’m trying to figure out if you’re bluffing, that’s going to invoke this network. And when I see someone on Capitol Hill testifying and I’m thinking whether they are lying or telling the truth, that’s going to invoke these two brain regions.
"Good ideas turn on the mentalizing system," he said. "They make us want to tell other people."
The interns who showed more activity in their mentalizing system when they saw the pilots they intended to recommend were then more successful in convincing the producers to also recommend those pilots, the psychologists found.
"As I’m looking at an idea, I might be thinking about what other people are likely to value, and that might make me a better idea salesperson later," Falk said.
By further studying the neural activity in these brain regions to see what information and ideas activate these regions more, psychologists potentially could predict which advertisements are most likely to spread and go viral and which will be most effective, Lieberman and Falk said.
Such knowledge could also benefit public health campaigns aimed at everything from reducing risky behaviors among teenagers to combating cancer, smoking and obesity.
"The explosion of new communication technologies, combined with novel analytic tools, promises to dramatically expand our understanding of how ideas spread," Falk said. "We’re laying basic science foundations to addressimportant public health questions that are difficult to answer otherwise — about what makes campaigns successful and how we can improve their impact."
As we may like particular radio DJs who play music we enjoy, the Internet has led us to act as “information DJs” who share things that we think will be of interest to people in our networks, Lieberman said.
"What is new about our study is the finding that the mentalizing network is involved when I read something and decide who else might be interested in it," he said. "This is similar to what an advertiser has to do. It’s not enough to have a product that people should like."