Posts tagged neural activity

Bat and Rat Brain Rhythms Differ When on the Move
To get a clear picture of how humans and other mammals form memories and find their way through their surroundings, neuroscientists must pay more attention to a broad range of animals rather than focus on a single model species, say two University of Maryland researchers, Katrina MacLeod and Cynthia Moss. Their new comparative study of bats and rats reports differences between the species that suggest the need to revise models of spatial navigation.
In a paper appearing in the April 19, 2013 issue of Science, the UMD researchers and two colleagues at Boston University reported significant differences between rats’ and bats’ brain rhythms when certain cells were active in a part of the brain used in memory and navigation.
These cells behaved as expected in rats, which mostly move along surfaces. But in bats, which fly, the continuous brain rhythm did not appear, said Moss, a professor in Psychology and Biology and the Institute for Systems Research.
The finding suggests that even though rats, bats, humans and other mammals share a common neural representation of space in a part of the brain that has been linked to spatial information and memory, they may have different cellular mechanisms to create or interpret those maps, said MacLeod, an assistant research scientist in Biology.
“To understand brains, including ours, we really must study neural activity in a variety of animals,” MacLeod said. “Common features across multiple species tell us ‘Aha, this is important,’ but differences can occur because of variances in the animals’ ecology, behavior, or evolutionary history.”
The research team focused on a brain region that contains specialized “grid cells,” so named because they form a hexagonal grid of activity related to the animal’s location as it navigates through space. This brain region, the medial entorhinal cortex, sits next to the hippocampus, the place that, in humans, forms memories of events such as where a car is parked. The medial entorhinal cortex acts as a hub of neural networks for memory and navigation.
Grid cells were first noticed in rats navigating their environment, but recent work by Nachum Ulanovsky (Moss’s former postdoctoral researcher at UMD) and his research team at the Weizmann Institute in Rehovot, Israel, has shown these cells exist in bats as well.
In rats, grid cells fire in a pattern called a theta wave when the animals spatially navigate. Theta waves are fairly low-frequency electrical oscillations that also have been observed at the cellular level in the medial entorhinal cortex. The prominence of theta waves in rats suggested they were important. As a result, neuroscientists, trying to understand the relationship between theta waves and grid cells, have developed models of the brain based on the assumption that theta waves are key to spatial navigation in mammals.
However, Moss said, “recordings from the brains of bats navigating in space contain a surprise, because the expected theta rhythms aren’t continuously present as they are in the rodent.”
The new Science study strengthens the case by reporting that theta rhythms also are not present at the cellular level. “The bat neurons don’t ‘ring’ the way the rat neurons do,” said MacLeod. “This raises a lot of questions as to whether theta rhythms are actually doing what the spatial navigation theory proposes in rats or even humans.”
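For readers who work with electrophysiology data, the theta-band “ringing” at issue here is commonly quantified as the fraction of local field potential power falling in the 4–12 Hz band. A minimal sketch follows; the sampling rate, band edges, and toy signals are illustrative assumptions, not parameters from the study:

```python
import numpy as np
from scipy.signal import welch

def theta_power_ratio(lfp, fs, band=(4.0, 12.0)):
    """Fraction of LFP power in the theta band (default 4-12 Hz),
    estimated with Welch's method."""
    freqs, psd = welch(lfp, fs=fs, nperseg=min(len(lfp), 2 * int(fs)))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return psd[in_band].sum() / psd.sum()

# Toy example: an 8 Hz "theta" oscillation buried in noise vs. pure noise
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
rhythmic = np.sin(2 * np.pi * 8 * t) + 0.3 * rng.standard_normal(t.size)
arrhythmic = rng.standard_normal(t.size)

ratio_rhythmic = theta_power_ratio(rhythmic, fs)
ratio_arrhythmic = theta_power_ratio(arrhythmic, fs)
```

A recording with a prominent theta rhythm, like a navigating rat’s, concentrates power in the band; a signal without continuous theta, as reported in the bats, does not.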

Mutations found in individuals with autism interfere with endocannabinoid signaling in the brain
Mutations found in individuals with autism block the action of molecules made by the brain that act on the same receptors that marijuana’s active chemical acts on, according to new research reported online April 11 in the Cell Press journal Neuron. The findings implicate specific molecules, called endocannabinoids, in the development of some autism cases and point to potential treatment strategies.
"Endocannabinoids are molecules that are critical regulators of normal neuronal activity and are important for many brain functions," says first author Dr. Csaba Földy, of Stanford University Medical School. "By conducting studies in mice, we found that neuroligin-3, a protein that is mutated in some individuals with autism, is important for relaying endocannabinoid signals that tone down communication between neurons."
When the researchers introduced different autism-associated mutations in neuroligin-3 into mice, this signaling was blocked and the overall excitability of the brain was changed.
"These findings point out an unexpected link between a protein implicated in autism and a signaling system that previously had not been considered to be particularly important for autism," says senior author Dr. Thomas Südhof, also of Stanford. "Thus, the findings open up a new area of research and may suggest novel strategies for understanding the underlying causes of complex brain disorders."
The results also indicate that targeting components of the endocannabinoid signaling system may help reverse autism symptoms.
The study’s findings resulted from a research collaboration between the Stanford laboratories of Dr. Südhof and Dr. Robert Malenka, who is also an author on the paper.

How is ‘free will’ implemented in the brain, and is it possible to intervene in the process?
Researchers have been able to identify the precise moment when a network of nerve cells (neurons) in the brain creates the signal to perform an action, before a person is even aware of deciding to take that action. Now they are building on this work to make initial attempts to interfere with consciously made decisions by decoding the pattern of brain activity in real time before an action is taken.
Professor Gabriel Kreiman will tell the British Neuroscience Association Festival of Neuroscience (BNA2013) today (Tuesday): “This could be useful to help elucidate the mechanistic basis by which neuronal circuits orchestrate ‘free’ will.”
Normally it is difficult to research the activity of neurons in the brain because it involves implanting electrodes – an invasive procedure that would not be ethical to perform for scientific curiosity alone. However, Prof Kreiman, who is an associate professor at the Harvard Medical School, Boston, USA, together with neurosurgeon Itzhak Fried from University of California at Los Angeles (UCLA), had a rare opportunity to record the activity of over 1,000 neurons in two areas of the brain, the frontal and temporal lobes, when patients with epilepsy had had electrodes implanted to try to identify the source of their seizures.
“These patients have epilepsy that does not respond to drug treatment; Itzhak Fried implanted their brains with very thin electrodes (microwires) of about 40 micrometres in diameter in order to localise the focus of a seizure onset for a potential surgical procedure to alleviate the seizures. The microwires capture the extracellular electrical activity of neurons. Patients stay in the hospital for about a week. During this time, we have a unique opportunity to interrogate the activity of neurons and neural ensembles in the human brain at high spatial and temporal resolution,” explains Prof Kreiman.
The researchers asked the patients to move their index finger to click a computer mouse and to report when they made that decision. “Based on the activity of small groups of neurons, we could predict this decision several hundreds of milliseconds and, in some cases, seconds before the action. In a variant of the main experiment, the patients were allowed to choose whether to use their left hand or right hand and we showed that we could also predict this decision.”
The researchers found that an increasing number of neurons in two specific brain regions started to become active before the person was aware of their decision to move their finger. The two regions were the supplementary motor area, which is thought to be the area for preparing to perform motor actions, and the anterior cingulate cortex, which has a number of roles including the signalling processes associated with reward.
Prof Kreiman believes that these results provide initial steps to elucidate the mechanism for the emergence of conscious will in humans. “The activity of multiple neurons in extremely simple neural circuits precedes volition – in this case the decision to make a simple movement – until a threshold is crossed and the action is taken,” he will say.
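The threshold-crossing account Prof Kreiman describes is often modelled as a noisy accumulator drifting toward a bound. A minimal sketch, with arbitrary illustrative parameters rather than anything fitted to the recordings:

```python
import numpy as np

def time_to_threshold(drift, noise_sd, threshold, dt=0.001, t_max=60.0, seed=0):
    """Noisy accumulator: integrate drift plus noise until the summed
    activity crosses the decision threshold; return the crossing time (s)."""
    rng = np.random.default_rng(seed)
    x, t = 0.0, 0.0
    while x < threshold and t < t_max:
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

# With a drift of 1 unit/s toward a threshold of 1, crossing takes on the
# order of a second -- the window in which the upcoming movement could in
# principle be read out before the subject acts.
t_cross = time_to_threshold(drift=1.0, noise_sd=0.2, threshold=1.0)
```

The hundreds of milliseconds between the ramp becoming detectable and the threshold being crossed is exactly the prediction window the researchers report.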
Knowing when this threshold will be reached could enable researchers to see whether it is possible to interfere and maybe change the decision before any action is taken. “We are now making initial attempts to interfere with volition by decoding the neural responses in real time and asking whether there is a ‘point of no return’ in the hierarchical chain of command from unconscious decisions to volition to action,” says Prof Kreiman.
How these findings fit into the concept of “free will” is more complicated. “The concept of free will has been debated for millennia. Ultimately, current scientific understanding strongly suggests that ‘will’ has to be orchestrated by neurons in our brains (as opposed to magic or religious beliefs or other notions). We have provided initial steps to try to disentangle which neurons are involved, to show where and how ‘will’ or ‘volition’ could be implemented in the brain.
“Our work does not say that life is predetermined, that we can predict the future and that we can, for instance, determine what you are going to eat for lunch two weeks from now, or who you are going to marry.
“We are saying that volition (like other aspects of consciousness) is a brain phenomenon that is instantiated by physical hardware, i.e. neurons. We are making claims about volition for very simple tasks, such as moving an index finger or choosing which hand to use, over scales of hundreds of milliseconds to seconds. Nothing more. Nothing less.
“Ultimately, our actions depend on multiple variables, several of which are external (for instance, it rains, hence, I will take my umbrella) and cannot be decoded or predicted from neurons. However, our volitional decision of whether to take the red umbrella or the blue one today – ultimately perhaps the real core of free will – is dictated by neurons,” Prof Kreiman will conclude.
Non-Invasive Brain-to-Brain Interface (BBI): Establishing Functional Links between Two Brains
Transcranial focused ultrasound (FUS) is capable of modulating the neural activity of specific brain regions, with a potential role as a non-invasive computer-to-brain interface (CBI). In conjunction with the use of brain-to-computer interface (BCI) techniques that translate brain function to generate computer commands, we investigated the feasibility of using the FUS-based CBI to non-invasively establish a functional link between the brains of different species (i.e. human and Sprague-Dawley rat), thus creating a brain-to-brain interface (BBI). The implementation was aimed to non-invasively translate the human volunteer’s intention to stimulate a rat’s brain motor area that is responsible for the tail movement. The volunteer initiated the intention by looking at a strobe light flicker on a computer display, and the degree of synchronization in the electroencephalographic steady-state-visual-evoked-potentials (SSVEP) with respect to the strobe frequency was analyzed using a computer. Increased signal amplitude in the SSVEP, indicating the volunteer’s intention, triggered the delivery of a burst-mode FUS (350 kHz ultrasound frequency, tone burst duration of 0.5 ms, pulse repetition frequency of 1 kHz, given for 300 ms duration) to excite the motor area of an anesthetized rat transcranially. The successful excitation subsequently elicited the tail movement, which was detected by a motion sensor. The interface was achieved at 94.0±3.0% accuracy, with a time delay of 1.59±1.07 sec from the thought-initiation to the creation of the tail movement. Our results demonstrate the feasibility of a computer-mediated BBI that links central neural functions between two biological entities, which may confer unexplored opportunities in the study of neuroscience with potential implications for therapeutic applications.
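The SSVEP-detection step in this pipeline amounts to measuring the EEG spectral amplitude at the strobe frequency and triggering the stimulation command when it exceeds a threshold. A simplified sketch; the sampling rate, flicker frequency, threshold, and toy signals are illustrative assumptions, not the study’s actual parameters:

```python
import numpy as np

def ssvep_amplitude(eeg, fs, f_target):
    """Amplitude of the EEG spectrum at the flicker frequency."""
    spectrum = np.abs(np.fft.rfft(eeg)) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - f_target))]

def intent_detected(eeg, fs, f_target, threshold):
    """Fire the (hypothetical) stimulation trigger when the SSVEP
    amplitude at the flicker frequency exceeds a preset threshold."""
    return ssvep_amplitude(eeg, fs, f_target) > threshold

# Toy signals: attending to a 15 Hz flicker entrains a 15 Hz component
fs, f_flicker = 250.0, 15.0
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
attending = 2.0 * np.sin(2 * np.pi * f_flicker * t) + rng.standard_normal(t.size)
resting = rng.standard_normal(t.size)
```

In the study, a positive detection is what triggered the FUS burst to the rat’s motor cortex; here it simply returns a boolean.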
Scientists Decode Dreams With Brain Scans
It used to be that what happened in your dreams was your own little secret. But today scientists report for the first time that they’ve successfully decoded details of people’s dreams using brain scans.
Before you reach for your tin hat, you should know that the scientists managed this feat only with the full cooperation of their research subjects, and they only decoded dreams after the fact, not in real time. The thought police won’t be busting you for renting bowling shoes from Saddam Hussein or whatever else you’ve been up to in your dreams.
All the same, the work is yet another impressive step for researchers interested in decoding mental states from brain activity, and it opens the door to a new way of studying dreaming, one of the most mysterious and fascinating aspects of the human experience.
In the first part of the new study, neuroscientist Yukiyasu Kamitani and colleagues at the Advanced Telecommunications Research Institute International in Kyoto, Japan monitored three young men as they tried to get some sleep inside an fMRI scanner while the machine monitored their brain activity. The researchers also monitored each volunteer’s brain activity with EEG electrodes, and when they saw an EEG signature indicative of dreaming, they woke him up to ask what he’d been dreaming about.
Technically speaking, this is what researchers call “hypnagogic imagery,” the dream-like state that occurs as people fall asleep. In the interest of saving time, Kamitani and colleagues chose to study this type of imagery rather than the dreams that tend to occur during REM sleep later in the night. They woke up each subject at least 200 times over the course of several days to build up a database of dream reports.
In the second part of the experiment, Kamitani and colleagues developed a visual imagery decoder based on machine learning algorithms. They trained the decoder to classify patterns of brain activity recorded from the same three men while they were awake and watching a video montage of hundreds of images selected from several online databases. After the decoder for each person had been trained, the researchers could input a pattern of brain activity and have the decoder predict which image was most likely to have produced that pattern of brain activity.
But that much has been done before. Where Kamitani’s team went beyond previous work was in feeding the decoder patterns of brain activity collected while the subjects were dreaming. This enabled them to correctly identify objects the men had seen in their dreams, they report Apr. 4 in Science. Or rather, they could identify the type of object a subject had seen: the decoder could predict that a man had dreamt about a car, not that he’d been cruising around in a Maserati. And the decoder only worked when the researchers gave it a pair of possible objects to choose from (whether it was a man or a chair, for example).
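The pairwise forced-choice decoding described above can be illustrated with a nearest-centroid classifier, a much simpler stand-in for the paper’s machine-learning decoder; all the data below are synthetic:

```python
import numpy as np

def train_centroids(patterns, labels):
    """Average the waking-state activity patterns for each image
    category -- a minimal stand-in for the paper's trained decoder."""
    return {lab: patterns[labels == lab].mean(axis=0)
            for lab in np.unique(labels)}

def decode_pair(centroids, pattern, option_a, option_b):
    """Pairwise forced choice: which of two candidate categories
    is closer to the observed activity pattern?"""
    da = np.linalg.norm(pattern - centroids[option_a])
    db = np.linalg.norm(pattern - centroids[option_b])
    return option_a if da < db else option_b

# Synthetic "waking" training data for two categories
rng = np.random.default_rng(2)
prototypes = {"car": rng.standard_normal(50), "chair": rng.standard_normal(50)}
labels = np.array(["car"] * 30 + ["chair"] * 30)
patterns = np.stack([prototypes[l] + 0.5 * rng.standard_normal(50) for l in labels])

centroids = train_centroids(patterns, labels)

# A noisy "dream" pattern drawn from the car prototype
dream_pattern = prototypes["car"] + 0.5 * rng.standard_normal(50)
guess = decode_pair(centroids, dream_pattern, "car", "chair")
```

The key move in the study was exactly this transfer: the classifier is trained on waking activity but applied to activity recorded just before waking.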
“Our dream decoding is still very primitive,” Kamitani said.
Decoding color, action, or emotion is also still beyond the scope of the technology, Kamitani says. Also, it only seems to work for imagery that occurred — at most — about 15 seconds before waking up.
Finally, the decoder is unique to each person. To decode the dreams of another person, the team would have to train up a new decoder by having that person view hundreds of images.
Even so, it’s remarkable that it works as well as it does, says neuroscientist Jack Gallant of the University of California, Berkeley and a pioneer of decoding mental states from brain scans. “It took just a huge amount of non-glamorous work to do this, and they deserve big props for that,” Gallant said.
With refinements, Gallant says the method could be useful for studying the nature and function of dreams.
“There’s the classic question of when you dream are you actively generating these movies in your head, or is it that when you wake up you’re essentially confabulating it,” Gallant said. “What this shows you is there’s at least some correspondence between what the brain is doing during dreaming and what it’s doing when you’re awake.”
Kamitani is thinking about the possibilities too. “One theory states that dreaming is for strengthening memory, but another theory states dreaming is for forgetting,” he said. “We could record the frequency of decoded dream contents for each memory item and see the correlation between the frequency and the memory performance.”
Intuition results from training
A game of Japanese chess reveals how experts develop their capacity for rapid problem-solving
The superior capability of experts to rapidly solve problems depends largely on their intuition, and it has long been known that this is related to experience and training. Although many psychological models relating to the development of intuition have been proposed to explain this phenomenon, none have been validated, and the underlying neural mechanisms remain a mystery.
Keiji Tanaka and colleagues from the Cognitive Brain Mapping Laboratory and Support Unit for Functional Magnetic Resonance Imaging at the RIKEN Brain Science Institute have now shown that activity in the basal ganglia of the brain, which is related to the automatic, rapid information processing or intuition characteristic of experts, develops during the course of training. The work provides a first insight into the neural response of the brain to extended training and hints at ways to improve the efficiency of training experts in industry.
In earlier work, another research team led by Tanaka showed that amateur players of the Japanese chess-like game of shogi plotted their best next-moves consciously using the human brain’s highly developed cerebral cortex. In contrast, they found that in professional players an important part of this process was unconscious or intuitive and had shifted to the head of the caudate nucleus in the basal ganglia, a much older part of the brain. This would leave the cortex free for higher-level strategy, the researchers suggested. Yet it remained unclear as to whether this shift of neural activity was entirely due to training, or dependent to some extent on pre-existing ability.
Tanaka’s most recent experiments involved training 20 novices for 15 weeks in mini-shogi, a simplified version of shogi. After about two weeks and again at the end of the 15-week program, the intuition of the volunteers was tested through their ability to come up with the best next-move to end-phase patterns of mini-shogi games. To ensure the answers were intuitive, each problem was presented for just two seconds and participants had to respond within three seconds. During this process, brain activity was recorded using functional magnetic resonance imaging (fMRI). The researchers found that activity in the caudate nucleus developed over the training period, whereas activity in the cortex remained unchanged.
“This work should open a fruitful interaction between the cognitive psychology of expertise development and biological studies of the basal ganglia,” says Tanaka. “We now would like to elucidate what computations the caudate nucleus conducts in generating the best next-move.”
Cynthia Thompson, a world-renowned researcher on stroke and brain damage, will discuss her groundbreaking research on aphasia and the neurolinguistic systems it affects Feb. 16 at the annual meeting of the American Association for the Advancement of Science (AAAS). An estimated one million Americans suffer from aphasia, which affects their ability to understand and/or produce spoken and/or written language.
For three decades, Thompson has played a crucial role in demonstrating the brain’s plasticity, or ability to change. “Not long ago, the conventional wisdom was that people only could recover language within three months to a year after the onset of stroke,” she says. “Today we know that, with appropriate training, patients can make gains as much as 10 years or more after a stroke.”
Thompson has probably contributed more findings on the effects of brain damage on language processing and the ways the brain and language recover from stroke than any other single researcher. Her particular interest is agrammatic aphasia, which impairs abstract knowledge of grammatical sentence structure and makes sentence production and understanding difficult.
Among the first researchers to use functional magnetic resonance imaging to study recovery from stroke, Thompson found that behavior treatment that focused on improving impaired language processing affects not only the ability to understand and produce language but also brain activity.
She found shifts in neural activity in both cerebral hemispheres associated with recovery, with the greatest recovery seen in undamaged brain regions within the language network engaged by healthy people, albeit regions recruited for various language activities.
"It’s a matter of ‘use it or lose it,’" Thompson says. "The brain has the capacity to learn and relearn throughout life, and it is directly affected by the activities we engage in. Language training that focuses on principles of normal language processing stimulates the recovery of neural networks that support language."
Thompson will discuss research she will conduct as principal investigator of a $12 million National Institutes of Health Clinical Research Center award to study biomarkers of recovery in aphasia.
Working with investigators from a number of universities, Thompson will explore the role blood flow plays in language recovery in chronic stroke patients. In addition, she will conduct cutting-edge, exploratory research using eye tracking to understand how people compute language as they hear it in real time. Eye-tracking techniques have been found to discern subtle problems underlying language deficits in acquired aphasia.
In a landmark 2010 study, she and colleagues discovered two critical variables related to understanding brain damage recovery. They found that stroke not only results in cell death in certain regions of the brain but that it also decreases blood flow (perfusion) to living cells that are adjacent (and sometimes even distant) to the lesion.
Until that study, hypoperfusion (diminished blood flow) was thought only to be associated with acute stroke. Her team also found that greater hypoperfusion led to poorer recovery.
(Source: eurekalert.org)
Secrets of lasting love are hidden inside the brain
Researchers have found that they can spot the signs of a true romance in people embarking on a new relationship by looking at how much their brains light up when they think about their new partner.
The scientists detected distinctive patterns of electrical activity in the brains of volunteers who believed they had recently fallen in love, and found that they could use the scans to predict whether a couple would stay together.
The findings could end the uncertainty of courting by revealing whether a couple are likely to have a long relationship or whether their feelings will fizzle out.
The scans showed that even if someone believed they had fallen in love, the activity of their neurons could suggest whether their feelings were strong enough for them to be with the other person three years later.
Prof Arthur Aron, a social psychologist at Stony Brook University in Long Island, New York, said: “All of those involved in the study felt very intensely in love with their partner and this was reflected in their scans, but there were some subtle indicators that showed how stable those feeling were.
“If that strong feeling was combined with signs that they could regulate emotions, to see the partner positively and deal with conflict, then it seems to be really productive in staying with the person.” The psychologists, whose research was published in the journal Neuroscience Letters, found a number of key parts of the brain were involved.
Using magnetic resonance imaging, the scientists scanned 12 volunteers, seven of whom were women, who had fallen passionately in love and had been with their partner for about a year. As they were scanned, each was shown a picture of their partner and asked to think of memories of them. The participants were also asked to think about and look at pictures of an acquaintance with whom they had no romantic attachment. Three years later, the researchers compared the scans with the outcome of each relationship. Half the relationships had lasted.
The scientists found that the scans of those who were still in relationships had heightened levels of activity, when thinking of their partner, in an area of the brain that produces emotional responses to visual beauty, known as the caudate tail.
These people also had lower levels of activity in the pleasure centres of the brain that relate to addiction and seeking rewards. The scientists say deactivation in this area has been linked to satiety and satisfaction.
Another part of the brain, known as the medial orbitofrontal cortex, was also less active, which the scientists say made those people less critical and judgmental about their partners.
Aron said the research could have a practical application in helping people having relationship problems.
He said: “The brain is so complex that we are still quite a way from being able to very precisely pick out these qualities, but it does allow us to get at what is really going on inside someone aside from what they tell us.
“We may eventually get to a point where we can recognise things that the person doesn’t recognise themselves and we can say that they are not as intensely attached to a person as they think they are.”
Prof Aron added: “This probably facilitates handling the conflicts that inevitably arise when you spend a lot of time with someone. It plays a big part in keeping people together and staying satisfied.”
A fourth area known to modulate mood and self-esteem was less active in those who stayed together, something the scientists think may be linked to people forming stable and intimate bonds.
The psychologists also found they could spot signs of how happy a couple who stayed together would be in the scans taken three years earlier.
Xiaomeng Xu, the lead author of the study at Brown University in Rhode Island, said: “Factors present in the early stages of romantic love seem to play a major role in the development and longevity of the relationship.
“Our data provides preliminary evidence that neural responses in the early stages of romantic love can predict relationship stability and quality up to 40 months later.
“The brain regions involved suggest that reward functions may be predictive for relationship stability.”
Researchers at the University of Pittsburgh School of Medicine and UPMC describe in PLoS ONE how an electrode array sitting on top of the brain enabled a 30-year-old paralyzed man to control the movement of a character on a computer screen in three dimensions with just his thoughts. It also enabled him to move a robot arm to touch a friend’s hand for the first time in the seven years since he was injured in a motorcycle accident.
With brain-computer interface (BCI) technology, the thoughts of Tim Hemmes, who sustained a spinal cord injury that left him unable to move his body below the shoulders, were interpreted by computer algorithms and translated into intended movement of a computer cursor and, later, a robot arm, explained lead investigator Wei Wang, Ph.D., assistant professor, Department of Physical Medicine and Rehabilitation, Pitt School of Medicine.
“When Tim reached out to high-five me with the robotic arm, we knew this technology had the potential to help people who cannot move their own arms achieve greater independence,” said Dr. Wang, reflecting on a memorable scene from September 2011 that was re-told in stories around the world. “It’s very important that we continue this effort to fulfill the promise we saw that day.”
Six weeks before the implantation surgery, the team conducted functional magnetic resonance imaging (fMRI) of Mr. Hemmes’ brain while he watched videos of arm movement. They used that information to place a postage-stamp-size electrocorticography (ECoG) grid of 28 recording electrodes on the surface of the brain region that fMRI showed controlled right arm and hand movement. Wires from the device were tunneled under the skin of his neck to emerge from his chest where they could be connected to computer cables as necessary.
For 12 days at his home and nine days in the research lab, Mr. Hemmes began the testing protocol by watching a virtual arm move, which triggered neural signals that were sensed by the electrodes. Distinct signal patterns for particular observed movements were used to guide the up and down motion of a ball on a computer screen. Soon after mastering movement of the ball in two dimensions, namely up/down and right/left, he was able to also move it in/out with accuracy on a 3-dimensional display.
“During the learning process, the computer helped Tim hit his target smoothly by restricting how far off course the ball could wander,” Dr. Wang said. “We gradually took off the ‘training wheels,’ as we called it, and he was soon doing the tasks by himself with 100 percent brain control.”
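The “training wheels” idea, restricting how far the cursor can stray from the straight path to the target, can be sketched as a corridor clamp. This is an illustrative reconstruction, not the Pitt team’s actual algorithm:

```python
import numpy as np

def constrain_to_corridor(pos, start, target, max_deviation):
    """'Training wheels': project the decoded cursor position back
    toward the straight start->target line whenever it strays more
    than max_deviation from it."""
    pos, start, target = map(np.asarray, (pos, start, target))
    axis = target - start
    axis = axis / np.linalg.norm(axis)
    along = start + np.dot(pos - start, axis) * axis   # on-line component
    off = pos - along                                  # perpendicular error
    dist = np.linalg.norm(off)
    if dist <= max_deviation:
        return pos            # inside the corridor: pass through unchanged
    return along + off * (max_deviation / dist)        # clamp to the wall

# A decoded position 3 units off-axis, with a corridor half-width of 1,
# is pulled back to the corridor wall
clamped = constrain_to_corridor([5.0, 3.0], [0.0, 0.0], [10.0, 0.0], 1.0)
```

Widening `max_deviation` over sessions corresponds to gradually taking the training wheels off until the user is under full brain control.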
The robot arm was developed by Johns Hopkins University’s Applied Physics Laboratory. Currently, Jan Scheuermann, of Whitehall, Pa., is testing another BCI technology at Pitt/UPMC.
What’s Your Fish Thinking?
Studying the links between brain and behavior may have just gotten easier. For the first time, neuroscientists have found a way to watch neurons fire in an independently moving animal. Though the study was done in fish, it may hold clues to how the human brain works.
"This technique will really help us understand how we make sense of the world and why we behave the way we do," says Martin Meyer, a neuroscientist at King’s College London who was not involved in the work.
The study was carried out in zebrafish, a popular animal model because they’re small and easy to breed. More important, zebrafish larvae are transparent, which gives scientists an advantage in identifying the neural circuits that make them tick. Yet, under a typical optical microscope, neurons that are active and firing look much the same as their quieter counterparts. To see what neurons are active and when, neuroscientists have therefore developed a variety of indicators and dyes. For example, when a neuron fires, it is flooded with calcium ions, which can cause some of the dyes to light up.
Still, the approach has limitations. Traditionally, Meyer explains, researchers would immobilize the head or entire body of a zebrafish larva so that they could get a clearer picture of what was happening inside the brain. Even so, they could interpret the activity of only a few neurons over short periods of time. Researchers needed a better way to study the zebrafish brain in real time.
Enter Junichi Nakai of Saitama University’s Brain Science Institute in Japan. He and colleagues selected a glowing marker known as green fluorescent protein (GFP) and linked it to a compound that would light up in the presence of large amounts of calcium. The researchers then inserted the DNA that codes for this marker into the zebrafish genome, tying it to a specific protein found only in neurons. This means that only actively firing neurons would fluoresce, and scientists could track neural activity without applying dye. Because the signal was stronger and clearer, researchers didn’t have to immobilize the larvae.
To test the setup, Nakai and colleagues sent the genetically engineered zebrafish larvae hunting for food. When the larvae see a swimming single-celled animal called a paramecium, they engage in what animal behaviorists call a prey capture response: They turn their heads toward the paramecium, swim at it, and finally eat it.
Using their newly developed imaging system, Nakai and colleagues associated the sight of a moving paramecium and prey capture behavior with the activation of a group of neurons in the optic tectum, the visual center of the zebrafish brain. The neurons pulsed in tandem with the movements of the paramecium—a sudden dart of the one-celled organism caused a bright flash of neural activity in the zebrafish tectum. The tectum went silent if the paramecium stilled. Only moving prey interested the larvae, the team reports today in Current Biology. These particular neurons, Nakai proposes, are part of a specific visual-motor pathway that links the sight of moving prey with swimming behavior.
"It’s a good proof-of-principle study," Meyer says. "The most important thing is that they showed [the technique worked] on freely behaving fish."
Studying the links between brain and behavior may have just gotten easier. For the first time, neuroscientists have found a way to watch neurons fire in an independently moving animal. Though the study was done in fish, it may hold clues to how the human brain works.
"This technique will really help us understand how we make sense of the world and why we behave the way we do," says Martin Meyer, a neuroscientist at King’s College London who was not involved in the work.
The study was carried out in zebrafish, a popular animal model because they’re small and easy to breed. More important, zebrafish larvae are transparent, which gives scientists an advantage in identifying the neural circuits that make them tick. Yet, under a typical optical microscope, neurons that are active and firing look much the same as their quieter counterparts. To see which neurons are active, and when, neuroscientists have therefore developed a variety of indicators and dyes. For example, when a neuron fires, it is flooded with calcium ions, which can cause some of the dyes to light up.
Still, the approach has limitations. Traditionally, Meyer explains, researchers would immobilize the head or entire body of a zebrafish larva so that they could get a clearer picture of what was happening inside the brain. Even so, they could interpret neural activity from only a few neurons at a time, and only over short periods. Researchers needed a better way to study the zebrafish brain in real time.
Enter Junichi Nakai of Saitama University’s Brain Science Institute in Japan. He and colleagues selected a glowing marker known as green fluorescent protein (GFP) and linked it to a calcium-sensing protein, so that the marker lights up in the presence of large amounts of calcium. The researchers then inserted the DNA that codes for this marker into the zebrafish genome, tying its expression to a gene active only in neurons. This means that only actively firing neurons would fluoresce brightly, and scientists could track neural activity without applying dye. Because the signal was stronger and clearer, researchers didn’t have to immobilize the larvae.
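The logic of such an indicator can be sketched in a few lines of code: spikes flood the cell with calcium, calcium decays over time, and fluorescence roughly tracks the calcium level. This is a minimal toy simulation, not the study's method — the firing rate, decay constant, and time step are all illustrative assumptions.

```python
import numpy as np

# Toy sketch of a genetically encoded calcium indicator.
# All parameters here are illustrative assumptions, not values from the study.

rng = np.random.default_rng(0)
dt = 0.01                      # 10 ms time bins
t = np.arange(0, 5, dt)        # 5 seconds of simulated recording

# A neuron firing at random (1 = spike in that time bin)
spikes = (rng.random(t.size) < 0.05).astype(float)

# Each spike floods the cell with calcium, which then decays exponentially;
# the indicator's fluorescence roughly follows that calcium transient.
tau = 0.5                      # assumed indicator decay constant, in seconds
kernel = np.exp(-np.arange(0, 2, dt) / tau)
fluorescence = np.convolve(spikes, kernel)[: t.size]

# Time bins where the neuron fires light up far more than quiet bins
print(fluorescence[spikes > 0].mean() > fluorescence[spikes == 0].mean())
```

The slow decay is why calcium imaging reveals *which* neurons fired and roughly *when*, but blurs the exact spike timing — the glow outlasts the electrical event by hundreds of milliseconds.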
To test the setup, Nakai and colleagues sent the genetically engineered zebrafish larvae hunting for food. When the larvae see a swimming single-celled animal called a paramecium, they engage in what animal behaviorists call a prey capture response: They turn their heads toward the paramecium, swim at it, and finally eat it.
Using their newly developed imaging system, Nakai and colleagues associated the sight of a moving paramecium and prey capture behavior with the activation of a group of neurons in the optic tectum, the visual center of the zebrafish brain. The neurons pulsed in tandem with the movements of the paramecium—a sudden dart of the one-celled organism caused a bright flash of neural activity in the zebrafish tectum. The tectum went silent if the paramecium stilled. Only moving prey interested the larvae, the team reports today in Current Biology. These particular neurons, Nakai proposes, are part of a specific visual-motor pathway that links the sight of moving prey with swimming behavior.
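The "pulsed in tandem" observation amounts to a correlation between prey motion and the tectal signal. Here is a hypothetical sketch of that idea — the prey-speed trace, noise level, and gain are invented for illustration and are not the team's analysis pipeline.

```python
import numpy as np

# Hypothetical illustration: a tectal signal that tracks prey motion.
# Numbers are assumptions, not data from the Current Biology study.

dt = 0.1
t = np.arange(0, 30, dt)

# Prey speed: the paramecium sits still for 15 s, then darts about
prey_speed = np.where(t < 15, 0.0, 1.0)

# A motion-selective fluorescence signal: silent when prey is still,
# bright when it moves, plus a little recording noise
rng = np.random.default_rng(1)
tectum = 2.0 * prey_speed + 0.05 * rng.standard_normal(t.size)

# "Pulsing in tandem" shows up as a strong speed/signal correlation
r = np.corrcoef(prey_speed, tectum)[0, 1]
print(r > 0.9)
```

A motion-selective neuron would show a correlation near 1 in this toy setup, while a neuron indifferent to prey movement would hover near 0 — which is how "only moving prey interested the larvae" can be made quantitative.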
"It’s a good proof of principle study," Meyer says. "The most important thing is that they showed [the technique worked] on freely behaving fish."