Neuroscience

Articles and news from the latest research reports.

142 notes

A New Brain-Based Marker of Stress Susceptibility

Some people can handle stressful situations better than others, and it’s not all in their genes: Even identical twins show differences in how they respond.

(Image: iStockphoto)

Researchers have identified a specific electrical pattern in the brains of genetically identical mice that predicts how well individual animals will fare in stressful situations.

The findings, published July 29 in Nature Communications, may eventually help researchers prevent potential consequences of chronic stress — such as post-traumatic stress disorder, depression and other psychiatric disorders — in people who are prone to these problems.

“In soldiers, we have this dramatic, major stress exposure, and in some individuals it’s leading to major issues, such as problems sleeping or being around other people,” said senior author Kafui Dzirasa, M.D., Ph.D., an assistant professor of psychiatry and behavioral sciences at Duke University Medical Center and a member of the Duke Institute for Brain Sciences. “If we can find that common trigger or common pathway and tune it, we may be able to prevent the emergence of a range of mental illnesses down the line.”

In the new study, Dzirasa’s team analyzed the interaction between two interconnected brain areas that control fear and stress responses in both mice and men: the prefrontal cortex and the amygdala. The amygdala plays a role in the ‘fight-or-flight’ response. The prefrontal cortex is involved in planning and other higher-level functions. It suppresses the amygdala’s reactivity to danger and helps people continue to function in stressful situations.

Implanting electrodes into the brains of the mice allowed the researchers to listen in on the tempo at which the prefrontal cortex and the amygdala were firing and on how tightly the two areas were linked — with the ultimate goal of determining whether this electrical pattern of cross talk could predict how well animals would respond when faced with an acute stressor.
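
A rough sense of what such a coupling measure looks like in code can be sketched with generic signal-processing tools. The sketch below is illustrative only: it uses synthetic signals and scipy's coherence function, not the analysis pipeline used in the study, and all variable names and parameters are assumptions.

```python
import numpy as np
from scipy.signal import coherence

fs = 1000  # sampling rate in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)

# Synthetic local field potentials: a shared 4-8 Hz component plus noise,
# standing in for prefrontal cortex (pfc) and amygdala (amy) recordings.
rng = np.random.default_rng(0)
shared = np.sin(2 * np.pi * 6 * t)
pfc = shared + 0.5 * rng.standard_normal(t.size)
amy = 0.8 * shared + 0.5 * rng.standard_normal(t.size)

# Magnitude-squared coherence quantifies how tightly the two signals are
# linked at each frequency (0 = independent, 1 = perfectly coupled).
f, cxy = coherence(pfc, amy, fs=fs, nperseg=2048)
band = (f >= 4) & (f <= 8)
print(f"Mean 4-8 Hz coherence: {cxy[band].mean():.2f}")
```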

Indeed, in mice that had been subjected to a chronically stressful situation — daily exposure to an aggressive male mouse for about two weeks — the degree to which the prefrontal cortex seemed to control amygdala activity was related to how well the animals coped with the stress, the group found.

Next the group looked at how the brain reacted to the first instance of stress, before the mice were put in a chronically stressful situation. The mice more sensitive to chronic stress showed greater activation of their prefrontal cortex-amygdala circuit, compared with resilient mice.

“We were really both surprised and excited to find that this signature was present in the animals before they were chronically stressed,” Dzirasa said. “You can find this signature the very first time they were ever exposed to this aggressive dangerous experience.”

Dzirasa hopes to use the signatures to come up with potential treatments for stress. “If we pair the signatures and treatments together, can we prevent symptoms from emerging, even when an animal is stressed? That’s the first question,” he said.

The group also hopes to delve further into the brain, to see whether the circuit-level patterns can interact with genetic variations that confer risk for psychiatric disorders such as schizophrenia. The new study will enable Dzirasa and other basic researchers to segregate stress-susceptible and resilient animals before they are subjected to stress and look at their molecular, cellular and systemic differences.

(Source: today.duke.edu)

Filed under chronic stress stress prefrontal cortex amygdala neuroscience science

264 notes

Social origins of intelligence in the brain

By studying the injuries and aptitudes of Vietnam War veterans who suffered penetrating head wounds during the war, scientists are tackling — and beginning to answer — longstanding questions about how the brain works.

The researchers found that brain regions that contribute to optimal social functioning also are vital to general intelligence and to emotional intelligence. This finding bolsters the view that general intelligence emerges from the emotional and social context of one’s life.

The findings are reported in the journal Brain.

“We are trying to understand the nature of general intelligence and to what extent our intellectual abilities are grounded in social cognitive abilities,” said Aron Barbey, a University of Illinois professor of neuroscience, of psychology, and of speech and hearing science. Barbey (bar-BAY), an affiliate of the Beckman Institute and of the Institute for Genomic Biology at the U. of I., led the new study with an international team of collaborators.

Studies in social psychology indicate that human intellectual functions originate from the social context of everyday life, Barbey said.

“We depend at an early stage of our development on social relationships — those who love us care for us when we would otherwise be helpless,” he said.

Social interdependence continues into adulthood and remains important throughout the lifespan, Barbey said.

“Our friends and family tell us when we could make bad mistakes and sometimes rescue us when we do,” he said. “And so the idea is that the ability to establish social relationships and to navigate the social world is not secondary to a more general cognitive capacity for intellectual function, but that it may be the other way around. Intelligence may originate from the central role of relationships in human life and therefore may be tied to social and emotional capacities.”

The study involved 144 Vietnam veterans injured by shrapnel or bullets that penetrated the skull, damaging distinct brain tissues while leaving neighboring tissues intact. Using CT scans, the scientists painstakingly mapped the affected brain regions of each participant, then pooled the data to build a collective map of the brain.

The researchers used a battery of carefully designed tests to assess participants’ intellectual, emotional and social capabilities. They then looked for patterns that tied damage to specific brain regions to deficits in the participants’ ability to navigate the intellectual, emotional or social realms. Social problem solving in this analysis primarily involved conflict resolution with friends, family and peers at work.
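
One standard way to tie damage at a brain location to a behavioral deficit is a voxel-wise group comparison, often called lesion-symptom mapping. The toy sketch below assumes a binary lesion map per participant and a single social problem-solving score; it is a simplified illustration of the general approach, not the statistical procedure reported in the paper.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_participants, n_voxels = 144, 500   # sizes are illustrative

# Hypothetical inputs: lesion[i, v] is True if voxel v is damaged in
# participant i; score[i] is that participant's problem-solving score.
lesion = rng.random((n_participants, n_voxels)) < 0.1
score = rng.standard_normal(n_participants)

# For each voxel, compare scores of lesioned vs. intact participants.
p_values = np.ones(n_voxels)
for v in range(n_voxels):
    damaged, intact = score[lesion[:, v]], score[~lesion[:, v]]
    if damaged.size >= 5 and intact.size >= 5:   # skip rarely lesioned voxels
        p_values[v] = ttest_ind(damaged, intact).pvalue

print("Voxels with uncorrected p < 0.001:", int(np.sum(p_values < 0.001)))
```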

As in their earlier studies of general intelligence and emotional intelligence, the researchers found that regions of the frontal cortex (at the front of the brain), the parietal cortex (further back near the top of the head) and the temporal lobes (on the sides of the head behind the ears) are all implicated in social problem solving. The regions that contributed to social functioning in the parietal and temporal lobes were located only in the brain’s left hemisphere, while both left and right frontal lobes were involved.

The brain networks found to be important to social adeptness were not identical to those that contribute to general intelligence or emotional intelligence, but there was significant overlap, Barbey said.

“The evidence suggests that there’s an integrated information-processing architecture in the brain, that social problem solving depends upon mechanisms that are engaged for general intelligence and emotional intelligence,” he said. “This is consistent with the idea that intelligence depends to a large extent on social and emotional abilities, and we should think about intelligence in an integrated fashion rather than making a clear distinction between cognition and emotion and social processing. This makes sense because our lives are fundamentally social — we direct most of our efforts to understanding others and resolving social conflict. And our study suggests that the architecture of intelligence in the brain may be fundamentally social, too.”

(Source: news.illinois.edu)

Filed under intelligence social intelligence social interaction frontal lobe neuroscience science

135 notes

At last, hope for ALS patients?

U of T researchers have found a missing link that helps to explain how ALS, one of the world’s most feared diseases, paralyses and ultimately kills its victims. The breakthrough is helping them trace a path to a treatment or even a cure.

“ALS research has been taking baby steps for decades, but this has recently started changing to giant leaps,” said Karim Mekhail, professor in the Faculty of Medicine’s Department of Laboratory Medicine and Pathobiology.  “The disease is linked to a large number of genes, and previously, every time someone studied one of them, it took them off in a different direction. Nobody could figure out how they were all connected.”

Mekhail and his team discovered the function of a crucial gene called PBP1 or ATAXIN2 that’s often missing in ALS, also known as Lou Gehrig’s Disease.  Learning how this gene functions has helped them connect a lot of dots.

“This is an extremely important finding that may help us to better understand and target the pathways involved in neurodegenerative disease,” said Lorne Zinman, professor of medicine at U of T and medical director of the ALS/Neuromuscular Clinic at Sunnybrook Health Sciences Centre. “The next step will be to determine if this finding is applicable to patients with ALS.”

The key lies in a peculiarity of the human genome. It starts with the DNA, the blueprint that contains all our genetic information. The DNA passes its information to the RNA, which floats off to make proteins that help run our bodies. However, without ATAXIN2, the RNA fails to float away. It becomes glued to the DNA and forms RNA-DNA hybrids, said Mekhail. These hybrids gum up the works and stop other RNA from fully forming. Pieces of half-created RNA debris clutter the cell, and cause more hybrids.

“We think the debris and hybrids are on the same team in a never-ending Olympic relay race,” said Mekhail. “Over time there’s a vicious cycle building up. If we can find a way to disrupt that cycle, theoretically we can control or reverse the disease.”

On that front, Mekhail made a very surprising discovery: reducing calories to the minimum necessary amount stops the hybrids from forming in cells missing ATAXIN2. He and his team are studying whether a simple, non-toxic dietary restriction could be used with ALS patients, especially in the early stages or for those at risk of ALS.

Mekhail discovered the hybrids and missing genes in yeast cells and his results were published as the cover article for the July 28 edition of the journal Developmental Cell. Now his team is replicating this research on tissue from ALS patients – with very encouraging preliminary results.

“Within the next decade or two, I think there’s going to be a revolution in treatment for ALS and all kinds of brain disease,” he said. “These hybrids are going to play a role not just in ALS but in a lot of disease.”

(Source: media.utoronto.ca)

Filed under ALS Lou Gehrig’s disease ataxin2 yeast caloric restriction neuroscience science

151 notes

(Image caption: An abnormal protein, left, is intercepted by the UW’s compound that can bind to the toxic protein and neutralize it, as shown at right. Image courtesy: University of Washington)

New protein structure could help treat Alzheimer’s, related diseases

There is no cure for Alzheimer’s disease and other forms of dementia, but the research community is one step closer to finding treatment.

University of Washington bioengineers have designed a peptide structure that can stop the body’s normal proteins from converting into a harmful state linked to widespread diseases such as Alzheimer’s, Parkinson’s, heart disease, Type 2 diabetes and Lou Gehrig’s disease. The synthetic molecule blocks these proteins as they shift from their normal state into an abnormally folded form by targeting a toxic intermediate phase.

The discovery of a protein blocker could lead to ways to diagnose and even treat a large swath of diseases that are hard to pin down and rarely have a cure.

“If you can truly catch and neutralize the toxic version of these proteins, then you hopefully never get any further damage in the body,” said senior author Valerie Daggett, a UW professor of bioengineering. “What’s critical with this and what has never been done before is that a single peptide sequence will work against the toxic versions of a number of different amyloid proteins and peptides, regardless of their amino acid sequence or the normal 3-D structures.”

The findings were published online this month in the journal eLife.

More than 40 illnesses known as amyloid diseases – Alzheimer’s, Parkinson’s and rheumatoid arthritis are a few – are linked to the buildup of proteins after they have transformed from their normally folded, biologically active forms to abnormally folded, grouped deposits called fibrils or plaques. This happens naturally as we age, to a certain extent – our bodies don’t break down proteins as quickly as they should, causing higher concentrations in some parts of the body.

Each amyloid disease has a unique, abnormally folded protein or peptide structure, but often such diseases are misdiagnosed because symptoms can be similar and pinpointing which protein is present usually isn’t done until after death, in an autopsy.

As a result, many dementias are broadly diagnosed as Alzheimer’s disease without definitive proof, and other diseases can go undiagnosed and untreated.

The molecular structure of an amyloid protein can be only slightly different from a normal protein and can transform to a toxic state fairly easily, which is why amyloid diseases are so prevalent. The researchers built a protein structure, called “alpha sheet,” that complements the toxic structure of amyloid proteins that they discovered in computer simulations. The alpha sheet effectively attacks the toxic middle state the protein goes through as it transitions from normal to abnormal.

The structures could be tailored even further to bind specifically with the proteins in certain diseases, which could be useful for specific therapies.

The researchers hope their designed compounds could be used as diagnostics for amyloid diseases and as drugs to treat the diseases or at least slow progression.

“For example, patients could have a broad first-pass test done to see if they have an amyloid disease and then drill down further to determine which proteins are present to identify the specific disease,” Daggett said.

Filed under alzheimer's disease fibrils peptides alpha sheet amyloid proteins neuroscience science

275 notes

Learning the smell of fear: Mothers teach babies their own fears via odor

Babies can learn what to fear in the first days of life just by smelling the odor of their distressed mothers, new research suggests. And not just “natural” fears: If a mother experienced something before pregnancy that made her fear something specific, her baby will quickly learn to fear it too — through the odor she gives off when she feels fear.

In the first direct observation of this kind of fear transmission, a team of University of Michigan Medical School and New York University researchers studied mother rats that had learned to fear the smell of peppermint – and showed how they “taught” this fear to their babies in their first days of life through the alarm odor the mothers released during distress.

In a new paper in the Proceedings of the National Academy of Sciences, the team reports how they pinpointed the specific area of the brain where this fear transmission takes root in the earliest days of life.

Their findings in animals may help explain a phenomenon that has puzzled mental health experts for generations: how a mother’s traumatic experience can affect her children in profound ways, even when it happened long before they were born. 

The researchers also hope their work will lead to better understanding of why not all children of traumatized mothers, or of mothers with major phobias, other anxiety disorders or major depression, experience the same effects.

“During the early days of an infant rat’s life, they are immune to learning information about environmental dangers. But if their mother is the source of threat information, we have shown they can learn from her and produce lasting memories,” says Jacek Debiec, M.D., Ph.D., the U-M psychiatrist and neuroscientist who led the research.  

“Our research demonstrates that infants can learn from maternal expression of fear, very early in life,” he adds. “Before they can even make their own experiences, they basically acquire their mothers’ experiences. Most importantly, these maternally-transmitted memories are long-lived, whereas other types of infant learning, if not repeated, rapidly perish.”

Peering inside the fearful brain

Debiec, who treats children and mothers with anxiety and other conditions in the U-M Department of Psychiatry, notes that the research on rats allows scientists to see what’s going on inside the brain during fear transmission, in ways they could never do in humans.

He began the research during his fellowship at NYU with Regina Marie Sullivan, Ph.D., senior author of the new paper, and continues it in his new lab at U-M’s Molecular and Behavioral Neuroscience Institute.

The researchers taught female rats to fear the smell of peppermint by exposing them to mild, unpleasant electric shocks while they smelled the scent, before they were pregnant. Then after they gave birth, the team exposed the mothers to just the minty smell, without the shocks, to provoke the fear response. They also used a comparison group of female rats that didn’t fear peppermint.

They exposed the pups of both groups of mothers to the peppermint smell, under many different conditions with and without their mothers present.

Using special brain imaging, and studies of genetic activity in individual brain cells and cortisol in the blood, they zeroed in on a brain structure called the lateral amygdala as the key location for learning fears. During later life, this area is key to detecting and planning responses to threats – so it makes sense that it would also be the hub for learning new fears.

But the fact that these fears could be learned in a way that lasted, during a time when the baby rat’s ability to learn any fears directly was naturally suppressed, is what makes the new findings so interesting, says Debiec.

The team even showed that the newborns could learn their mothers’ fears even when the mothers weren’t present. Just the piped-in scent of their mother reacting to the peppermint odor she feared was enough to make them fear the same thing.

Even when just the odor of the frightened mother was piped in to a chamber where baby rats were exposed to peppermint smell, the babies developed a fear of the same smell, and their blood cortisol levels rose when they smelled it.

And when the researchers gave the baby rats a substance that blocked activity in the amygdala, they failed to learn the fear of peppermint smell from their mothers. This suggests, Debiec says, that there may be ways to intervene to prevent children from learning irrational or harmful fear responses from their mothers, or reduce their impact.

From animals to humans: next steps

The new research builds on what scientists have learned over time about the fear circuitry in the brain, and what can go wrong with it. That work has helped psychiatrists develop new treatments for human patients with phobias and other anxiety disorders – for instance, exposure therapy that helps them overcome fears by gradually confronting the thing or experience that causes their fear.

In much the same way, Debiec hopes that exploring the roots of fear in infancy, and how maternal trauma can affect subsequent generations, could help human patients. While it’s too soon to know if the same odor-based effect happens between human mothers and babies, the role of a mother’s scent in calming human babies has been shown.

Debiec, who hails from Poland, recalls working with the grown children of Holocaust survivors, who experienced nightmares, avoidance instincts and even flashbacks related to traumatic experiences they never had themselves. While they would have learned about the Holocaust from their parents, this deeply ingrained fear suggests something more at work, he says.

(Source: uofmhealth.org)

Filed under fear transmission fear amygdala corticosterone olfaction neuroscience science

273 notes

Glucose ‘control switch’ in the brain key to both types of diabetes

Researchers at Yale School of Medicine have pinpointed a mechanism in part of the brain that is key to sensing glucose levels in the blood, linking it to both type 1 and type 2 diabetes. The findings are published in the July 28 issue of the Proceedings of the National Academy of Sciences.

“We’ve discovered that the prolyl endopeptidase enzyme — located in a part of the hypothalamus known as the ventromedial nucleus — sets a series of steps in motion that control glucose levels in the blood,” said lead author Sabrina Diano, professor in the Departments of Obstetrics, Gynecology & Reproductive Sciences, Comparative Medicine, and Neurobiology at Yale School of Medicine. “Our findings could eventually lead to new treatments for diabetes.”

The ventromedial nucleus contains cells that are glucose sensors. To understand the role of prolyl endopeptidase in this part of the brain, the team used mice that were genetically engineered with low levels of this enzyme. They found that in the absence of this enzyme, mice had high levels of glucose in the blood and became diabetic.

Diano and her team discovered that this enzyme is important because it makes the neurons in this part of the brain sensitive to glucose. The neurons sense the increase in glucose levels and then tell the pancreas to release insulin, which is the hormone that maintains a steady level of glucose in the blood, preventing diabetes.

“Because of the low levels of endopeptidase, the neurons were no longer sensitive to increased glucose levels and could not control the release of insulin from the pancreas, and the mice developed diabetes,” said Diano, who is also a member of the Yale Program in Integrative Cell Signaling and Neurobiology of Metabolism.

Diano said the next step in this research is to identify the targets of this enzyme by understanding how the enzyme makes the neurons sense changes in glucose levels. “If we succeed in doing this, we could be able to regulate the secretion of insulin, and be able to prevent and treat type 2 diabetes,” she said.

Filed under glucose diabetes ventromedial nucleus endopeptidase insulin medicine science

252 notes

The bit of your brain that signals how bad things could be

An evolutionarily ancient and tiny part of the brain tracks expectations about nasty events, finds new UCL research.

The study, published in Proceedings of the National Academy of Sciences, demonstrates for the first time that the human habenula, half the size of a pea, tracks predictions about negative events, like painful electric shocks, suggesting a role in learning from bad experiences.

Brain scans from 23 healthy volunteers showed that the habenula activates in response to pictures associated with painful electric shocks, with the opposite occurring for pictures that predicted winning money.

Previous studies in animals have found that habenula activity leads to avoidance as it suppresses dopamine, a brain chemical that drives motivation. In animals, habenula cells have been found to fire when bad things happen or are anticipated.

"The habenula tracks our experiences, responding more the worse something is expected to be," says senior author Dr Jonathan Roiser of the UCL Institute of Cognitive Neuroscience. "For example, the habenula responds much more strongly when an electric shock is almost certain than when it is unlikely. In this study we showed that the habenula doesn’t just express whether something leads to negative events or not; it signals quite how much bad outcomes are expected."

During the experiment, healthy volunteers were placed inside a functional magnetic resonance imaging (fMRI) scanner, and brain images were collected at high resolution because the habenula is so small. Volunteers were shown a random sequence of pictures, each followed by a set chance of a good or bad outcome, and occasionally pressed a button simply to show they were paying attention. Habenula activation tracked the changing expectation of bad and good events.
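
The “changing expectation” being tracked can be illustrated with a toy trial-by-trial learner that nudges its estimate toward each outcome. This is only a schematic of the idea, not the computational model used in the study; the learning rate and trial sequence below are made up.

```python
# Toy delta-rule tracker of shock expectation across trials (illustrative only).
def update_expectation(trials, alpha=0.2):
    """trials: sequence of 1 (shock) / 0 (no shock); alpha: learning rate."""
    expectation = 0.5          # start uncertain
    history = []
    for outcome in trials:
        expectation += alpha * (outcome - expectation)  # prediction-error update
        history.append(round(expectation, 2))
    return history

# A picture that predicts shock most of the time: expectation climbs toward 0.8.
print(update_expectation([1, 1, 0, 1, 1, 1, 0, 1]))
```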

"Fascinatingly, people were slower to press the button when the picture was associated with getting shocked, even though their response had no bearing on the outcome." says lead author Dr Rebecca Lawson, also at the UCL Institute of Cognitive Neuroscience. "Furthermore, the slower people responded, the more reliably their habenula tracked associations with shocks. This demonstrates a crucial link between the habenula and motivated behaviour, which may be the result of dopamine suppression."

The habenula has previously been linked to depression, and this study shows how it could be involved in causing symptoms such as low motivation, pessimism and a focus on negative experiences. A hyperactive habenula could cause people to make disproportionately negative predictions.

"Other work shows that ketamine, which has profound and immediate benefits in patients who failed to respond to standard antidepressant medication, specifically dampens down habenula activity," says Dr Roiser. "Therefore, understanding the habenula could help us to develop better treatments for treatment-resistant depression."

(Source: eurekalert.org)

Filed under habenula negative events dopamine ketamine experiences neuroscience science

318 notes

Memory relies on astrocytes, the brain’s lesser known cells

When you’re expecting something—like the meal you’ve ordered at a restaurant—or when something captures your interest, unique electrical rhythms sweep through your brain.

These waves are called gamma oscillations and they reflect a symphony of cells—both excitatory and inhibitory—playing together in an orchestrated way. Though their role has been debated, gamma waves have been associated with higher-level brain function, and disturbances in the patterns have been tied to schizophrenia, Alzheimer’s disease, autism, epilepsy and other disorders.

Now, new research from the Salk Institute shows that astrocytes, little-known supportive cells in the brain, may in fact be major players that control these waves.

In a study published July 28 in the Proceedings of the National Academy of Sciences, Salk researchers report a new, unexpected strategy to turn down gamma oscillations, by disabling not neurons but astrocytes—a cell type traditionally thought to play more of a support role in the brain. In the process, the team showed that astrocytes, and the gamma oscillations they help shape, are critical for some forms of memory.

"This is what could be called a smoking gun," says co-author Terrence Sejnowski, head of the Computational Neurobiology Laboratory at the Salk Institute for Biological Sciences and a Howard Hughes Medical Institute investigator. "There are hundreds of papers linking gamma oscillations with attention and memory, but they are all correlational. This is the first time we have been able to do a causal experiment, where we selectively block gamma oscillations and show that it has a highly specific impact on how the brain interacts with the world."

A collaboration among the labs of Salk professors Sejnowski, Inder Verma and Stephen Heinemann found that activity in the form of calcium signaling in astrocytes immediately preceded gamma oscillations in the brains of mice. This suggested that astrocytes, which use many of the same chemical signals as neurons, could be influencing these oscillations.

To test their theory, the group used a virus carrying tetanus toxin to selectively disable the release of chemical signals from astrocytes, effectively eliminating the cells’ ability to communicate with neighboring cells. Neurons were unaffected by the toxin.

After adding a chemical to trigger gamma waves in the animals’ brains, the researchers found that brain tissue with disabled astrocytes produced shorter gamma waves than in tissue containing healthy cells. And after adding three genes that would allow the researchers to selectively turn on and off the tetanus toxin in astrocytes at will, they found that gamma waves were dampened in mice whose astrocytes were blocked from signaling. Turning off the toxin reversed this effect.
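
For readers unfamiliar with how gamma-band activity is quantified, a common generic approach is to band-pass a recorded trace in the gamma range and measure its envelope. The sketch below uses a synthetic signal and standard scipy filters; the band limits and all parameters are illustrative assumptions, not the analysis used in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000                         # sampling rate in Hz (illustrative)
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(2)
# Synthetic trace: a 40 Hz burst during the first second, plus noise.
lfp = np.sin(2 * np.pi * 40 * t) * (t < 1) + 0.5 * rng.standard_normal(t.size)

# Band-pass the trace in the gamma range (here 30-80 Hz) and take its envelope.
b, a = butter(4, [30, 80], btype="bandpass", fs=fs)
gamma = filtfilt(b, a, lfp)
envelope = np.abs(hilbert(gamma))

print(f"Mean gamma envelope, first second:  {envelope[t < 1].mean():.2f}")
print(f"Mean gamma envelope, second second: {envelope[t >= 1].mean():.2f}")
```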

The mice with the modified astrocytes seemed perfectly healthy. But after several cognitive tests, the researchers found that they failed in one major area: novel object recognition. A healthy mouse spent more time with a new item placed in its environment than it did with familiar items, as expected.

In contrast, the group’s new mutant mouse treated all objects the same. “That turned out to be a spectacular result in the sense that novel object recognition memory was not just impaired, it was gone—as if we were deleting this one form of memory, leaving others intact,” Sejnowski says.

The results were surprising, in part because astrocytes operate on a seconds- or longer timescale whereas neurons signal far faster, on the millisecond scale. Because of that slower speed, no one suspected astrocytes were involved in the high-speed brain activity needed to make quick decisions.

"What I thought quite unique was the idea that astrocytes, traditionally considered only guardians and supporters of neurons and other cells, are also involved in the processing of information and in other cognitive behavior," says Verma, a professor in the Laboratory of Genetics and American Cancer Society Professor.

It’s not that astrocytes are quick—they’re still slower than neurons. But the new evidence suggests that astrocytes are actively supplying the right environment for gamma waves to occur, which in turn makes the brain more likely to learn and change the strength of its neuronal connections.

Sejnowski says that the behavioral result is just the tip of the iceberg. “The recognition system is hugely important,” he says, adding that it includes recognizing other people, places, facts and things that happened in the past. With this new discovery, scientists can begin to better understand the role of gamma waves in recognition memory, he adds.

Filed under astrocytes memory gamma oscillations neuroscience science

77 notes

Scientists find 6 new genetic risk factors for Parkinson’s

Using data from over 18,000 patients, scientists have identified more than two dozen genetic risk factors involved in Parkinson’s disease, including six that had not been previously reported. The study, published in Nature Genetics, was partially funded by the National Institutes of Health (NIH) and led by scientists working in NIH laboratories.

"Unraveling the genetic underpinnings of Parkinson’s is vital to understanding the multiple mechanisms involved in this complex disease, and hopefully, may one day lead to effective therapies," said Andrew Singleton, Ph.D., a scientist at the NIH’s National Institute on Aging (NIA) and senior author of the study.

Dr. Singleton and his colleagues collected and combined data from existing genome-wide association studies (GWAS), which allow scientists to find common variants, or subtle differences, in the genetic codes of large groups of individuals. The combined data included approximately 13,708 Parkinson’s disease cases and 95,282 controls, all of European ancestry.

The investigators identified potential genetic risk variants, which increase the chances that a person may develop Parkinson’s disease. Their results suggested that the more variants a person has, the greater the risk of developing the disorder, up to three times higher in some cases.
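
The idea that risk accumulates across variants can be illustrated with a simple additive risk score, in which each risk allele contributes its estimated effect. The numbers below are invented for illustration and do not correspond to the variants or effect sizes reported in the study.

```python
import numpy as np

# Hypothetical per-variant effect sizes (log odds ratios) from a GWAS,
# and one person's count of risk alleles (0, 1, or 2) at each variant.
effect_sizes = np.array([0.10, 0.08, 0.15, 0.05, 0.12])   # illustrative values
risk_alleles = np.array([2, 1, 0, 2, 1])

# A simple additive genetic risk score: more risk alleles -> higher score.
score = np.dot(effect_sizes, risk_alleles)
print(f"Additive risk score (log odds units): {score:.2f}")
```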

"The study brought together a large international group of investigators from both public and private institutions who were interested in sharing data to accelerate the discovery of genetic risk factors for Parkinson’s disease," said Margaret Sutherland, Ph.D., a program director at the National Institute of Neurological Disorders and Stroke (NINDS), part of NIH. "The advantage of this collaborative approach is highlighted in the identification of pathways and gene networks that may significantly increase our understanding of Parkinson’s disease."

To obtain the data, the researchers collaborated with multiple public and private organizations, including the U.S. Department of Defense, the Michael J. Fox Foundation, 23andMe and many international investigators.

Affecting millions of people worldwide, Parkinson’s disease is a degenerative disorder that causes movement problems, including trembling of the hands, arms, or legs, stiffness of limbs and trunk, slowed movements and problems with posture. Over time, patients may have difficulty walking, talking, or completing other simple tasks. Although nine genes have been shown to cause rare forms of Parkinson’s disease, scientists continue to search for genetic risk factors to provide a complete genetic picture of the disorder.

The researchers confirmed the results in another sample of subjects, including 5,353 patients and 5,551 controls. By comparing the genetic regions to sequences on a state-of-the-art gene chip called NeuroX, the researchers confirmed that 24 variants represent genetic risk factors for Parkinson’s disease, including six variants that had not been previously identified. The NeuroX gene chip contains the codes of approximately 24,000 common genetic variants thought to be associated with a broad spectrum of neurodegenerative disorders.

"The replication phase of the study demonstrates the utility of the NeuroX chip for unlocking the secrets of neurodegenerative disorders," said Dr. Sutherland. "The power of these high tech, data-driven genomic methods allows scientists to find the needle in the haystack that may ultimately lead to new treatments."

Some of the newly identified genetic risk factors are thought to be involved in Gaucher’s disease, in regulating inflammation, and in the nerve cell chemical messenger dopamine, as well as in alpha-synuclein, a protein that has been shown to accumulate in the brains of some people with Parkinson’s disease. Further research is needed to determine the roles of the variants identified in this study.

Filed under parkinson's disease GWAS NeuroX genetics neuroscience science

145 notes

(Image caption: Techniques known as dimensionality reduction can help find patterns in the recorded activity of thousands of neurons. Rather than look at all responses at once, these methods find a smaller set of dimensions — in this case three — that capture as much structure in the data as possible. Each trace in these graphics represents the activity of the whole brain during a single presentation of a moving stimulus, and different versions of the analysis capture structure related either to the passage of time (left) or to the direction of the motion (right). The raw data is the same in both cases, but the analyses find different patterns. Credit: Jeremy Freeman, Nikita Vladimirov, Takashi Kawashima, Yu Mu, Nicholas Sofroniew, Davis Bennett, Joshua Rosen, Chao-Tsung Yang, Loren Looger, Philipp Keller, Misha Ahrens)
New Tools Help Neuroscientists Analyze Big Data
In an age of “big data,” a single computer cannot always find the solution a user wants. Computational tasks must instead be distributed across a cluster of computers that analyze a massive data set together. It’s how Facebook and Google mine your web history to present you with targeted ads, and how Amazon and Netflix recommend your next favorite book or movie. But big data is about more than just marketing.
New technologies for monitoring brain activity are generating unprecedented quantities of information. That data may hold new insights into how the brain works – but only if researchers can interpret it. To help make sense of the data, neuroscientists can now harness the power of distributed computing with Thunder, a library of tools developed at the Howard Hughes Medical Institute’s Janelia Research Campus.
Thunder speeds the analysis of data sets that are so large and complex they would take days or weeks to analyze on a single workstation – if a single workstation could do it at all. Janelia group leaders Jeremy Freeman, Misha Ahrens, and other colleagues at Janelia and the University of California, Berkeley, report in the July 27, 2014, issue of the journal Nature Methods that they have used Thunder to quickly find patterns in high-resolution images collected from the brains of active zebrafish and mice with multiple imaging techniques.
Importantly, they have used Thunder to analyze imaging data from a new microscope that Ahrens and colleagues developed to monitor the activity of nearly every individual cell in the brain of a zebrafish as it behaves in response to visual stimuli. That technology is described in a companion paper published in the same issue of Nature Methods.
Thunder can run on a private cluster or on Amazon’s cloud computing services. Researchers can find everything they need to begin using the open source library of tools at http://freeman-lab.github.io/thunder
New microscopes are capturing images of the brain faster, with better spatial resolution, and across wider regions of the brain than ever before. Yet all that detail comes buried in gigabytes or even terabytes of data. On a single workstation, simple calculations can take hours. “For a lot of these data sets, a single machine is just not going to cut it,” Freeman says.
It’s not just the sheer volume of data that exceeds the limits of a single computer, Freeman and Ahrens say, but also its complexity. “When you record information from the brain, you don’t know the best way to get the information that you need out of it. Every data set is different. You have ideas, but whether or not they generate insights is an open question until you actually apply them,” says Ahrens.
Neuroscientists rarely arrive at new insights about the brain the first time they consider their data, he explains. Instead, an initial analysis may hint at a more promising approach, and with a few adjustments and a new computational analysis, the data may begin to look more meaningful. “Being able to apply these analyses quickly — one after the other — is important. Speed gives a researcher more flexibility to explore and get new ideas.”
That’s why trying to analyze neuroscience data with slow computational tools can be so frustrating. “For some analyses, you can load the data, start it running, and then come back the next day,” Freeman says. “But if you need to tweak the analysis and run it again, then you have to wait another night.” For larger data sets, the lag time might be weeks or months.
Distributed computing was an obvious solution to accelerate analysis while exploring the full richness of a data set, but many alternatives are available. Freeman chose to build on a new platform called Spark. Developed at the University of California, Berkeley’s AMPLab, Spark is rapidly becoming a favored tool for large-scale computing across industry, Freeman says. Spark’s data-caching capabilities eliminate the bottleneck of loading a complete data set for all but the initial step, making it well suited for interactive, exploratory analysis and for complex algorithms requiring repeated operations on the same data. And Spark’s elegant and versatile application programming interfaces (APIs) help simplify development. Thunder uses the Python API, which Freeman hopes will make it particularly easy for others to adopt, given Python’s increasing use in neuroscience and data science.
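As a sketch of the underlying pattern rather than Thunder’s own API (which is documented at the link above), the following assumes plain PySpark: a distributed collection of per-neuron time series is cached once and then reused for repeated analyses without reloading. The data layout and names are hypothetical.

```python
import numpy as np
from pyspark import SparkContext

sc = SparkContext("local[*]", "toy-neuro-analysis")

# Hypothetical data: (neuron_id, time series) pairs; in practice these would
# be loaded from disk or cloud storage rather than generated on the driver.
rng = np.random.default_rng(3)
records = [(i, rng.standard_normal(1000)) for i in range(10_000)]
series = sc.parallelize(records).cache()   # cached once, reused below

# First pass: per-neuron mean activity.
means = series.mapValues(lambda ts: float(ts.mean())).collect()

# Second pass over the same cached data: per-neuron variability.
stds = series.mapValues(lambda ts: float(ts.std())).collect()

print(means[0], stds[0])
sc.stop()
```
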
To make Spark suitable for analyzing a broad range of neuroscience data – information about connectivity and activity collected from different organisms and with different techniques – Freeman first developed standardized representations of data that were amenable to distributed computing. He then worked to translate typical neuroscience workflows into the computational language of Spark.
From there, he says, the biological questions that he and his colleagues were curious about drove development. “We started with our questions about the biology, then came up with the analyses and developed the tools,” he says.
The result is a modular set of tools that will expand as the Janelia team — and the neuroscience community — add new components. “The analyses we developed are building blocks,” says Ahrens. “The development of new analyses for interpreting large-scale recording is an active field and goes hand-in-hand with the development of resources for large-scale computing and imaging. The algorithms in our paper are a starting point.”
Using Thunder, Freeman, Ahrens, and their colleagues analyzed images of the brain in minutes, interacting with and revising analyses without the lengthy delays associated with previous methods. In images taken of a mouse brain with a two-photon microscope, for example, the team found cells in the brain whose activity varied with running speed.
For analyzing much larger data sets, tools such as Thunder are not just helpful, they are essential, the scientists say. This is true for the information collected by the new microscope that Ahrens and colleagues developed for monitoring whole-brain activity in response to visual stimuli.
Last year, Ahrens and Janelia group leader Philipp Keller used high-speed light-sheet imaging to engineer a microscope that captures neuronal activity cell by cell across nearly the entire brain of a larval zebrafish. That microscope produced stunning images of neurons in the zebrafish brain firing while the fish was inactive. But Ahrens wanted to use the technology to study the brain’s activity in more complex situations. Now, the team has combined their original technology with a virtual-reality swim simulator that Ahrens previously developed to provide fish with visual feedback that simulates movement.
In a light sheet microscope, a sheet of laser light scans across a sample, illuminating a thin section at a time. To enable a fish in the microscope to see and respond to its virtual-reality environment, Ahrens’ team needed to protect its eyes. So they programmed the laser to quickly shut off when its light sheet approaches the eye and restart once the area is cleared. Then they introduced a second laser that scans the sample from a different angle to ensure that the region of the brain behind the eyes is imaged. Together, the two lasers image the brain with nearly complete coverage without interfering with the animal’s vision.
Combining these two technologies lets Ahrens monitor activity throughout the brain as a fish adjusts its behavior based on the sensory information it receives. The technique generates about a terabyte of data in an hour – presenting a data analysis challenge that helped motivate the development of Thunder. When Freeman and Ahrens applied their new tools to the data, patterns quickly emerged. As examples, they identified cells whose activity was associated with movement in particular directions and cells that fired specifically when the fish was at rest, and were able to characterize the dynamics of those cells’ activities. Example analyses like these, and example data sets, are available at the website http://research.janelia.org/zebrafish/.
Ahrens now plans to explore more complex questions using the new technology, and both he and Freeman foresee expansion of Thunder. “At every level, this is really just the beginning,” Freeman says.

(Image caption: Techniques known as dimensionality reduction can help find patterns in the recorded activity of thousands of neurons. Rather than look at all responses at once, these methods find a smaller set of dimensions — in this case three — that capture as much structure in the data as possible. Each trace in these graphics represents the activity of the whole brain during a single presentation of a moving stimulus, and different versions of the analysis capture structure related either to the passage of time (left) or the direction of the motion (right). The raw data are the same in both cases, but the analyses find different patterns. Credit: Jeremy Freeman, Nikita Vladimirov, Takashi Kawashima, Yu Mu, Nicholas Sofroniew, Davis Bennett, Joshua Rosen, Chao-Tsung Yang, Loren Looger, Philipp Keller, Misha Ahrens)

New Tools Help Neuroscientists Analyze Big Data

In an age of “big data,” a single computer cannot always find the solution a user wants. Computational tasks must instead be distributed across a cluster of computers that analyze a massive data set together. It’s how Facebook and Google mine your web history to present you with targeted ads, and how Amazon and Netflix recommend your next favorite book or movie. But big data is about more than just marketing.

New technologies for monitoring brain activity are generating unprecedented quantities of information. That data may hold new insights into how the brain works – but only if researchers can interpret it. To help make sense of the data, neuroscientists can now harness the power of distributed computing with Thunder, a library of tools developed at the Howard Hughes Medical Institute’s Janelia Research Campus.

Thunder speeds the analysis of data sets that are so large and complex they would take days or weeks to analyze on a single workstation – if a single workstation could do it at all. Janelia group leaders Jeremy Freeman, Misha Ahrens, and other colleagues at Janelia and the University of California, Berkeley, report in the July 27, 2014, issue of the journal Nature Methods that they have used Thunder to quickly find patterns in high-resolution images collected from the brains of active zebrafish and mice with multiple imaging techniques.

Importantly, they have used Thunder to analyze imaging data from a new microscope that Ahrens and colleagues developed to monitor the activity of nearly every individual cell in the brain of a zebrafish as it behaves in response to visual stimuli. That technology is described in a companion paper published in the same issue of Nature Methods.

Thunder can run on a private cluster or on Amazon’s cloud computing services. Researchers can find everything they need to begin using the open source library of tools at http://freeman-lab.github.io/thunder

New microscopes are capturing images of the brain faster, with better spatial resolution, and across wider regions of the brain than ever before. Yet all that detail comes packed into gigabytes or even terabytes of data. On a single workstation, even simple calculations can take hours. “For a lot of these data sets, a single machine is just not going to cut it,” Freeman says.

It’s not just the sheer volume of data that exceeds the limits of a single computer, Freeman and Ahrens say, but also its complexity. “When you record information from the brain, you don’t know the best way to get the information that you need out of it. Every data set is different. You have ideas, but whether or not they generate insights is an open question until you actually apply them,” says Ahrens.

Neuroscientists rarely arrive at new insights about the brain the first time they consider their data, he explains. Instead, an initial analysis may hint at a more promising approach, and with a few adjustments and a new computational analysis, the data may begin to look more meaningful. “Being able to apply these analyses quickly — one after the other — is important. Speed gives a researcher more flexibility to explore and get new ideas.”

That’s why trying to analyze neuroscience data with slow computational tools can be so frustrating. “For some analyses, you can load the data, start it running, and then come back the next day,” Freeman says. “But if you need to tweak the analysis and run it again, then you have to wait another night.” For larger data sets, the lag time might be weeks or months.

Distributed computing was an obvious way to accelerate analysis while still exploring the full richness of a data set, but many platforms were available. Freeman chose to build on a new one called Spark. Developed at the University of California, Berkeley’s AMPLab, Spark is rapidly becoming a favored tool for large-scale computing across industry, Freeman says. Spark’s data-caching capabilities eliminate the bottleneck of loading a complete data set for all but the initial step, making it well suited to interactive, exploratory analysis and to complex algorithms that operate repeatedly on the same data. And Spark’s elegant, versatile application programming interfaces (APIs) simplify development. Thunder uses the Python API, which Freeman hopes will make it particularly easy for others to adopt, given Python’s growing use in neuroscience and data science.
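
To make the caching idea concrete, here is a minimal PySpark sketch (not Thunder's own API) in which the parsed data set is cached in cluster memory after the first pass, so later analyses skip the loading step. The file path and record format are hypothetical placeholders.

    from pyspark import SparkContext

    sc = SparkContext(appName="caching-sketch")

    # Hypothetical input: one text line per voxel, holding a comma-separated time series.
    series = sc.textFile("hdfs:///imaging/session01/*.txt") \
               .map(lambda line: [float(x) for x in line.split(",")])
    series.cache()   # keep the parsed records in memory across jobs

    # First pass: mean activity per voxel (triggers loading and caching).
    means = series.map(lambda ts: sum(ts) / len(ts)).collect()

    # Second pass reuses the cached records instead of re-reading from disk.
    peaks = series.map(lambda ts: max(ts)).collect()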

To make Spark suitable for analyzing a broad range of neuroscience data – information about connectivity and activity collected from different organisms and with different techniques – Freeman first developed standardized representations of the data that were amenable to distributed computing. He then worked to translate typical neuroscience workflows into the computational language of Spark.
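
One plausible way to picture such a standardized representation is to hold imaging data as distributed (coordinate, time series) records, so the same per-voxel operations can run on recordings from any organism or instrument. The array sizes and variable names below are illustrative assumptions, not Thunder's internal format.

    import numpy as np
    from pyspark import SparkContext

    sc = SparkContext(appName="representation-sketch")

    # Tiny pretend recording: 10 time points of a 4 x 5 pixel image.
    stack = np.random.rand(10, 4, 5)

    # One record per pixel: key = (x, y) coordinate, value = that pixel's time series.
    records = sc.parallelize(
        [((x, y), stack[:, x, y]) for x in range(4) for y in range(5)]
    )

    # Any per-voxel computation is now a simple map over the distributed records.
    stdevs = records.mapValues(lambda ts: float(np.std(ts))).collectAsMap()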

From there, he says, the biological questions that he and his colleagues were curious about drove development. “We started with our questions about the biology, then came up with the analyses and developed the tools,” he says.

The result is a modular set of tools that will expand as the Janelia team — and the neuroscience community — add new components. “The analyses we developed are building blocks,” says Ahrens. “The development of new analyses for interpreting large-scale recording is an active field and goes hand-in-hand with the development of resources for large-scale computing and imaging. The algorithms in our paper are a starting point.”

Using Thunder, Freeman, Ahrens, and their colleagues analyzed images of the brain in minutes, interacting with and revising analyses without the lengthy delays associated with previous methods. In images taken of a mouse brain with a two-photon microscope, for example, the team found cells in the brain whose activity varied with running speed.
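
A hedged sketch of that kind of tuning analysis, reusing the (coordinate, time series) records from the sketch above: correlate each voxel's time series with a running-speed trace and keep the strongly correlated voxels. The speed trace and the 0.5 threshold are arbitrary illustrative choices.

    import numpy as np

    def correlate_with(speed):
        speed = np.asarray(speed)
        def corr(ts):
            # Pearson correlation between one voxel's time series and the speed trace.
            return float(np.corrcoef(np.asarray(ts), speed)[0, 1])
        return corr

    running_speed = np.random.rand(10)   # placeholder behavioral trace, one value per time point

    speed_tuned = (records.mapValues(correlate_with(running_speed))
                          .filter(lambda kv: abs(kv[1]) > 0.5)   # keep strongly correlated voxels
                          .collect())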

For analyzing much larger data sets, tools such as Thunder are not just helpful, they are essential, the scientists say. This is true for the information collected by the new microscope that Ahrens and colleagues developed for monitoring whole-brain activity in response to visual stimuli.

Last year, Ahrens and Janelia group leader Philipp Keller engineered a microscope that uses high-speed light-sheet imaging to capture neuronal activity cell by cell across nearly the entire brain of a larval zebrafish. That microscope produced stunning images of neurons in the zebrafish brain firing while the fish was inactive. But Ahrens wanted to use the technology to study the brain’s activity in more complex situations. Now, the team has combined the original microscope with a virtual-reality swim simulator that Ahrens previously developed to provide fish with visual feedback that simulates movement.

In a light-sheet microscope, a sheet of laser light scans across a sample, illuminating one thin section at a time. To let a fish inside the microscope see and respond to its virtual-reality environment, Ahrens’ team needed to protect its eyes. So they programmed the laser to shut off quickly as its light sheet approaches the eye and to switch back on once the beam has passed. They then added a second laser that scans the sample from a different angle, ensuring that the region of the brain behind the eyes is still imaged. Together, the two lasers image the brain with nearly complete coverage without interfering with the animal’s vision.
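
A toy sketch of the gating logic, with made-up coordinates and not the instrument's actual control code, just to show the idea: the front laser is blanked whenever its scanning sheet would overlap the eye, and the second laser covers exactly the region the first one skips.

    # Scan positions (arbitrary units) occupied by the eye.
    EYE_REGION = (120, 180)

    def front_laser_on(sheet_position):
        # Blank the front laser while its sheet would cross the eye.
        return not (EYE_REGION[0] <= sheet_position <= EYE_REGION[1])

    def side_laser_on(sheet_position):
        # The second laser, entering from another angle, images the region
        # behind the eyes that the front laser must skip.
        return EYE_REGION[0] <= sheet_position <= EYE_REGION[1]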

Combining these two technologies lets Ahrens monitor activity throughout the brain as a fish adjusts its behavior based on the sensory information it receives. The technique generates about a terabyte of data in an hour – presenting a data analysis challenge that helped motivate the development of Thunder. When Freeman and Ahrens applied their new tools to the data, patterns quickly emerged. As examples, they identified cells whose activity was associated with movement in particular directions and cells that fired specifically when the fish was at rest, and were able to characterize the dynamics of those cells’ activities. Example analyses like these, and example data sets, are available at the website http://research.janelia.org/zebrafish/.
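
As a rough illustration of the dimensionality reduction described in the figure caption, the sketch below projects a placeholder population-activity matrix onto three principal components, so each moment in time becomes a point along a low-dimensional trace. It uses scikit-learn on a small in-memory array rather than Thunder's distributed implementation.

    import numpy as np
    from sklearn.decomposition import PCA

    # Placeholder activity matrix: 200 time points x 5000 cells.
    activity = np.random.rand(200, 5000)

    pca = PCA(n_components=3)
    trajectory = pca.fit_transform(activity)   # 200 x 3: one low-dimensional trace
    print(pca.explained_variance_ratio_)       # how much structure each dimension captures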

Ahrens now plans to explore more complex questions using the new technology, and both he and Freeman foresee expansion of Thunder. “At every level, this is really just the beginning,” Freeman says.

Filed under brain activity zebrafish Thunder computational analysis neuroscience science
