Neuroscience

Articles and news from the latest research reports.

152 notes

Neurons Get Their Neighbors To Take Out Their Trash

Biologists have long considered cells to function like self-cleaning ovens, chewing up and recycling their own worn-out parts as needed. But a new study challenges that basic principle, showing that some nerve cells in the eye pass off their old energy-producing factories to neighboring support cells to be “eaten.” The finding, which may bear on the roots of glaucoma, also has implications for Parkinson’s, Alzheimer’s, amyotrophic lateral sclerosis (ALS) and other diseases that involve a buildup of “garbage” in brain cells.

The study was led by Nicholas Marsh-Armstrong, Ph.D., a research scientist at the Kennedy Krieger Institute and an associate professor in the Johns Hopkins University School of Medicine’s Solomon H. Snyder Department of Neuroscience, together with Mark H. Ellisman, Ph.D., a neuroscience professor at the University of California, San Diego. In a previous study, the two had seen hints that retinal ganglion cells, which transmit visual information from the eye to the brain, might be handing off bits of themselves to astrocytes, cells that surround and support the eye’s signal-transmitting neurons. The handoff appeared to take place at the optic nerve head, the beginning of the long tendril that connects retinal ganglion cells from the eye to the brain. Specifically, they suspected that the neuronal bits being passed on were mitochondria, known as the powerhouses of the cell.

To find out whether this was really the case, Marsh-Armstrong’s research group genetically modified mice so that they produced indicators that glowed in the presence of chewed-up mitochondria. Ellisman’s group then used cutting-edge electron microscopy to reconstruct 3-D images of what was happening at the optic nerve head. The researchers saw that astrocytes were, indeed, breaking down large numbers of mitochondria from neighboring retinal ganglion cells.

“This was a very surprising study for us, because the findings go against the common understanding that each cell takes care of its own trash,” says Marsh-Armstrong. It is particularly interesting that the newly discovered process occurs at the optic nerve head, he notes, as that is the site thought to be at fault in glaucoma. He plans to investigate whether the mitochondria-disposal process is relevant to this disease, the second leading cause of blindness worldwide.

But the implications of the results go beyond the optic nerve head, Marsh-Armstrong says, as a buildup of “garbage” inside cells contributes to neurodegenerative diseases such as Parkinson’s, Alzheimer’s and ALS. “By showing that this type of alternative disposal happens, we’ve opened up the door for others to investigate whether similar processes might be happening with other cell types and cellular parts other than mitochondria,” he says.


Filed under brain cells retinal ganglion cells mitochondria neurodegenerative diseases astrocytes neuroscience science

145 notes

Groundbreaking model explains how the brain learns to ignore familiar stimuli

A neuroscientist from Trinity College Dublin has proposed a groundbreaking new explanation for ‘habituation’, a fundamental process that has never been completely understood by neuroscientists.

Typically, our response to a stimulus is reduced over time if we are repeatedly exposed to it. This process of habituation enables organisms to identify and selectively ignore irrelevant, familiar objects and events that they encounter again and again. Habituation therefore allows the brain to selectively engage with new stimuli, or those that it ‘knows’ to be relevant. For example, the unusual sensation created by a spider walking over our skin should elicit an appropriate evasive response, but the touch of a shirt or blouse on the same skin should be functionally ignored by the nervous system. If habituation does not occur, then such unimportant stimuli become distracting, which means that complex environments can become overwhelming.

The new perspective on the way habituation occurs has implications for our understanding of neuropsychiatric conditions, because normal habituation, emotional responses and attentional abilities are altered in several of these conditions. In particular, hypersensitivity to complex environments is common in individuals on the autism spectrum.

Habituation has long been recognised as the most fundamental form of learning, but it has never been satisfactorily explained. In a Perspective article just published in the leading international journal Neuron, Mani Ramaswami, Professor of Neurogenetics in the School of Genetics & Microbiology at Trinity, explains habituation through what he terms the ‘negative-image model’. The model proposes that repeated activation of any group of neurons that respond to a given stimulus results in the build-up of ‘negative activation’, which inhibits responses from this same group of cells.

For example, the first view of an unfamiliar and scary face can trigger a fearful response. After multiple exposures, however, the group of neurons activated by the face becomes less effective at activating fear centres because of increased inhibition on this same group of neurons. This matched increase in inhibition (the ‘negative image’), proposed to underlie habituation, is not normally consciously perceived, but it can be revealed under particular conditions. Significantly, a strong response to new faces persists for much longer in people on the autism spectrum.
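The core dynamic of the negative-image model — inhibition that builds up in proportion to a circuit’s own activity until it cancels the response — can be illustrated with a toy simulation (a minimal sketch; the linear update rule and the constants are illustrative assumptions, not values from the Perspective):

```python
# Toy sketch of the 'negative-image' model of habituation.
# Each exposure to the same stimulus strengthens a matched inhibitory
# "negative image" on the responding assembly, shrinking the next response.

def habituate(n_exposures, drive=1.0, learning_rate=0.3):
    """Return the assembly's response on each successive exposure."""
    inhibition = 0.0          # the accumulated 'negative image'
    responses = []
    for _ in range(n_exposures):
        response = max(0.0, drive - inhibition)   # net output of the assembly
        responses.append(response)
        inhibition += learning_rate * response    # inhibition grows with use
    return responses

print([round(r, 2) for r in habituate(5)])  # → [1.0, 0.7, 0.49, 0.34, 0.24]
```

With these constants the response decays geometrically, mirroring the gradual response decrement that defines habituation. A novel stimulus, encoded by a different assembly, carries no accumulated inhibition and would still elicit a full response; a weaker learning rate, as might be the case in autism on this account, would make the strong response persist across many more exposures.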

Professor Ramaswami said: “This Perspective outlines scalable circuit mechanisms that can account for habituation to stimuli encoded by very small or very large assemblies of neurons. Its strength is its simplicity, its basis in experimental data, and its ability to explain many features of habituation. However, more high-quality studies of habituation mechanisms will be required to establish its generality.”

Shane O’Mara, Professor of Experimental Brain Research at Trinity and Director of the Trinity College Institute of Neuroscience, said: “The arguments and ideas expressed by Professor Ramaswami should lead to additions and changes to our current text-book sections on habituation, which is a process of great relevance to cognition, attention and psychiatric disease. It is possible that highlighting the process of negative image formation as crucial for habituation will prove useful to clinical genetic studies of autism, by helping to place diverse autism susceptibility genes in a common biological pathway.”

(Source: eurekalert.org)

Filed under habituation ASD autism negative-image model neurons neuroscience science

234 notes

Self-repairing mechanism helps to preserve brain function in neurodegenerative diseases

New research, led by scientists at the University of Southampton, has found that neurogenesis, the self-repairing mechanism of the adult brain, can help to preserve brain function in neurodegenerative diseases such as Alzheimer’s, Parkinson’s or prion disease.

The progressive degeneration and death of brain cells, which occurs in many neurodegenerative diseases, is often seen as an unstoppable and irreversible process. However, the brain has some self-repairing potential that accounts for the renewal of certain neuronal populations in the dentate gyrus, a simple cortical region that is part of the hippocampus, the larger functional brain system controlling learning and memory. This process is known as neurogenesis.

While increased neurogenesis has been reported in neurodegenerative diseases in the past, its significance has remained unclear. Now a research team, led by Dr Diego Gomez-Nicola from the Centre for Biological Sciences at the University of Southampton, has detected increased neurogenesis in the dentate gyrus that partially counteracts neuronal loss.

Using a mouse model of prion disease, the researchers identified the time-course of the generation of these newborn neurons and how they integrate into brain circuitry. While this self-repairing mechanism is effective in maintaining some neuronal functions at early and mid-stages of the disease, it fails at more advanced phases. This highlights a temporal window for potential therapeutic intervention aimed at preserving the beneficial effects of enhanced neurogenesis.

Dr Gomez-Nicola says: “This study highlights the latent potential of the brain to orchestrate a self-repairing response. The continuation of this line of research is opening new avenues to identify what specific signals are used to promote this increased neurogenic response, with views focused in targeting neurogenesis as a therapeutic approach to promote the regeneration of lost neurons.”


Filed under neurodegenerative diseases neurogenesis hippocampus dentate gyrus neuroscience science

276 notes

Hippocampal activity during music listening exposes the memory-boosting power of music

For the first time, the hippocampus—a brain structure crucial for creating long-lasting memories—has been observed to be active in response to recurring musical phrases during music listening. The hippocampal involvement in long-term memory may therefore be less specific than previously thought, indicating that short- and long-term memory processes may depend on each other after all.

The study was conducted at the University of Jyväskylä and the AMI Center of Aalto University by a group of researchers led by Academy Professor Petri Toiviainen of the Finnish Centre for Interdisciplinary Music Research (CIMR) at the University of Jyväskylä and Dr. Elvira Brattico of Aalto University and the University of Helsinki. Results of the study were published in Cortex, a journal devoted to the study of the nervous system and behaviour.

“Our study basically shows an increase of activity in the medial temporal lobe areas—best known for being essential for long term memory—when musical motifs in the piece were repeated. This means that the lobe areas are engaged in the short-term recognition of musical phrases,” explains Iballa Burunat, the lead author of the study. Dr. Brattico adds: “Importantly, this hadn’t been observed before in music neuroscience.”

A fundamental highlight of the study is its use of a more natural setting than those traditionally employed in neuroscience: the participants’ only task was to listen attentively to an Argentinian tango from beginning to end. This kind of music provides well-defined, salient musical motifs that are easy to follow and that can be used to study recognition processes in the brain without having to resort to sounds created in a lab. By using this more realistic approach, the researchers were able to identify brain areas involved in motif tracking without having to rely on the participants’ ability to self-report, which would have constrained the study of brain processes.

“We think that our novel method allowed us to uncover this phenomenon. In other words, the identified areas may also be related to the formation of a more permanent memory trace of a musical piece, enabled precisely by the very use of a real-life stimulus (the recording of a live performance) in a realistic situation where participants just listen to the music as their brain responses are recorded,” Iballa Burunat goes on to explain. Listening to the music from beginning to end may have imprinted the participants with a long-lasting memory of the tune. This might not be expected had the participants been exposed to a simpler stimulus under controlled conditions, as is the case in most studies of music and memory.

Although a real-life setting may be sufficient to trigger the involvement of the hippocampus, another explanation could lie in music’s capacity to elicit emotions. “We cannot ignore music’s emotional power which is thought to be crucial for the mnemonic power of music as to how and what we remember. There is evidence on the robust integration of music, memory and emotion—take for instance autobiographical memories. So it wouldn’t be surprising that the emotional content of the music may well have been a factor in triggering these limbic responses,” she continues. This makes sense, since the chosen musical piece by Astor Piazzolla was a tribute to his father after his sudden death, and so the main purpose of the piece was to be of a deeply emotional nature. Certainly, the hippocampus—as part of the limbic system—is connected to neural circuitry involved in emotional behavior, and ongoing research suggests that emotional events seem to be more memorable than neutral ones.

The authors emphasize that these results should motivate similar approaches to studying verbal or visual short-term memory by tracking the themes or repetitive structures of a given stimulus. Moreover, the study has implications for neurodegenerative diseases associated with hippocampal atrophy, such as Alzheimer’s. “Music may positively affect patients if used wisely to stimulate their hippocampi, and thus their memory system,” Academy Professor Petri Toiviainen indicates. A better understanding of the link between music and memory could have widespread repercussions, leading to novel interventions to rehabilitate or improve the quality of life of patients with neurodegenerative conditions.


Filed under music hippocampus working memory neuroimaging neuroscience science

140 notes

Blocking brain’s ‘internal marijuana’ may trigger early Alzheimer’s deficits

A new study led by investigators at the Stanford University School of Medicine has implicated the blocking of endocannabinoids — signaling substances that are the brain’s internal versions of the psychoactive chemicals in marijuana and hashish — in the early pathology of Alzheimer’s disease.

A substance called A-beta — strongly suspected to play a key role in Alzheimer’s because it’s the chief constituent of the hallmark clumps dotting the brains of people with Alzheimer’s — may, in the disease’s earliest stages, impair learning and memory by blocking the natural, beneficial action of endocannabinoids in the brain, the study demonstrates. The Stanford group is now trying to figure out the molecular details of how and where this interference occurs. Pinning down those details could pave the path to new drugs to stave off the defects in learning ability and memory that characterize Alzheimer’s.

In the study, published June 18 in Neuron, researchers analyzed A-beta’s effects on a brain structure known as the hippocampus. In all mammals, this structure, seated deep in the temporal lobe, serves as a combination GPS system and memory-filing assistant, along with other duties.

“The hippocampus tells us where we are in space at any given time,” said Daniel Madison, PhD, associate professor of molecular and cellular physiology and the study’s senior author. “It also processes new experiences so that our memories of them can be stored in other parts of the brain. It’s the filing secretary, not the filing cabinet.”

Surprise finding

Applying electrophysiological techniques to brain slices from rats, Madison and his associates examined a key hippocampal circuit, one of whose chief elements is a class of nerve cells called pyramidal cells. They wanted to see how the circuit’s different elements reacted to small amounts of A-beta, which is produced throughout the body but whose normal physiological functions have until now been ill-defined.

A surprise finding by Madison’s group suggests that in small, physiologically normal concentrations, A-beta tamps down a signal-boosting process that under certain conditions increases the odds that pyramidal nerve cells will transmit information they’ve received to other nerve cells down the line.

When incoming signals to pyramidal cells build to high intensity, the cells adapt by becoming more inclined to fire than they normally are. This phenomenon, which neuroscientists call plasticity, is thought to underpin learning and memory. It ensures that volleys of high-intensity input — such as might accompany falling into a hole, burning one’s finger with a match, suddenly remembering where you buried the treasure or learning for the first time how to spell “cat” — are firmly stored in the brain’s memory vaults and more accessible to retrieval.

These intense bursts of incoming signals are the exception, not the rule. Pyramidal nerve cells constantly receive random beeps and burps from upstream nerve cells — effectively, noise in a highly complex electrochemical signaling system. This calls for some quality control. Pyramidal cells are encouraged to ignore mere noise by another set of “wet blanket” nerve cells called interneurons. Like the proverbial spouse reading a newspaper at the kitchen table, interneurons continuously discourage pyramidal cells’ transmission of impulses to downstream nerve cells by steadily secreting an inhibitory substance — the molecular equivalent of yawning, eye-rolling and oft-muttered suggestions that this or that chatter is really not worth repeating to the world at large, so why not just shut up.

Passing along the message

But when the news is particularly significant, pyramidal cells squirt out their own “no, this is important, you shut up!” chemical — endocannabinoids — which bind to specialized receptors on the hippocampal interneurons, temporarily suppressing them and allowing impulses to continue coursing along the pyramidal cells to their follow-on peers.

A-beta is known to impair pyramidal-cell plasticity. But Madison’s research team showed for the first time how it does so. Small clusters consisting of just a few A-beta molecules render the interneurons’ endocannabinoid receptors powerless, leaving inhibition intact even in the face of important news and thus squashing plasticity.
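The gating logic described above — tonic interneuron inhibition, relieved by endocannabinoids only when an input is strong enough to matter, with A-beta nullifying that relief — can be sketched as a toy model (purely illustrative; the threshold, inhibition level and all-or-none gating are simplifying assumptions, not measurements from the study):

```python
# Toy sketch of the hippocampal gating circuit described in the article.
# Interneurons tonically damp pyramidal output; a strong "important" input
# triggers endocannabinoid release that transiently lifts the inhibition.
# A-beta is modelled as blocking the effect of endocannabinoid signalling.

def pyramidal_output(input_strength, abeta_present=False,
                     tonic_inhibition=0.6, release_threshold=0.8):
    # Strong inputs trigger endocannabinoid release by the pyramidal cell...
    endocannabinoids_released = input_strength >= release_threshold
    # ...which normally suppresses the interneuron -- unless A-beta interferes
    # with what receptor binding would ordinarily generate downstream.
    if endocannabinoids_released and not abeta_present:
        inhibition = 0.0
    else:
        inhibition = tonic_inhibition
    return max(0.0, input_strength - inhibition)

print(round(pyramidal_output(0.3), 2))                      # noise filtered: 0.0
print(round(pyramidal_output(0.9), 2))                      # healthy gate opens: 0.9
print(round(pyramidal_output(0.9, abeta_present=True), 2))  # A-beta: damped to 0.3
```

The sketch captures why the circuit still filters background noise with or without A-beta, but transmits an important signal at full strength only when endocannabinoids can do their job, which is the plasticity-squashing effect the study reports.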
While small A-beta clusters have been known for a decade to be toxic to nerve cells, this toxicity requires relatively long-term exposure, said Madison. The endocannabinoid-nullifying effect the new study revealed is much more transient. A possible physiological role for A-beta in the normal, healthy brain, he said, is that of supplying that organ’s sophisticated circuits with yet another, beneficial layer of discretion in processing information. Madison thinks this normal, everyday A-beta mechanism run wild may represent an entry point to the progressive and destructive stages of Alzheimer’s disease.

Exactly how A-beta blocks endocannabinoids’ action is not yet known. But, Madison’s group demonstrated, A-beta doesn’t stop them from reaching and binding to their receptors on interneurons. Rather, it interferes with something that binding ordinarily generates. (By analogy, turning the key in your car’s ignition switch won’t do much good if your battery is dead.)

Madison said it would be wildly off the mark to assume that, just because A-beta interferes with a valuable neurophysiological process mediated by endocannabinoids, smoking pot would be a great way to counter or prevent A-beta’s nefarious effects on memory and learning ability. Smoking or ingesting marijuana results in long-acting inhibition of interneurons by the herb’s active chemical, tetrahydrocannabinol. That is vastly different from short-acting endocannabinoid bursts precisely timed to occur only when a signal is truly worthy of attention.

“Endocannabinoids in the brain are very transient and act only when important inputs come in,” said Madison, who is also a member of the interdisciplinary Stanford Bio-X institute. “Exposure to marijuana over minutes or hours is different: more like enhancing everything indiscriminately, so you lose the filtering effect. It’s like listening to five radio stations at once.”

Besides, flooding the brain with external cannabinoids induces tolerance — it may reduce the number of endocannabinoid receptors on interneurons, impeding endocannabinoids’ ability to do their crucial job of opening the gates of learning and memory.

Blocking brain’s ‘internal marijuana’ may trigger early Alzheimer’s deficits

A new study led by investigators at the Stanford University School of Medicine has implicated the blocking of endocannabinoids — signaling substances that are the brain’s internal versions of the psychoactive chemicals in marijuana and hashish — in the early pathology of Alzheimer’s disease.

A substance called A-beta — strongly suspected to play a key role in Alzheimer’s because it’s the chief constituent of the hallmark clumps dotting the brains of people with Alzheimer’s — may, in the disease’s earliest stages, impair learning and memory by blocking the natural, beneficial action of endocannabinoids in the brain, the study demonstrates. The Stanford group is now trying to figure out the molecular details of how and where this interference occurs. Pinning down those details could pave the path to new drugs to stave off the defects in learning ability and memory that characterize Alzheimer’s.

In the study, published June 18 in Neuron, researchers analyzed A-beta’s effects on a brain structure known as the hippocampus. In all mammals, this midbrain structure serves as a combination GPS system and memory-filing assistant, along with other duties.

“The hippocampus tells us where we are in space at any given time,” said Daniel Madison, PhD, associate professor of molecular and cellular physiology and the study’s senior author. “It also processes new experiences so that our memories of them can be stored in other parts of the brain. It’s the filing secretary, not the filing cabinet.”

Surprise finding

Applying electrophysiological techniques to brain slices from rats, Madison and his associates examined a key hippocampal circuit, one of whose chief elements is a class of nerve cells called pyramidal cells. They wanted to see how the circuit’s different elements reacted to small amounts of A-beta, which is produced throughout the body but whose normal physiological functions have until now been ill-defined.

A surprise finding by Madison’s group suggests that in small, physiologically normal concentrations, A-beta tamps down a signal-boosting process that under certain conditions increases the odds that pyramidal nerve cells will transmit information they’ve received to other nerve cells down the line.

When incoming signals to the pyramidal tract build to high intensity, pyramidal cells adapt by becoming more inclined to fire than they normally are. This phenomenon, which neuroscientists call plasticity, is thought to underpin learning and memory. It ensures that volleys of high-intensity input — such as might accompany falling into a hole, burning one’s finger with a match, suddenly remembering where you buried the treasure or learning for the first time how to spell “cat” — are firmly stored in the brain’s memory vaults and more accessible to retrieval.

These intense bursts of incoming signals are the exception, not the rule. Pyramidal nerve cells constantly receive random beeps and burps from upstream nerve cells — effectively, noise in a highly complex, electrochemical signaling system. This calls for some quality control. Pyramidal cells are encouraged to ignore mere noise by another set of “wet blanket” nerve cells called interneurons. Like the proverbial spouse reading a newspaper at the kitchen table, interneurons continuously discourage pyramidal cells’ transmission of impulses to downstream nerve cells by steadily secreting an inhibitory substance — the molecular equivalent of yawning, eye-rolling and oft-muttered suggestions that this or that chatter is really not worth repeating to the world at large, so why not just shut up.

Passing along the message

But when the news is particularly significant, pyramidal cells squirt out their own “no, this is important, you shut up!” chemicals — endocannabinoids — which bind to specialized receptors on the hippocampal interneurons, temporarily suppressing them and allowing impulses to continue coursing along the pyramidal cells to their follow-on peers.

A-beta is known to impair pyramidal-cell plasticity. But Madison’s research team showed for the first time how it does so. Small clusters consisting of just a few A-beta molecules render the interneuron’s endocannabinoid receptors powerless, leaving inhibition intact even in the face of important news and thus squashing plasticity.
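The circuit logic described above can be caricatured as a simple threshold gate. The sketch below is a toy illustration only, not the study's actual model; the threshold and inhibition values are arbitrary placeholders chosen to make the behavior visible.

```python
def pyramidal_output(input_strength, abeta_present,
                     threshold=1.0, inhibition=0.8):
    """Toy gate (illustrative only; all numbers are arbitrary).

    Interneurons steadily inhibit the pyramidal cell. A strong enough
    input triggers endocannabinoid release, which lifts the inhibition;
    A-beta blocks that effect, so inhibition stays in place regardless.
    """
    strong_signal = input_strength >= threshold
    endocannabinoids_work = strong_signal and not abeta_present
    effective_inhibition = 0.0 if endocannabinoids_work else inhibition
    return max(0.0, input_strength - effective_inhibition)

print(pyramidal_output(0.5, abeta_present=False))  # 0.0: mere noise is filtered out
print(pyramidal_output(1.5, abeta_present=False))  # 1.5: an important signal passes
print(pyramidal_output(1.5, abeta_present=True))   # ~0.7: A-beta keeps inhibition on
```

The point of the toy model is the asymmetry: weak "noise" is damped either way, but a strong signal gets through only when the endocannabinoid step is allowed to work.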

While small A-beta clusters have been known for a decade to be toxic to nerve cells, this toxicity requires relatively long-term exposure, said Madison. The endocannabinoid-nullifying effect the new study revealed is much more transient. A possible physiological role for A-beta in the normal, healthy brain, he said, is that of supplying that organ’s sophisticated circuits with yet another, beneficial layer of discretion in processing information. Madison thinks this normal, everyday A-beta mechanism run wild may represent an entry point to the progressive and destructive stages of Alzheimer’s disease.

Exactly how A-beta blocks endocannabinoids’ action is not yet known. But, Madison’s group demonstrated, A-beta doesn’t stop them from reaching and binding to their receptors on interneurons. Rather, it interferes with something that binding ordinarily generates. (By analogy, turning the key in your car’s ignition switch won’t do much good if your battery is dead.)

Madison said it would be wildly off the mark to assume that, just because A-beta interferes with a valuable neurophysiological process mediated by endocannabinoids, smoking pot would be a great way to counter or prevent A-beta’s nefarious effects on memory and learning ability. Smoking or ingesting marijuana results in long-acting inhibition of interneurons by the herb’s active chemical, tetrahydrocannabinol. That is vastly different from short-acting endocannabinoid bursts precisely timed to occur only when a signal is truly worthy of attention.

“Endocannabinoids in the brain are very transient and act only when important inputs come in,” said Madison, who is also a member of the interdisciplinary Stanford Bio-X institute. “Exposure to marijuana over minutes or hours is different: more like enhancing everything indiscriminately, so you lose the filtering effect. It’s like listening to five radio stations at once.”

Besides, flooding the brain with external cannabinoids induces tolerance — it may reduce the number of endocannabinoid receptors on interneurons, impeding endocannabinoids’ ability to do their crucial job of opening the gates of learning and memory.

Filed under endocannabinoids alzheimer's disease pyramidal cells cannabinoids interneurons neuroscience science

177 notes

Study examines how brain ‘reboots’ itself to consciousness after anesthesia
One of the great mysteries of anesthesia is how patients can be temporarily rendered completely unresponsive during surgery and then wake up again, with all their memories and skills intact.
A new study by Dr. Andrew Hudson, an assistant professor of anesthesiology at the David Geffen School of Medicine at UCLA, and colleagues provides important clues about the processes used by structurally normal brains to navigate from unconsciousness back to consciousness. Their findings are currently available in the early online edition of Proceedings of the National Academy of Sciences.
Previous research has shown that the anesthetized brain is not “silent” under surgical levels of anesthesia but experiences certain patterns of activity, and it spontaneously changes its activity patterns over time, Hudson said.
For the current study, the research team placed electrodes in several brain areas associated with arousal and consciousness and recorded the brain’s electrical activity in a rodent model administered the inhaled anesthetic isoflurane. They then slowly decreased the amount of anesthesia, as is done with patients in the operating room, monitoring how the electrical activity in the brain changed and looking for common activity patterns across all the study subjects.
The researchers found that the brain activity occurred in discrete clumps, or clusters, and that the brain did not jump between all of the clusters uniformly.
A small number of activity patterns consistently occurred in the anesthetized rodents, Hudson noted. The patterns depended on how much anesthesia the subject was receiving, and the brain would jump spontaneously from one activity pattern to another. A few activity patterns served as “hubs” on the way back to consciousness, connecting activity patterns consistent with deeper anesthesia to those observed under lighter anesthesia.
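A minimal sketch of how such "hub" states might be identified from a sequence of discrete activity patterns — the data and the method here are hypothetical illustrations, not the authors' actual analysis: label each time point with its cluster, count the transitions between labels, and rank states by how many distinct states they connect to.

```python
import numpy as np

def transition_matrix(states, n_states):
    """Count label-to-label transitions and normalize rows to probabilities."""
    T = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        T[a, b] += 1
    rows = T.sum(axis=1, keepdims=True)
    # rows with no outgoing transitions stay all-zero instead of dividing by zero
    return np.divide(T, rows, out=np.zeros_like(T), where=rows > 0)

def hub_states(T):
    """Rank states by how many distinct states they connect to, in or out."""
    degree = (T > 0).sum(axis=1) + (T.T > 0).sum(axis=1)
    return np.argsort(degree)[::-1]

# Example: state 0 acts as a hub in the toy sequence 0→1→0→2→0→3
T = transition_matrix([0, 1, 0, 2, 0, 3], n_states=4)
print(hub_states(T)[0])  # 0
```

In this picture, a hub is simply a well-connected node in the state-transition graph: a pattern the brain must pass through to get from deep-anesthesia states to light-anesthesia ones.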
"Recovery from anesthesia is not simply the result of the anesthetic ‘wearing off’ but also of the brain finding its way back through a maze of possible activity states to those that allow conscious experience," Hudson said. "Put simply, the brain reboots itself."
The study suggests a new way to think about the human brain under anesthesia and could encourage physicians to reexamine how they approach monitoring anesthesia in the operating room. Additionally, if the results are applicable to other disorders of consciousness — such as coma or minimally conscious states — doctors may be better able to predict functional recovery from brain injuries by looking at the spontaneously occurring jumps in brain activity.
In addition, this work provides some constraints for theories about how the brain leads to consciousness itself, Hudson said.
Going forward, the UCLA researchers will test other anesthetic agents to determine if they produce similar characteristic brain activity patterns with “hub” states. They also hope to better characterize how the brain jumps between patterns.

Filed under anesthesia consciousness brain activity neuroscience science

99 notes

Study Links Placental Marker of Prenatal Stress to Brain Mitochondrial Dysfunction

When a woman experiences a stressful event early in pregnancy, the risk of her child developing autism spectrum disorders or schizophrenia increases. Yet how maternal stress is transmitted to the brain of the developing fetus, leading to these problems in neurodevelopment, is poorly understood. 

New findings by University of Pennsylvania School of Veterinary Medicine scientists suggest that an enzyme found in the placenta is likely playing an important role. This enzyme, O-linked-N-acetylglucosamine transferase, or OGT, translates maternal stress into a reprogramming signal for the brain before birth.

(Image caption: Mice with reduced OGT in their placenta were shorter and leaner than their normal counterparts.)

“By manipulating this one gene, we were able to recapitulate many aspects of early prenatal stress,” said Tracy L. Bale, senior author on the paper and a professor in the Department of Animal Biology at Penn Vet. “OGT seems to be serving a role as the ‘canary in the coal mine,’ offering a readout of mom’s stress to change the baby’s developing brain.”

Bale also holds an appointment in the Department of Psychiatry in Penn’s Perelman School of Medicine. Her co-author is postdoctoral researcher Christopher L. Howerton. The paper was published online in PNAS this week.

OGT is known to play a role in gene expression through chromatin remodeling, a process that makes some genes more or less available to be converted into proteins. In a study published last year in PNAS, Bale’s lab found that placentas from male mouse pups had lower levels of OGT than those from female pups, and placentas from mothers that had been exposed to stress early in gestation had lower overall levels of OGT than placentas from unstressed mothers.

“People think that the placenta only serves to promote blood flow between a mom and her baby, but that’s really not all it’s doing,” Bale said. “It’s a very dynamic endocrine tissue and it’s sex-specific, and we’ve shown that tampering with it can dramatically affect a baby’s developing brain.”

To elucidate how reduced levels of OGT might be transmitting signals through the placenta to a fetus, Bale and Howerton bred mice that partially or fully lacked OGT in the placenta. They then compared these transgenic mice to animals that had been subjected to mild stressors during early gestation, such as predator odor, unfamiliar objects or unusual noises, during the first week of their pregnancies.

Using a specific activational histone mark, the researchers performed a genome-wide search for genes that were affected both by the altered levels of OGT and by exposure to early prenatal stress, and they found a broad swath of common gene expression patterns.
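In spirit, that kind of overlap analysis amounts to intersecting two sets of differentially expressed genes. The sketch below is purely illustrative; apart from Hsd17b3, the gene names are hypothetical placeholders.

```python
# Illustrative only: all gene names except Hsd17b3 are hypothetical placeholders.
ogt_affected = {"Hsd17b3", "GeneA", "GeneB"}     # hits when placental OGT is reduced
stress_affected = {"Hsd17b3", "GeneB", "GeneC"}  # hits after early prenatal stress
common = ogt_affected & stress_affected          # genes flagged in both comparisons
print(sorted(common))  # ['GeneB', 'Hsd17b3']
```

Genes landing in the intersection — flagged in both comparisons — are the candidates for mediating how placental OGT transmits the stress signal.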

They chose to focus on one particular differentially regulated gene called Hsd17b3, which encodes an enzyme that converts androstenedione, a steroid hormone, to testosterone. The researchers found this gene to be particularly interesting in part because neurodevelopmental disorders such as autism and schizophrenia have strong gender biases, where they either predominantly affect males or present earlier in males.

Placentas associated with male mouse pups born to stressed mothers had reduced levels of the enzyme Hsd17b3 and, as a result, higher levels of androstenedione and lower levels of testosterone than those of normal mice.

“This could mean that, with early prenatal stress, males have less masculinization,” Bale said. “This is important because autism tends to be thought of as the brain in a hypermasculinized state, and schizophrenia is thought of as a hypomasculinized state. It makes sense that there is something about this process of testosterone synthesis that is being disrupted.”

Furthermore, the mice born to mothers with disrupted OGT looked like the offspring of stressed mothers in other ways. Although they were born at a normal weight, their growth slowed at weaning. Their body weight as adults was 10-20 percent lower than control mice.

Because of the key role that the hypothalamus plays in controlling growth and many other critical survival functions, the Penn Vet researchers then screened the mouse genome for genes with differential expression in the hypothalamus, comparing normal mice, mice with reduced OGT and mice born to stressed mothers.

They identified several gene sets related to the structure and function of mitochondria, the powerhouses of cells that are responsible for producing energy. And indeed, when compared by an enzymatic assay that examines mitochondrial biogenesis, both the mice born to stressed mothers and mice born to mothers with reduced OGT had dramatically reduced mitochondrial function in their hypothalamus compared to normal mice. These studies were done in collaboration with Narayan Avadhani’s lab at Penn Vet.

Such reduced function could explain why the growth patterns of mice appeared similar until weaning, at which point energy demands go up.

“If you have a really bad furnace you might be okay if temperatures are mild,” Bale said. “But, if it’s very cold, it can’t meet demand. It could be the same for these mice. If you’re in a litter close to your siblings and mom, you don’t need to produce a lot of heat, but once you wean you have an extra demand for producing heat. They’re just not keeping up.”

Bale points out that mitochondrial dysfunction in the brain has been reported in both schizophrenia and autism patients.

In future work, Bale hopes to identify a suite of maternal plasma stress biomarkers that could signal an increased risk of neurodevelopmental disease for the baby.

“With that kind of a signature, we’d have a way to detect at-risk pregnancies and think about ways to intervene much earlier than waiting to look at the term placenta,” she said.

Filed under prenatal stress mitochondria OGT neurodevelopmental disorders pregnancy hypothalamus neuroscience science

55 notes

How a new approach to funding Alzheimer’s research could pay off



More than 5 million Americans suffer from Alzheimer’s disease, the affliction that erodes memory and other mental capacities, but no drugs targeting the disease have been approved by the U.S. Food and Drug Administration since 2003. Now a paper by an MIT professor suggests that a revamped way of financing Alzheimer’s research could spur the development of useful new drugs for the illness.
“We are spending tremendous amounts of resources dealing with this disease, but we don’t have any effective therapies for it,” says Andrew Lo, the Charles E. and Susan T. Harris Professor of Finance and director of the Laboratory for Financial Engineering at the MIT Sloan School of Management. “It really imposes a tremendous burden on society, not just for the afflicted, but also for those who care for them.”
Lo and three co-authors propose creating a public-private partnership that would fund research for a diverse array of drug-discovery projects simultaneously. Such an approach would increase the chances of a therapeutic breakthrough, they say, and the inclusion of public funding would help mitigate the risks and costs of Alzheimer’s research for the private sector.
There would be a long-term public-sector payoff, according to the researchers: Government funding for Alzheimer’s research would pale in comparison to the cost of caring for Alzheimer’s sufferers in public health-care programs. The paper’s model of the new funding approach calls for an outlay of $38.4 billion over 13 years for research; the cost of Medicare and Medicaid support for Alzheimer’s patients in 2014 alone is estimated at $150 billion.
“Having parallel development would obviously decrease the waiting time, but it increases the short-run need for funding,” Lo says. “Given how much of an urgent need there is for Alzheimer’s therapies, it has to be the case that if you develop a cure, you’re going to be able to recoup your costs and then some.” In fact, the paper’s model estimates a double-digit return on public investment over the long run.
Lo adds: “Can we afford it? I think a more pressing question is, ‘Can we afford not to do something about this now?’”
Modeling the odds of success
The paper, “Parallel Discovery of Alzheimer’s Therapeutics,” was published today in Science Translational Medicine. Along with Lo, the co-authors of the piece are Carole Ho of the biotechnology firm Genentech, Jayna Cummings of MIT Sloan, and Kenneth Kosik of the University of California at Santa Barbara.
The main hypothesis on the cause of Alzheimer’s involves amyloid deposition, the buildup of plaques in the brain that impair neurological function; most biomedical efforts to tackle the disease have focused on this issue. For the study, Ho and Kosik, leading experts in Alzheimer’s research, compiled a list of 64 conceivable approaches to drug discovery, addressing a range of biological mechanisms that may be involved in the disease.
A fund backing that group of research projects might expand the chances of developing a drug that could, at a minimum, slow the progression of the disease. On the other hand, it might not increase the odds of success so much that pharmaceutical firms and biomedical investment funds would plow money into the problem.
“Sixty-four projects are a lot more than what’s being investigated today, but it’s still way shy of the 150 or 200 that are needed to mitigate the financial risks of an Alzheimer’s-focused fund,” Lo says.
The model assumes 13 years for the development of an individual drug, including clinical trials, and estimates the success rates for drug development. Given 150 trials, the odds of at least two successful trials are 99.59 percent. Two successful trials, Lo says, is what it would take to make the investment — a series of bonds issued by the fund — profitable and attractive to a broad range of investors.
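The headline figure is a standard binomial calculation. As a rough check — the 5 percent per-trial success rate below is an illustrative assumption, not a figure taken from the paper — the probability of at least two successes among 150 independent trials comes out very close to the quoted 99.59 percent:

```python
from math import comb

def prob_at_least_k(n, p, k):
    """P(at least k successes among n independent trials, each succeeding with prob p)."""
    return 1.0 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

# 150 parallel projects; the 5% per-trial success rate is an illustrative assumption
print(round(prob_at_least_k(150, 0.05, 2), 4))  # 0.9959
```

The calculation also shows why portfolio size matters so much: the chance of at least two successes falls rapidly as the number of parallel trials shrinks, which is the financial-risk argument behind funding many projects at once.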
“With a sufficiently high likelihood of success, you can issue debt to attract a large group of bondholders who would be willing to put their money to work,” Lo says. “The enormous size of bond markets translates into enormous potential funding opportunities for developing these therapeutics.”
Stakeholders everywhere
To be clear, Lo says, Alzheimer’s drug development is a very difficult task, since researchers often have to identify a pool of potential patients well before symptoms occur, in order to see how well therapies might work on delaying the onset of the disease.
Compared with the development of new drugs to treat other diseases, “Alzheimer’s drug development is more expensive, takes longer, and needs a larger sample of potential patients,” Lo acknowledges.
However, since the number of Americans suffering from Alzheimer’s is projected to double by 2050, according to the Alzheimer’s Association, an advocacy group, Lo stresses the urgency of the task at hand.

Filed under alzheimer's disease drug development health medicine neuroscience science

98 notes

Portable brain-mapping device allows researchers to ‘see’ where memory fails student veterans

UT Arlington researchers have successfully used a portable brain-mapping device to show limited prefrontal cortex activity among student veterans with Post Traumatic Stress Disorder when they were asked to recall information from simple memorization tasks.

The study by bioengineering professor Hanli Liu and Alexa Smith-Osborne, an associate professor of social work, and two other collaborators was published in the May 2014 edition of NeuroImage: Clinical. The team used functional near infrared spectroscopy to map brain activity responses during cognitive activities related to digit learning and memory retrieval.

Smith-Osborne has used the findings to guide treatment recommendations for some veterans through her work as principal investigator for UT Arlington’s Student Veteran Project, which offers free services to veterans who are undergraduates or who are considering returning to college.

“When we retest those student veterans after we’ve provided therapy and interventions, they’ve shown marked improvement,” Smith-Osborne said. “The fNIRS data have shown improvement in brain functions and responses after the student veterans have undergone treatment.”

Liu said this type of brain imaging lets researchers “see” which brain region or regions fail to memorize or recall learned knowledge in student veterans with PTSD.

“It also shows how PTSD can affect the way we learn and our ability to recall information, so this new way of brain imaging advances our understanding of PTSD,” Liu said.

This study is multi-disciplinary, associating objective brain imaging with neurological disorders and social work.

While UT Arlington bioengineering faculty associate Fenghua Tian is the primary author assisted by bioengineering graduate research assistant Amarnath Yennu, collaborators of the study include UT Austin psychology professor Francisco Gonzalez-Lima and psychology professor Carol North with UT Southwestern Medical Center and the Veterans Administration North Texas Health Care System.

Khosrow Behbehani, dean of the UT Arlington College of Engineering, said this collaborative research is “allowing the researchers to objectively measure the changes in the level of oxygen in the brain and relate them to some of the brain functions that may have been adversely affected by trauma or stress.”  

Numerous neuropsychological studies have linked learning dysfunctions – such as memory loss, attention deficits and learning disabilities – with PTSD.

The new study involved 16 combat veterans previously diagnosed with PTSD who were experiencing distress and functional impairment affecting cognitive and related academic performance.  The veterans were directed to perform a series of number-ordering tasks on a computer while researchers monitored their brain activity through near infrared spectroscopy, a noninvasive neuroimaging technology.

The research found that participants with PTSD experienced significant difficulty recalling the given digits compared with a control group. This deficiency is closely associated with dysfunction of a portion of the right frontal cortex. The team also determined that near infrared spectroscopy was an effective tool for measuring cognitive dysfunction associated with PTSD.

With that information, Smith-Osborne said mental healthcare providers could customize a treatment plan best suited for that individual.

“It’s not a one-size-fits-all treatment plan but a concentrated effort to tailor the treatment based on where that person is on the learning scale,” Smith-Osborne said.

Smith-Osborne and Liu hope that their research results will lead to better, more comprehensive care for veterans and to a better college education for them.

(Source: uta.edu)

Filed under PTSD prefrontal cortex brain activity working memory neuroscience science

99 notes

(Image caption: Astrocyte activity is shown in green in this slice of tissue from the brain region that controls movement in mice. Internal, structural elements of the astrocytes are shown in magenta; cell bodies are in red. Credit: Amit Agarwal and Dwight Bergles, courtesy of Cell Press)
Fight-Or-Flight Chemical Prepares Cells to Shift the Brain From Subdued to Alert State
A new study from The Johns Hopkins University shows that the brain cells surrounding a mouse’s neurons do much more than fill space. According to the researchers, the cells, called astrocytes because of their star-shaped appearance, can monitor and respond to nearby neural activity, but only after being activated by the fight-or-flight chemical norepinephrine. Because astrocytes can alter the activity of neurons, the findings suggest that astrocytes may help control the brain’s ability to focus.
The study involved observing the cells in the brains of living, active mice over long periods of time. A combination of genetically engineered mice and advanced microscopy allowed the researchers to visualize the activity of astrocyte networks in different regions of the brain to learn how these abundant supporting cells are controlled.
The scientists monitored astrocytes in the area of the brain responsible for controlling movement and saw that the cells often increased their activity as the mice walked on treadmills — but not always, and sometimes astrocytes became active when the animals were not moving. This lack of consistency suggested to the researchers that the astrocytes were not responding to nearby neurons, as had been thought.
Similarly, astrocytes in the vision processing area of the brain did not necessarily become active when the mice were stimulated with light, but they were sometimes active, even in the dark. The team solved both mysteries when they tested the idea that the astrocytes needed a signal to “wake them up” before they could respond to nearby neurons. That is how they found that norepinephrine, the brain’s broadly distributed fight-or-flight signal, primes the astrocytes in both locations to “listen in” on nearby neuronal activity.
“Astrocytes are among the most abundant cells in the brain, but we know very little about how they are controlled and how they contribute to brain function,” says Dwight Bergles, Ph.D., professor of neuroscience, who led the study. “Since memory formation and other important functions of the brain require a state of attention, we’re interested in learning more about how astrocytes help create that state.”
For example, Bergles says, “We know that astrocytes can regulate local blood flow, provide energy to neurons and release signaling molecules that alter neuronal activity. They could be doing any or all of those things in response to being activated. It is also possible that they act as a sort of megaphone to broadcast local norepinephrine signals to every neuron in the brain.” Whatever the case may be, researchers now know that astrocytes are not idle loiterers. This ability to study astrocyte network activity in animals as they do different things will help to reveal how these cells contribute to brain function.
This research was published in the journal Neuron on June 18.