Neuroscience

Articles and news from the latest research reports.

Posts tagged animal model

70 notes

New Research Points to Biomarker that Could Track Huntington’s Disease Progression

A hallmark of neurodegenerative diseases such as Alzheimer’s, Parkinson’s and Huntington’s is that by the time symptoms appear, significant brain damage has already occurred—and currently there are no treatments that can reverse it. A team of SRI International researchers has demonstrated that measurements of electrical activity in the brains of mouse models of Huntington’s disease could indicate the presence of disease before the onset of major symptoms. The findings, “Longitudinal Analysis of the Electroencephalogram and Sleep Phenotype in the R6/2 Mouse Model of Huntington’s Disease,” appear in the July 2013 issue of the neurology journal Brain (Oxford University Press).

SRI researchers led by Stephen Morairty, Ph.D., a director in the Center for Neuroscience in SRI Biosciences, and Simon Fisher, Ph.D., a postdoctoral fellow at SRI, used electroencephalography (EEG), a noninvasive method commonly used in humans, to measure changes in neuronal electrical activity in a mouse model of Huntington’s disease. Identification of significant changes in the EEG prior to the onset of symptoms would add to evidence that the EEG can be used to identify biomarkers to screen for the presence of a neurodegenerative disease. Further research on such potential biomarkers might one day enable the tracking of disease progression in clinical trials and could facilitate drug development.

“EEG signals are composed of different frequency bands such as delta, theta and gamma, much as light is composed of different frequencies that result in the colors we call red, green and blue,” explained Thomas Kilduff, Ph.D., senior director, Center for Neuroscience, SRI Biosciences. “Our research identified abnormalities in all three of these bands in Huntington’s disease mice. Importantly, the activity in the theta and gamma bands slowed as the disease progressed, indicating that we may be tracking the underlying disease process.”
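
As a rough illustration of what this band decomposition involves—a generic sketch, not the analysis pipeline used in the SRI study—the Python snippet below estimates delta, theta and gamma power from an EEG trace with a Welch periodogram. The sampling rate, band limits and synthetic signal are assumptions chosen only for demonstration.

```python
# Illustrative sketch (not the SRI pipeline): estimating delta, theta and gamma
# power in an EEG trace with a Welch periodogram. Band limits follow common
# conventions and the signal is synthetic; both are assumptions.
import numpy as np
from scipy.signal import welch

fs = 500.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)                # 60 s of synthetic "EEG"
eeg = (np.sin(2 * np.pi * 2 * t)            # delta-range component (~2 Hz)
       + 0.5 * np.sin(2 * np.pi * 7 * t)    # theta-range component (~7 Hz)
       + 0.2 * np.random.randn(t.size))     # broadband noise

bands = {"delta": (0.5, 4), "theta": (4, 8), "gamma": (30, 80)}

freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))   # 4-second windows
df = freqs[1] - freqs[0]
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    power = psd[mask].sum() * df            # integrate PSD over the band
    print(f"{name:>5s} band power: {power:.3f}")
```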

EEG has shown promise as an indicator of underlying brain dysfunction in neurodegenerative diseases, which otherwise progresses silently until symptoms appear. Until now, most investigations of EEG in patients with neurodegenerative diseases and in animal models of these diseases have shown significant changes in EEG patterns only after disease symptoms occurred.

“Our breakthrough is that we have found an EEG signature that appears to be a biomarker for the presence of disease in this mouse model of Huntington’s disease that can identify early changes in the brain prior to the onset of behavioral symptoms,” said Morairty, the paper’s senior author. “While the current study focused on Huntington’s disease, many neurodegenerative diseases produce changes in the EEG that are associated with the degenerative process. This is the first step in being able to use the EEG to predict both the presence and progression of neurodegenerative diseases.”

Although previous studies have shown there are distinct and extensive changes in EEG patterns in Alzheimer’s and Huntington’s disease patients, researchers are looking for changes that may occur decades before disease onset.

Huntington’s disease is an inherited disorder that causes certain nerve cells in the brain to die, resulting in motor dysfunction, cognitive decline and psychiatric symptoms. It is the only major neurodegenerative disease where the cause is known with certainty: a genetic mutation that produces a change in a protein that is toxic to neurons.

(Source: sri.com)

Filed under neurodegenerative diseases huntington's disease neuronal activity biomarkers animal model neuroscience science

88 notes

Hearing loss from loud blasts may be treatable

Long-term hearing loss from loud explosions, such as blasts from roadside bombs, may not be as irreversible as previously thought, according to a new study by researchers at the Stanford University School of Medicine.

Using a mouse model, the study found that loud blasts actually cause hair-cell and nerve-cell damage, rather than structural damage, to the cochlea, which is the auditory portion of the inner ear. This could be good news for the millions of soldiers and civilians who, after surviving these often devastating bombs, suffer long-term hearing damage.

“It means we could potentially try to reduce this damage,” said John Oghalai, MD, associate professor of otolaryngology and senior author of the study, published July 1 in PLOS ONE. If the cochlea, an extremely delicate structure, had been shredded and ripped apart by a large blast, as earlier studies have asserted, the damage would be irreversible. (Researchers presume that the damage seen in these previous studies may have been due to the use of older, less sophisticated imaging techniques.)

“The most common issue we see veterans for is hearing loss,” said Oghalai, a scientist and clinician who treats patients at Stanford Hospital & Clinics and directs the hearing center at Lucile Packard Children’s Hospital.

The increasingly common use of improvised explosive devices, or IEDs, around the world provided the impetus for the new study, which was primarily funded by the U.S. Department of Defense. Among veterans with service-connected disabilities, tinnitus — a constant ringing in the ears — is the most prevalent condition. Hearing loss is the second-most-prevalent condition. But the results of the study would prove true for anyone who is exposed to loud blasts from other sources, such as jet engines, air bags or gunfire.

More than 60 percent of wounded-in-action service members have eardrum injuries, tinnitus or hearing loss, or some combination of these, the study says. Twenty-eight percent of all military personnel experience some degree of hearing loss post-deployment. The most devastating effect of blast injury to the ear is permanent hearing loss due to trauma to the cochlea. But exactly how this damage is caused has not been well understood.

The ears are extremely fragile instruments. Sound waves enter the ear, causing the eardrums to vibrate. These vibrations get sent to the cochlea in the inner ear, where fluid carries them to rows of hair cells, which in turn stimulate auditory nerve fibers. These impulses are then sent to the brain via the auditory nerve, where they get interpreted as sounds.

Permanent hearing loss from loud noise begins at about 85 decibels, typical of a hair dryer or a food blender. IEDs have noise levels approaching 170 decibels.
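
Because the decibel scale is logarithmic, the gap between these two figures is far larger than it may look. The quick calculation below (a back-of-the-envelope sketch, not taken from the study) uses the standard sound-pressure-level definitions: every 20 dB multiplies sound pressure by ten, and every 10 dB multiplies sound intensity by ten.

```python
# Back-of-the-envelope comparison of the two levels quoted above.
hair_dryer_db = 85
ied_db = 170
delta_db = ied_db - hair_dryer_db          # 85 dB difference

pressure_ratio = 10 ** (delta_db / 20)     # ~1.8e4 times the sound pressure
intensity_ratio = 10 ** (delta_db / 10)    # ~3.2e8 times the sound intensity
print(f"pressure ratio:  {pressure_ratio:,.0f}x")
print(f"intensity ratio: {intensity_ratio:,.0f}x")
```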

Damage to the eardrum is known to be common after large blasts, but this is easily detected during a clinical exam and usually can heal itself — or is surgically repairable — and is thus not typically the cause of long-term hearing loss.

In order to determine exactly what is causing the permanent hearing loss, Stanford researchers created a mouse model to study the effects of noise blasts on the ear.

After exposing anesthetized mice to loud blasts, researchers examined the inner workings of the mouse ear from the eardrum to the cochlea. The ears were examined from day one through three months. A micro-CT scanner was used to image the workings of the ear after dissection.

“When we looked inside the cochlea, we saw the hair-cell loss and auditory-nerve-cell loss,” Oghalai said.

“With one loud blast, you lose a huge number of these cells. What’s nice is that the hair cells and nerve cells are not immediately gone. The theory now is that if the ear could be treated with certain medications right after the blast, that might limit the damage.”

Previous studies on larger animals had found that the cochlea was torn apart and shredded after exposure to a loud blast. Stanford scientists did not find this in the mouse model and speculate that the use of older research techniques may have caused the damage.

“We found that the blast trauma is similar to what we see from lower-level noise exposure over time,” said Oghalai. “We lose the sensory hair cells that convert sound vibrations into electrical signals, and also the auditory nerve cells.”

Much of the resulting hearing loss after such blast damage to the ear is actually caused by the body’s immune response to the injured cells, Oghalai said. The creation of scar tissue to help heal the injury is a particular problem in the ear because the organ needs to vibrate to allow the hearing mechanism to work. Scar tissue damages that ability.

“There is going to be a window where we could stop whatever the body’s inflammatory response would be right after the blast,” Oghalai said. “We might be able to stop the damage. This will determine future research.”

Filed under hearing hearing loss animal model nerve cells cochlea inner ear hair cells neuroscience science

197 notes

Study Appears to Overturn Prevailing View of How the Brain is Wired

A series of studies conducted by Randy Bruno, PhD, and Christine Constantinople, PhD, of Columbia University’s Department of Neuroscience, topples convention by showing that sensory information travels to two places at once: not only to the brain’s mid-layer (where most axons lead), but also directly to its deeper layers. The study appears in the June 28, 2013, edition of the journal Science.

For decades, scientists have thought that sensory information is relayed from the skin, eyes, and ears to the thalamus and then processed in the six-layered cerebral cortex in serial fashion: first in the middle layer (layer 4), then in the upper layers (2 and 3), and finally in the deeper layers (5 and 6). This model of signals moving through a layered “column” was largely based on anatomy, following the direction of axons—the wires of the nervous system.

“Our findings challenge dogma,” said Dr. Bruno, assistant professor of neuroscience and a faculty member at Columbia’s new Mortimer B. Zuckerman Mind Brain Behavior Institute and the Kavli Institute for Brain Science. “They open up a different way of thinking about how the cerebral cortex does what it does, which includes not only processing sight, sound, and touch but higher functions such as speech, decision-making, and abstract thought.”

The researchers used the well-understood sensory system of rat whiskers, which operate much like human fingers, providing tactile information about shape and texture. The system is ideal for studying the flow of sensory signals, said Dr. Bruno, because past research has mapped each whisker to a specific barrel-shaped cluster of neurons in the brain. “The wiring of these circuits is similar to those that process senses in other mammals, including humans,” said Dr. Bruno.

The study relied on a sensitive technique that allows researchers to monitor how signals move across synapses from one neuron to the next in a live animal. Using a glass micropipette with a tip only 1 micron wide (one-thousandth of a millimeter) filled with fluid that conducts nerve signals, the researchers recorded nerve impulses resulting from whisker stimulation in 176 neurons in the cortex and 76 neurons in the thalamus. The recordings showed that signals are relayed from the thalamus to layers 4 and 5 at the same time.  Although 80 percent of the thalamic axons went to layer 4, there was surprisingly robust signaling to the deeper layer.

To confirm that the deeper layer receives sensory information directly, the researchers used the local anesthetic lidocaine to block all signals from layer 4. Activity in the deeper layer remained unchanged.

“This was very surprising,” said Dr. Constantinople, currently a postdoctoral researcher at Princeton University’s Neuroscience Institute. “We expected activity in the lower layers to be turned off or very much diminished when we blocked layer 4. This raises a whole new set of questions about what the layers actually do.”

The study suggests that upper and lower layers of the cerebral cortex form separate circuits and play separate roles in processing sensory information. Researchers think that the deeper layers are evolutionarily older (they are found in reptiles, for example), while the upper and middle layers appear in more recently evolved species and are thickest in humans.

One possibility, suggests Dr. Bruno, is that basic sensory processing is done in the lower layers: for example, visually tracking a tennis ball to coordinate the movement needed to make contact. Processing that involves integrating context or experience or that involves learning might be done in the upper layers. For example, watching where an opponent is hitting the ball and planning where to place the return shot.

“At this point, we still don’t know what, behaviorally, the different layers do,” said Dr. Bruno, whose lab is now focused on finding those answers.

Nobel-prize-winning neurobiologist Bert Sakmann, MD, PhD, of the Max Planck Institute in Germany, describes the study as “very convincing” and a game-changer. “For decades, the field has assumed, based largely on anatomy, that the work of the cortex begins in layer 4. Dr. Bruno has produced a technical masterpiece that firmly establishes two separate input streams to the cortex,” said Dr. Sakmann. “The prevailing view that the cortex is a collection of monolithic columns, handing off information to progressively higher modules, is an idea that will have to go.”

“Bruno’s work goes a long way toward overturning the conventional wisdom and provides new insight into the functional segregation of sensory input to the mammalian cerebral cortex, the region of the brain that processes our thoughts, decisions, and actions,” said Thomas Jessell, PhD, Claire Tow Professor of Motor Neuron Disorders in Neuroscience and a co-director of the Mortimer B. Zuckerman Mind Brain Behavior Institute and the Kavli Institute for Brain Science. “Developing a more refined understanding of cortical processing will take the combined efforts of anatomists, cell and molecular biologists, and animal behaviorists. The Zuckerman Institute, with its multidisciplinary faculty and broad mission, is ideally suited to building on Bruno’s fascinating work.”

Filed under cerebral cortex sensory system animal model whiskers nerve signals thalamus neuroscience science

162 notes

Missing Enzyme Linked to Drug Addiction

A missing brain enzyme increases concentrations of a protein related to pain-killer addiction, according to an animal study. The results were presented at The Endocrine Society’s 95th Annual Meeting in San Francisco.

Opioids are pain-killing drugs, derived from the opium poppy, that block pain signals between nerves in the body. They are sold as prescription medications such as morphine and codeine, and are also found in some illegal drugs, like heroin. Both legal and illegal opioids can be highly addictive.

In addition to the synthetic opioids, natural opioids are produced by the body. Most people have heard of the so-called feel-good endorphins, which are opioid-like proteins produced by various organs in the body in response to certain activities, like exercise.

Drug addiction occurs, in part, because opioid-containing drugs alter the brain’s biochemical balance of naturally produced opioids. Nationwide, drug abuse of opioid-containing prescription drugs is skyrocketing, and researchers are trying to identify the risk factors that differentiate people who get addicted from those who do not.

In this particular animal model, researchers eliminated an enzyme called prohormone convertase 2, or PC2, which normally converts pre-hormonal substances into active hormones in certain parts of the brain. Previous research by this team demonstrated that PC2 levels increase after long-term morphine treatment, according to study lead author Theodore C. Friedman, MD, PhD, chairman of the internal medicine department at Charles R. Drew University of Medicine and Science in Los Angeles.

“This raises the possibility that PC2-derived peptides may be involved in some of the addiction parameters related to morphine,” Friedman said.

For this study, Friedman and his co-researchers analyzed the effects of morphine on the brain after knocking out the PC2 enzyme in mice. Morphine normally binds to a protein on cells known as the mu opioid receptor, or MOR. They found that MOR concentrations were higher in mice lacking PC2, compared to other mice.

To analyze the effects of PC2 elimination, the researchers examined MOR levels in specific parts of the brain that are related to pain relief, as well as to behaviors associated with reward and addiction. They measured these levels using a scientific test called immunohistochemistry, which uses specific antibodies to identify the cells in which proteins are expressed.

“In this study, we found that PC2 knockout mice have higher levels of MOR in brain regions related to drug addiction,” Friedman said. “We conclude that PC2 regulates endogenous opioids involved in the addiction response and in its absence, up-regulation of MOR expression occurs in key brain areas related to drug addiction.”

(Source: newswise.com)

Filed under drug addiction opioids brain prohormone convertase 2 enzymes animal model neuroscience science

188 notes

3-D map of blood vessels in cerebral cortex holds surprises

Blood vessels within a sensory area of the mammalian brain loop and connect in unexpected ways, a new map has revealed.

The study, published June 9 in the early online edition of Nature Neuroscience, describes vascular architecture within a well-known region of the cerebral cortex and explores what that structure means for functional imaging of the brain and the onset of a kind of dementia.

David Kleinfeld, professor of physics and neurobiology at the University of California, San Diego, and colleagues mapped blood vessels in an area of the mouse brain that receives sensory signals from the whiskers.

The organization of neural cells in this brain region is well understood, as is the pattern of blood vessels that plunge from the surface of the brain and return from the depths, but the network in between was uncharted. Yet these tiny arterioles and venules deliver oxygen and nutrients to energy-hungry brain cells and carry away wastes.

The team traced this fine network by filling the vessels with a fluorescent gel. Then, using an automated system developed by co-author Philbert Tsai that removes thin layers of tissue with a laser while capturing a series of images, they reconstructed the three-dimensional network of tiny vessels.

The project focused on a region of the cerebral cortex in which the nerve cells are so well known that they can be traced to individual whiskers. These neurons cluster in “barrels,” one per whisker, a pattern of organization seen in other sensory areas as well.

The scientists expected each whisker barrel to match up with its own blood supply, but that was not the case. The blood vessels don’t line up with the functional structure of the neurons they feed.

"This was a surprise, because the blood vessels develop in tandem with neural tissue," Kleinfeld said. Instead, microvessels beneath the surface loop and connect in patterns that don’t obviously correspond to the barrels.

To search for patterns, they turned to a branch of mathematics called graph theory, which describes systems as interconnected nodes. This approach revealed no hidden subunits, demonstrating that the mesh indeed forms a continuous network the researchers call the “angiome.”
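
For readers unfamiliar with the approach, the sketch below shows, in very simplified form, the kind of question graph theory lets you ask of such a network: is the mesh one continuous piece, and does its best partition into modules score highly? This is a toy illustration with an invented edge list, not the authors’ actual analysis, and it assumes the networkx library.

```python
# Illustrative sketch only (not the authors' analysis): treat a microvascular
# mesh as a graph and ask whether it splits into distinct modules. The toy
# edge list below is an assumption; a real "angiome" would list vessel
# segments as edges between branch-point nodes.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

edges = [(0, 1), (1, 2), (2, 3), (3, 0),      # toy vessel loops
         (2, 4), (4, 5), (5, 6), (6, 4),
         (1, 5), (3, 6)]                      # cross-links tying the loops together
G = nx.Graph(edges)

# One connected component means the mesh forms a single continuous network.
print("connected components:", nx.number_connected_components(G))

# Low modularity for the best partition suggests no strong hidden subunits.
communities = greedy_modularity_communities(G)
print("best partition modularity:", round(modularity(G, communities), 3))
```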

The vascular maps traced in this study raise the question of what we’re actually seeing in a widely used kind of brain imaging called functional MRI, which in one form measures brain activity by recording changes in oxygen levels in the blood. The idea is that activity will locally deplete oxygen. So they wiggled whiskers on individual mice and found that optical signals associated with depleted oxygen centered on the barrels, where electrical recordings confirmed neural activity. Thus brain mapping does not depend on a modular arrangement of blood vessels.

The researchers also went a step further to calculate patterns of blood flow based on the diameters and connections of the vessels and asked how this would change if a feeder arteriole were blocked. The map allowed them to identify “perfusion domains,” which predict the volumes of lesions that result when a clot occludes a vessel. Critically, they were able to build a physical model of how these lesions form, as may occur in cases of human dementia.
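
The flow calculation itself can be pictured as solving a resistor-like network: each vessel segment gets a conductance that scales with diameter to the fourth power divided by length (Hagen–Poiseuille), and pressures follow from a linear system. The sketch below is a minimal toy version of that idea, not the paper’s model; the five-node geometry, boundary pressures, and the choice of which segment to “occlude” are all invented for illustration.

```python
# Minimal sketch (not the paper's model): Poiseuille-style pressures on a toy
# vessel graph, recomputed after "occluding" one segment. All geometry values
# are assumptions chosen only for illustration.
import numpy as np

# (node_a, node_b, diameter, length) for each vessel segment, arbitrary units
edges = [(0, 1, 10, 100), (1, 2, 8, 120), (1, 3, 8, 150),
         (2, 4, 6, 100), (3, 4, 6, 100)]
n_nodes = 5
fixed = {0: 100.0, 4: 0.0}                 # inlet and outlet boundary pressures

def node_pressures(segs):
    L = np.zeros((n_nodes, n_nodes))
    for a, b, d, length in segs:
        g = d**4 / length                  # Hagen-Poiseuille: conductance ~ d^4 / L
        L[a, a] += g; L[b, b] += g
        L[a, b] -= g; L[b, a] -= g
    rhs = np.zeros(n_nodes)
    for node, p in fixed.items():          # impose boundary pressures
        L[node, :] = 0.0; L[node, node] = 1.0; rhs[node] = p
    return np.linalg.solve(L, rhs)

baseline = node_pressures(edges)
occluded = node_pressures([e for e in edges if (e[0], e[1]) != (1, 2)])
print("baseline pressures:   ", np.round(baseline, 1))
print("after occluding 1-2:  ", np.round(occluded, 1))   # downstream node loses perfusion
```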

(Image: Andreas Weil)

Filed under cerebral cortex blood vessels dementia oxygen levels blood flow animal model neuroscience science

18 notes

PD-Like Sleep and Motor Problems Observed in α-Synuclein Mutant Mice

The presence of Lewy bodies in nerve cells, formed by intracellular deposits of the protein α-synuclein, is a characteristic pathologic feature of Parkinson’s disease (PD). In the quest for an animal model of PD that mimics the motor and non-motor symptoms of human PD, scientists have developed strains of mice that overexpress α-synuclein. By studying a strain of mice bred to overexpress α-synuclein via the Thy-1 promoter, scientists have found that these mice develop many of the age-related progressive motor symptoms of PD and demonstrate changes in sleep and anxiety. Their results are published in the latest issue of the Journal of Parkinson’s Disease.

PD is the second most common neurodegenerative disorder in the United States, affecting approximately one million Americans and five million people worldwide. Its prevalence is projected to double by 2030. The most obvious symptoms are movement-related, such as involuntary shaking and muscle stiffness; non-motor symptoms, such as increases in anxiety and sleep disturbances, can appear prior to the onset of motor symptoms. Although the drug levodopa can relieve some symptoms, there is no cure – intensifying the pressure to find an animal model that can help clarify the pathological processes underlying human PD and find new medications to treat the pathology and/or relieve symptoms. 

Investigators at the National Institute on Aging compared wild-type mice with specially bred mice that were transgenic for the A53T mutation of the human α-synuclein (SNCA) gene under the control of a human thymus cell antigen 1, theta (THY-1) promoter. As the mice aged, their motor performance on a rotarod test (which measures how long a mouse can remain on a rotating rod) became impaired, and their strides were significantly shorter than those of the wild-type control mice.

The study also found that SNCA mice displayed fragmented nighttime activity patterns compared to wild type controls and appeared to have a reduced overall sleep time. “Despite the prevalence of abnormal sleep patterns in PD, very few studies to date have outlined sleep disturbances in animal models of PD,” says Sarah M. Rothman, PhD, a researcher with the National Institute on Aging, in Baltimore, MD.

Many PD patients typically show an increase in anxiety and depression, and in this respect the SNCA mouse model did not replicate the human condition. SNCA mice displayed an early and significant decrease in anxiety-like behavior that persisted throughout their lifespan, as shown by both open field and elevated plus maze tests (in which mice have the choice of spending time in open or closed arms of a maze). Other rodent models that utilize changes in expression of α-synuclein have also reported lower anxiety levels. The authors suggest that higher levels of serotonin found in the hypothalamus of the SNCA mice may be associated with the reduced anxiety observed.

The authors say it is important to remember that the SNCA “model utilizes the presence of a mutation that only occurs very rarely in PD. While all PD patients display α-synuclein pathology, they do not all express the mutated form of the protein,” says Dr. Rothman.

(Source: alphagalileo.org)

Filed under parkinson's disease α-synuclein sleep anxiety serotonin animal model motor performance neuroscience science

140 notes

Researchers focus on a brain protein and an antibiotic to block cocaine craving

A new study conducted by a team of Indiana University neuroscientists demonstrates that GLT1, a protein that clears glutamate from the brain, plays a critical role in the craving for cocaine that develops after only several days of cocaine use.

The study, appearing in The Journal of Neuroscience, showed that when rats taking large doses of cocaine are withdrawn from the drug, the production of GLT1 in the nucleus accumbens, a region of the brain implicated in motivation, begins to decrease. But if the rats receive ceftriaxone, an antibiotic used to treat meningitis, GLT1 production increases during the withdrawal period and decreases cocaine craving.

George Rebec, professor in the Department of Psychological and Brain Sciences, said drug craving depends on the release of glutamate, a neurotransmitter involved in motivated behavior. Glutamate is released in response to the cues associated with drug taking, so when addicts are exposed to these cues, their drug craving increases even if they have been away from the drug for some time.

The same behavior can be modeled in rats. When rats that self-administer cocaine by pressing a lever that delivers the drug are withdrawn from it for several weeks, their craving returns if they are exposed to the cues that accompanied drug delivery in the past; in this case, a tone and light. But if the rats are treated with ceftriaxone during withdrawal, they no longer seek cocaine when the cues are presented.

Ceftriaxone appears to block craving by reversing the decrease in GLT1 caused by repeated exposure to cocaine. In fact, ceftriaxone increases GLT1, which allows glutamate to be cleared quickly from the brain. The Rebec research group localized this effect to the nucleus accumbens by showing that if GLT1 was blocked in this brain region even after ceftriaxone treatment, the rats would relapse.

While an earlier paper of Rebec’s group showed the effects of ceftriaxone on cocaine craving, the new paper was the first to localize the effects of ceftriaxone to the nucleus accumbens and was the first to show that ceftriaxone works after long withdrawal periods.

"The idea is that increasing GLT1 will prevent relapse. If we block GLT1, the ceftriaxone should not work," Rebec said. "We now have good evidence that ceftriaxone is acting on GLT1 and that the nucleus accumbens is the critical site."

Rebec said prior work on Huntington’s disease, a neurodegenerative disorder, alerted him and his team to the way ceftriaxone acts on the expression of GLT1, a protein that removes glutamate from the brain. Glutamate removal is a problem in Huntington’s disease, and Rebec’s team found that ceftriaxone increases GLT1 and improves neurological signs of the disease in mouse models.

It now is important to determine why cocaine decreases GLT1 and to see whether other drugs of abuse have the same effect. Rebec and colleagues have shown that ceftriaxone also can decrease the craving for alcohol in rats selectively bred to prefer alcohol.

Drug cues are one factor that can trigger relapse. Future work also will examine whether ceftriaxone can block drug craving induced by stress or by re-exposure to the drug. If so, it would mean that GLT1 could become an important target in the search for treatments to prevent drug relapse. Now, Rebec said, there are a number of factors to study. “We don’t yet know how long the effects of ceftriaxone last. Does an addict have to be on it for a month or will it lose its effectiveness? We don’t yet know what will happen.”

In the cocaine study, the rats self-administered cocaine for six hours a day for up to 11 days. Their behavior was much like that of a human addict.

"You might think that because they’re in there, they just take more, but they don’t just take more," Rebec said. "Like human addicts, they take the drug more and more rapidly and they want to get to it more and more quickly."

Withdrawal serves as an incubation period during which craving increases if it is activated by cues or other factors. “Something changes in the brain during that time to trigger the craving or make it more likely that you want the drug,” Rebec said. “That’s what ceftriaxone seems to be interfering with.”

Ceftriaxone is now in clinical trials on people with ALS, also known as Lou Gehrig’s disease, which has many mechanisms in common with other neurodegenerative diseases such as Huntington’s disease and Alzheimer’s.

Filed under cocaine cocaine use nucleus accumbens glutamate ceftriaxone animal model neuroscience science

84 notes

Engineered stem cell advance points toward treatment for ALS

Transplantation of human stem cells in an experiment conducted at the University of Wisconsin-Madison improved survival and muscle function in rats used to model ALS, a disease that destroys the nerves controlling muscles, causing death by respiratory failure.

ALS (amyotrophic lateral sclerosis) is sometimes called “Lou Gehrig’s disease.” According to the ALS Association, the condition strikes about 5,600 Americans each year. Only about half of patients are alive three years after diagnosis. 

In work recently completed at the UW School of Veterinary Medicine, Masatoshi Suzuki, an assistant professor of comparative biosciences, and his colleagues used adult stem cells from human bone marrow and genetically engineered the cells to produce compounds called growth factors that can support damaged nerve cells.

The researchers then implanted the cells directly into the muscles of rats that were genetically modified to have symptoms and nerve damage resembling ALS.

In people, the motor neurons that trigger contraction of leg muscles are up to three feet long. These nerve cells are often the first to suffer damage in ALS, but it’s unclear where the deterioration begins. Many scientists have focused on the closer end of the neuron, at the spinal cord, but Suzuki observes that the distant end, where the nerve touches and activates the muscle, is often damaged early in the disease.

The connection between the neuron and the muscle, called the neuro-muscular junction, is where Suzuki focuses his attention. “This is one of our primary differences,” Suzuki says. “We know that the neuro-muscular junction is a site of early deterioration, and we suspected that it might be the villain in causing the nerve cell to die. It might not be an innocent victim of damage that starts elsewhere.”

Previously, Suzuki found that injecting glial cell line-derived neurotrophic factor (GDNF) at the junction helped the neurons survive. The new study, published in the journal Molecular Therapy on May 28, expands the research to show a similar effect from a second compound, called vascular endothelial growth factor.

In the study, Suzuki found that using stem cells to deliver vascular endothelial growth factor alone improved survival and delayed the onset of disease and the decline in muscle function. That result mirrored his earlier study with GDNF.

But the real advance, Suzuki says, was finding an even better result from using stem cells that create both of these two growth factors. “In terms of disease-free time, overall survival, and sustaining muscle function, we found that delivering the combination was more powerful than either growth factor alone. The results would provide a new hope for people with this terrible disease.”

The new research was supported by the ALS Association, the National Institutes of Health, the University of Wisconsin Foundation, and other groups. 

The injected stem cells survived for at least nine weeks, but did not become neurons. Instead, their contribution was to secrete one or both growth factors. 

Originally, much of the enthusiasm for stem cells focused on the hope of replacing damaged cells, but Suzuki’s approach is different. “These motor nerve cells have extremely long connections, and replacing these cells is still challenging. But we aim to keep the neurons alive and healthy using the same growth factors that the body creates, and that’s what we have shown here.”

For the test, Suzuki used ALS model rats with a mutation that is found in a small percentage of ALS patients who have a genetic form of the disease. “This model has been accepted as the best test bed for ALS experiments,” says Suzuki. 

By using adult mesenchymal stem cells, the technique avoided the danger of tumors that can arise with the transplantation of embryonic stem cells and related “do-anything” cells. Importantly, mesenchymal stem cells have already been used in clinical trials for various human diseases.

In the future, Suzuki hopes to apply his approach by using clinical grade stem cells. “Because this is a fatal and untreatable disease, we hope this could enter a clinical trial relatively soon.”

Filed under ALS Lou Gehrig’s disease animal model stem cells GDNF neurobiology neuroscience science

70 notes

Pitt team finds mechanism that causes noise-induced tinnitus and drug that can prevent it

An epilepsy drug shows promise in an animal model at preventing tinnitus from developing after exposure to loud noise, according to a new study by researchers at the University of Pittsburgh School of Medicine. The findings, reported this week in the early online version of the Proceedings of the National Academy of Sciences, reveal for the first time the reason the chronic and sometimes debilitating condition occurs.

An estimated 5 to 15 percent of Americans hear whistling, clicking, roaring and other phantom sounds of tinnitus, which typically is induced by exposure to very loud noise, said senior investigator Thanos Tzounopoulos, Ph.D., associate professor and member of the auditory research group in the Department of Otolaryngology, Pitt School of Medicine.

"There is no cure for it, and current therapies such as hearing aids don’t provide relief for many patients," he said. "We hope that by identifying the underlying cause, we can develop effective interventions."

The team focused on an area of the brain that is home to an important auditory center called the dorsal cochlear nucleus (DCN). From previous research in a mouse model, they knew that tinnitus is associated with hyperactivity of DCN cells — they fire impulses even when there is no actual sound to perceive. For the new experiments, they took a close look at the biophysical properties of tiny channels, called KCNQ channels, through which potassium ions travel in and out of the cell.

"We found that mice with tinnitus have hyperactive DCN cells because of a reduction in KCNQ potassium channel activity," Dr. Tzounopoulos said. "These KCNQ channels act as effective "brakes" that reduce excitability or activity of neuronal cells."

In the model, sedated mice are exposed in one ear to a 116-decibel sound, about the loudness of an ambulance siren, for 45 minutes, which was shown in previous work to lead to the development of tinnitus in 50 percent of exposed mice. Dr. Tzounopoulos and his team tested whether an FDA-approved epilepsy drug called retigabine, which specifically enhances KCNQ channel activity, could prevent the development of tinnitus. Thirty minutes into the noise exposure and twice daily for the next five days, half of the exposed group was given injections of retigabine.

Seven days after noise exposure, the team determined whether the mice had developed tinnitus by conducting startle experiments, in which a continuous 70-decibel tone is played, stopped briefly to create a silent gap, resumed, and then interrupted by a much louder startle pulse. Mice with normal hearing perceive the gap and register that something has changed, so they are less startled by the loud pulse than mice with tinnitus, whose phantom noise masks the moment of silence between the background tones.
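
In gap-startle paradigms of this kind, tinnitus is typically inferred by comparing startle amplitudes on trials where the silent gap precedes the pulse with trials where it does not; animals that hear the gap startle less. Below is a minimal sketch of that comparison, with made-up amplitudes and an arbitrary cutoff rather than the paper's actual scoring criteria.

```python
# Minimal sketch of how a gap-startle comparison might be scored
# (hypothetical data and threshold; not the authors' analysis pipeline).
import numpy as np

def gap_suppression_ratio(startle_with_gap, startle_no_gap):
    """Ratio < 1 means the gap suppressed the startle, i.e. the gap was heard."""
    return np.mean(startle_with_gap) / np.mean(startle_no_gap)

# Example: per-trial startle amplitudes (arbitrary units) for one mouse.
no_gap   = np.array([5.1, 4.8, 5.4, 5.0, 4.9])
with_gap = np.array([2.4, 2.9, 2.6, 3.1, 2.7])   # strongly suppressed -> gap detected

ratio = gap_suppression_ratio(with_gap, no_gap)
print(f"suppression ratio = {ratio:.2f}")
print("tinnitus-like" if ratio > 0.9 else "normal gap detection")  # 0.9 is an arbitrary cutoff
```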

The researchers found that mice that were treated with retigabine immediately after noise exposure did not develop tinnitus. Consistent with previous studies, 50 percent of noise-exposed mice that were not treated with the drug exhibited behavioral signs of the condition.
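
A difference in incidence like this (roughly 50 percent versus none) is the kind of outcome one could check with a simple comparison of proportions; the sketch below uses hypothetical group sizes, since the paper's actual counts are not given here.

```python
# Sketch: comparing tinnitus incidence between retigabine-treated and
# untreated noise-exposed groups with Fisher's exact test.
# Group sizes below are hypothetical, not the paper's actual numbers.
from scipy.stats import fisher_exact

#             tinnitus  no tinnitus
untreated = [10, 10]   # roughly half of untreated mice develop tinnitus
treated   = [0, 20]    # no treated mice develop tinnitus

odds_ratio, p_value = fisher_exact([untreated, treated])
print(f"p = {p_value:.4f}")   # a small p suggests incidence differs between groups
```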

"This is an important finding that links the biophysical properties of a potassium channel with the perception of a phantom sound," Dr. Tzounopoulos said. "Tinnitus is a channelopathy, and these KCNQ channels represent a novel target for developing drugs that block the induction of tinnitus in humans."

The KCNQ family comprises five different subunits, four of which are sensitive to retigabine. Dr. Tzounopoulos and his collaborators aim to develop a drug that is specific for the two KCNQ subunits involved in tinnitus, to minimize the potential for side effects.

"Such a medication could be a very helpful preventive strategy for soldiers and other people who work in situations where exposure to very loud noise is likely," Dr. Tzounopoulos said. "It might also be useful for other conditions of phantom perceptions, such as pain in a limb that has been amputated."

(Source: eurekalert.org)

Filed under tinnitus noise exposure potassium channels dorsal cochlear nucleus animal model neuroscience science

81 notes

‘Should I stay or should I go?’ CSHL scientists link brain cell types to behavior

You are sitting on your couch flipping through TV channels trying to decide whether to stay put or get up for a snack. Such everyday decisions about whether to “stay” or to “go” are supported by a brain region called the anterior cingulate cortex (ACC), which is part of the prefrontal cortex. Neuroscientists from Cold Spring Harbor Laboratory (CSHL) have now identified key circuit elements that contribute to such decisions in the ACC.

CSHL Associate Professor Adam Kepecs and his team publish results that, for the first time, link specific brain cell types to a particular behavior pattern in mice – a “stay or go” pattern called foraging behavior. The paper, published online in Nature, shows that the firing of two distinct types of inhibitory neurons, known as somatostatin (SOM) and parvalbumin (PV) neurons, has a strong correlation with the start and end of a period of foraging behavior.

Linking specific neuronal types to well-defined behaviors has proved extremely difficult. “There’s a big gap in our knowledge between our understanding of neuron types in terms of their physical location and their place in any given neural circuit, and what these neurons actually do during behavior,” says Kepecs.

Part of the problem is the technical challenge of doing these studies in live, freely behaving mice. Key to solving that problem is a mouse model developed in the laboratory of CSHL Professor Z. Josh Huang. The mouse has a genetic modification that allows investigators to target a specific population of neurons with any protein of interest.

Kepecs’ group, led by postdocs Duda Kvitsiani and Sachin Ranade, used this mouse to label specific neuron types in the ACC with a light-activated protein – a technique known as optogenetic tagging. Whenever they shone light onto the brains of the mice they were recording from, only the tagged PV and SOM neurons responded promptly with a ‘spike’ in their activity, enabling the researchers to pick them out from the vast diversity of cellular responses seen at any given moment.
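
In optogenetic tagging analyses, a recorded unit is generally classified as tagged when it spikes reliably at short latency after each light pulse. The sketch below illustrates that latency-and-reliability test; the 10-millisecond window and 80 percent reliability threshold are illustrative choices, not the criteria used in the paper.

```python
# Sketch of optogenetic tagging: a unit counts as "tagged" if it fires
# reliably at short latency after laser pulses. Thresholds are illustrative.
import numpy as np

def is_tagged(spike_times, laser_onsets, window=0.01, min_reliability=0.8):
    """spike_times and laser_onsets in seconds; window = 10 ms after each pulse."""
    spike_times = np.asarray(spike_times)
    responded = [
        np.any((spike_times >= t) & (spike_times < t + window))
        for t in laser_onsets
    ]
    return np.mean(responded) >= min_reliability

laser = np.arange(0.0, 10.0, 1.0)                     # one pulse per second
tagged_unit   = np.concatenate([laser + 0.004,        # spikes 4 ms after each pulse
                                np.random.uniform(0, 10, 20)])
untagged_unit = np.random.uniform(0, 10, 30)          # no light-locked response

print(is_tagged(np.sort(tagged_unit), laser))         # True
print(is_tagged(np.sort(untagged_unit), laser))       # almost certainly False
```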

The team recorded neural activity in the ACC of these mice while they engaged in foraging behavior. They discovered that the PV and SOM inhibitory neurons responded around the time of the foraging decisions — in other words, whether to stay and drink or go and explore elsewhere. Specifically, when the mice entered an area where they could collect a water reward, SOM inhibitory neurons shut down and entered a period of low-level activity, thereby opening a ‘gate’ for information to flow into the ACC. When the mice decided to leave that area and look elsewhere, PV inhibitory neurons fired and abruptly reset cell activity.
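
Correlations like these are usually summarized by aligning each neuron's spikes to the behavioral event (entering or leaving the reward area) and averaging across trials, producing a peri-event time histogram. Here is a minimal sketch with illustrative spike and event times, not the paper's data.

```python
# Sketch of a peri-event time histogram (PETH): firing rate aligned to a
# behavioral event such as entering or leaving the reward port.
# Spike and event times below are illustrative, not the paper's data.
import numpy as np

def peth(spike_times, event_times, window=(-1.0, 1.0), bin_size=0.05):
    """Mean firing rate (Hz) in bins around each event."""
    edges = np.arange(window[0], window[1] + bin_size, bin_size)
    counts = np.zeros(len(edges) - 1)
    for t in event_times:
        counts += np.histogram(spike_times - t, bins=edges)[0]
    return counts / (len(event_times) * bin_size), edges

spikes  = np.sort(np.random.uniform(0, 600, 3000))   # ~5 Hz background over 10 minutes
entries = np.arange(10, 600, 20.0)                    # reward-area entries every 20 s
rate, edges = peth(spikes, entries)
print(rate.round(1))  # a SOM-like unit would show a dip after entry; a PV-like unit a burst at departure
```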

“The brain is complex and continuously active, so it makes sense that these two types of inhibitory interneurons define the boundaries of a behavior such as foraging, opening and then closing the ‘gate’ within a particular neural circuit through changes in their activity,” says Kepecs.

This is an important advance, addressing a problem in behavioral neuroscience that scientists call “the cortical response zoo.” When researchers record neural activity in the cortex during behavior without knowing which type of neurons they are recording from, they see a bewildering array of responses, which greatly complicates interpretation. Hence the significance of the Kepecs team’s results, which show for the first time that specific cortical neuron types can be linked to specific aspects of behavior.

“We think about the brain and behavior in terms of levels; what the cell types are and the circuits or networks they form; which regions of the brain they are in; and what behavior is modulated by them,” explains Kepecs. “By observing that the activity of specific cell types in the prefrontal cortex is correlated with a behavioral period, we have identified a link between these levels.”

Filed under anterior cingulate cortex prefrontal cortex foraging behavior animal model neurons neuroscience science
