Neuroscience

Articles and news from the latest research reports.

247 notes

Nature or nurture? It’s all about the message

Were Albert Einstein and Leonardo da Vinci born brilliant or did they acquire their intelligence through effort?
No one knows for sure, but telling people the latter – that hard work trumps genes – causes instant changes in the brain and may make them more willing to strive for success, indicates a new study from Michigan State University.
The findings suggest the human brain is more receptive to the message that intelligence comes from the environment, regardless of whether it’s true. And this simple message, said lead investigator Hans Schroder, may ultimately prompt us to work harder.
“Giving people messages that encourage learning and motivation may promote more efficient performance,” said Schroder, a doctoral student in clinical psychology whose work is funded by the National Science Foundation. “In contrast, telling people that intelligence is genetically fixed may inadvertently hamper learning.”
In past research by Stanford University psychologist Carol Dweck, elementary students performing a task were either praised for their intelligence (“You’re so smart!”) or for their effort (“You worked really hard!”) after correct responses. As the task became harder, children in the first group performed worse after their mistakes compared to the group that had heard effort was important.
The MSU study, which appears online in the journal Biological Psychology, offers what could be the first physiological evidence to support those findings, in the form of a positive brain response. “These subtle messages seem to have a big impact, and now we can see they have an immediate impact on how the brain handles information about performance,” Schroder said.
For the study, two groups of participants read different articles. One article reported that intelligence is largely genetic, while the other said the brilliance of da Vinci and Einstein was “probably due to a challenging environment. Their genius had little to do with genetic structure.”
Participants were instructed to remember the main points of the article and completed a simple computer task while their brain activity was recorded. The findings, in a nutshell:
  • The group that read that intelligence was primarily genetic paid more attention to their responses, as if they were more concerned with their performance. This extra attention, however, did not relate to performance on trials after errors, suggesting a disconnect between brain and behavior.
  • In contrast, those who had read that intelligence was due to a challenging environment showed a more efficient brain response after they made a mistake, possibly because they believed they could do better on the next trial. The more attention these participants paid to mistakes, the faster their responses were on the next trial.
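The second of these findings describes a correlation: the more attention a participant's brain paid to a mistake, the faster the next response. A minimal Python sketch of that kind of analysis, with entirely invented numbers (the actual study used EEG amplitudes and reaction times not reproduced here):

```python
import statistics

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-participant values: error-response amplitude (microvolts)
# and mean reaction time on the trial after an error (milliseconds).
amplitude = [4.1, 5.3, 6.0, 7.2, 8.5]
post_error_rt = [520, 500, 470, 455, 430]

r = pearson_r(amplitude, post_error_rt)
print(round(r, 2))  # strongly negative: bigger error response, faster next trial
```

A negative r here is the "efficient" pattern the environment group showed; the genetic group's extra attention showed no such link to behavior.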
The study does not weigh in on the age-old “nature vs. nurture” debate, Schroder noted. Rather, it investigates the messages about the nature of abilities people are exposed to on a regular basis, from a teacher comforting a student (“It’s OK, not everyone can be a math person.”) to the sports announcer commenting on a player’s skill (“Wow, what a natural!”). These messages are thought to contribute to the attitudes or “mindsets” people hold about their intelligence and abilities.
The research started as part of Schroder’s honors thesis as an undergraduate at MSU working in the Clinical Psychophysiology Lab directed by Jason Moser, MSU assistant professor. Moser co-authored the study along with Tim Moran, an MSU graduate student in cognitive psychology, and Brent Donnellan, a former MSU professor who now works at Texas A&M University.
As an undergraduate and graduate student, Schroder has already co-written nine papers that have appeared in academic journals, including five as lead author. His work is supported by a three-year grant from the NSF’s Graduate Research Fellowship Program.

Filed under intelligence mindsets brain activity cognition psychology neuroscience science

121 notes

Longitudinal study explores white matter damage, cognition after traumatic axonal injury
Traumatic axonal injury is a form of traumatic brain injury that can damage the integrity of the brain’s white matter and lead to cognitive impairments. A new study from the Center for BrainHealth at The University of Texas at Dallas investigated white matter damage in the acute and chronic stages of a traumatic axonal injury in an effort to better understand what long-term damage may result.
The study, published online July 21 in the Journal of Neurotrauma, looked at 13 patients ages 16 to 60 with mild to severe brain injuries from the intensive care unit at a Level I trauma center. This group was matched for age, gender, and ethnicity to a cohort of 10 healthy individuals. White matter integrity was measured using diffusion tensor imaging (DTI) in the acute stage of injury, at day one, and again at the chronic stage, seven months post-injury. In addition, neuropsychological assessments measured cognitive performance, including processing speed, attention, learning, and memory, at both stages after injury.
“We intended to determine whether DTI could not only identify early compromise to white matter, but also demonstrate an association with functional and neuropsychological outcomes months post-injury,” said Carlos Marquez de la Plata, Ph.D., Assistant Director of Rehabilitation Research at Pate Rehabilitation in Dallas, Texas.
The study’s findings suggest DTI may be used to detect meaningful changes in white matter as early as one day after a traumatic brain injury. White matter integrity measured at the chronic stage was also found to significantly correlate with cognitive processing speed.
“On the first day after the injury, we found white matter integrity was compromised due to swelling in the brain,” said the study’s lead author Alison Perez. “As the swelling subsided over time and the brain began to repair itself, we found that many of the damaged neurons that were unable to repair themselves began to die off, which appears to slow the speed of cognitive processing.”
Interestingly, the degree of white matter compromise detected early after injury was associated with markers of injury severity, such as the number of days spent in the intensive care unit and hospital, but not with outcomes months later.
At seven months post-injury, many aspects of the patients’ cognitive performance had improved, including processing speed, divided attention, and short- and long-term memory. In addition, patients with better white matter integrity at the chronic stage had the fastest processing speed.
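Findings like the day-one compromise are typically quantified by comparing each patient to the matched control cohort. A minimal sketch of a z-score comparison, with invented fractional anisotropy (FA) values standing in for white matter integrity (not the study's data):

```python
import statistics

# Hypothetical FA values for the matched healthy cohort.
controls_fa = [0.52, 0.54, 0.55, 0.53, 0.56, 0.54, 0.55, 0.53, 0.54, 0.54]

def z_score(value, reference):
    """How many control standard deviations a patient sits from the control mean."""
    mu = statistics.fmean(reference)
    sd = statistics.stdev(reference)  # sample SD of the control cohort
    return (value - mu) / sd

acute_fa = 0.47    # day one: swelling lowers measured integrity
chronic_fa = 0.50  # seven months: partial recovery

print(round(z_score(acute_fa, controls_fa), 1))
print(round(z_score(chronic_fa, controls_fa), 1))
```

In this toy example the acute scan sits far below the control range and the chronic scan recovers partway, mirroring the trajectory the article describes.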
By studying the long-term effects of a traumatic axonal injury at both the acute and chronic stages, researchers hope to assist in the advancement of future assessment and treatment options after a traumatic brain injury.

Filed under white matter axonal injury diffusion tensor imaging TBI neuroscience science

119 notes

Researchers unlock new mechanism in pain management
It is in the brain that we perceive the unpleasant sensations of pain, and researchers have long been examining how calcium channels in the brain and peripheral nervous system contribute to the development of chronic pain conditions.
Neuroscientist Gerald Zamponi, PhD, and his team at the University of Calgary’s Hotchkiss Brain Institute have discovered a new mechanism that can reverse chronic pain. Using an animal model, their research has found that pain signals in nerve cells can be shut off by interfering with the communication of a specific enzyme with calcium channels, a group of important proteins that control nerve impulses.
Their Canadian Institutes of Health Research-funded study was published in the September issue of Neuron — one of the most influential journals in the field of neuroscience.
Zamponi is now applying his research and partnering with the Centre for Drug Research and Development (CDRD) in Vancouver to develop a drug that could one day improve the lives of people with inflammatory pain, such as arthritis or irritable bowel disease, or with neuropathic pain, by reducing the pain associated with these conditions.
Opening the door to new treatments
“Chronic pain can be a debilitating condition that affects many people and is often poorly controlled by currently available treatments. Therefore, new treatment avenues are needed. Our discovery opens the door towards new treatments, and based on the data that we have so far, it is a viable strategy,” says Zamponi, the lead author of the study and senior associate dean of research at the Cumming School of Medicine.
With CDRD, Zamponi and his team are screening more than 100,000 molecules in hopes of finding one that stops the enzyme from communicating with the calcium channel. If they can isolate the right molecule, they can potentially turn it into a drug. Two viable molecules have already been validated by his group as painkillers in animals.
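The logic of such a screen is a filter over assay scores. A toy sketch of that logic only, with hypothetical molecule names and numbers (nothing here models real chemistry or CDRD's actual pipeline):

```python
# Hypothetical assay results: fraction of enzyme-channel binding each
# candidate molecule blocks in a toy screening assay.
assay = {
    "mol-00041": 0.12,
    "mol-00592": 0.87,  # strong blocker: candidate painkiller lead
    "mol-07215": 0.45,
    "mol-31008": 0.91,  # strong blocker
    "mol-99340": 0.05,
}

HIT_THRESHOLD = 0.80  # keep molecules blocking at least 80% of binding

# Hits are the candidates that clear the threshold, sorted for stable output.
hits = sorted(name for name, blocked in assay.items() if blocked >= HIT_THRESHOLD)
print(hits)  # the two "viable molecules" of this toy screen
```

A real screen adds counter-assays and dose-response follow-up, but the first pass is exactly this kind of threshold filter applied at scale.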
Promising innovation from basic research
Commercialization of the project Zamponi and his team are working on is one of six efforts funded through the Alberta/Pfizer Translational Research Fund Opportunity competition. “AIHS is delighted that the strong partnership created with Pfizer, Western Economic Diversification, and Alberta Innovation and Advanced Education is helping to develop promising innovations from basic research into technologies, drugs, and tools to improve health,” says Dr. Cy Frank, president and CEO of Alberta Innovates – Health Solutions.
The Alberta/Pfizer Translational Research Fund Opportunity is a partnership between Pfizer Canada Inc., Alberta Innovates – Health Solutions, Alberta’s Ministry of Innovation and Advanced Education, and Western Economic Diversification Canada. This partnership will provide opportunities to focus on the development and commercialization of innovations in health. More than $3.25 million has been committed to identify and support promising health-care innovations with market potential.

Filed under pain chronic pain USP5 calcium channel neuroscience science

106 notes

(Image caption: A thalamocortical, or TC neuron labeled with fluorescent dye, as used in Dr. Augustinaite’s study. The image shows a voltage recording device, at bottom left, entering the yellow cell body, and a stimulation device, at top, reaching the dendrites. Color in this image shows the depth in the slice.)
To See or Not to See
The brain is a complicated network of small units called neurons, all working to carry information from the outside world, create an internal model, and generate a response. Neurons sense a signal through branching dendrites, carry this signal to the cell body, and send it onwards through a long axon to signal the next neuron. However, neurons can function in many different ways, some of which researchers are still exploring. Some signals that the dendrites receive do not continue to the next neuron; instead, they seem to change the way the neuron handles subsequent signals. This could help neurons function as part of a large network, but researchers still have many questions. Dr. Sigita Augustinaite, a researcher in the Optical Neuroimaging Unit at the Okinawa Institute of Science and Technology Graduate University, suggested one mechanism explaining how neurons help the network function. Her findings, part of a collaboration between the University of Oslo and OIST, were published August 13, 2014, as the cover article in The Journal of Neuroscience.
Dr. Augustinaite studies the visual pathway, in which signals travel from the retina to the visual cortex, where the brain interprets them. Between the eye and the visual cortex, the signals must pass through the visual thalamus, that is, through thalamocortical (TC) neurons. These neurons can switch between a “sleeping” state and a “waking” state depending on input they receive from neurons and other brain areas. When an animal is awake, TC neurons transmit the incoming retinal signals on to the cortex, but when the animal is asleep, the neurons block retinal signals.
The visual cortex also sends a massive input back to TC neurons to control retinal signals traveling through the thalamus. But Dr. Augustinaite says that the suggested mechanisms of this control bring more questions than answers. To understand more, she conducted experiments in acute brain slices, small pieces of brain tissue where neurons stay alive and maintain their physiological properties. She added glutamate to dendrites far from the cell body to emulate a feedback signal from the visual cortex. Then she measured the neuron’s response, shown as a voltage difference between inside and outside of the membrane.
Dr. Augustinaite found that stimulating the neurons in this way depolarizes their membranes, creating something called NMDA spike/plateau potentials. If strong enough, depolarization can cause a neuron to fire an action potential, which travels through the axon to activate other neurons. Action potentials look like a sharp, one-millisecond increase in membrane voltage, and they transmit signals from retina to cortex. But if NMDA spike/plateau potentials induced action potentials, signals from the cortex and signals from the retina would be indistinguishable. With her experiments, Dr. Augustinaite showed that the NMDA spike/plateau potentials in TC neurons do not trigger action potentials. Instead, they lift the voltage of the membrane, changing the neuron’s properties for a few hundred milliseconds and creating conditions for reliable signal transmission from retina to cortex.
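The proposed mechanism, a plateau that enables spikes without itself causing them, can be caricatured with a simple threshold model. All numbers below are illustrative placeholders, not measurements from the study:

```python
REST = -70.0         # resting membrane potential, mV
THRESHOLD = -50.0    # spike threshold, mV
PLATEAU = 15.0       # depolarization contributed by the NMDA plateau, mV
RETINAL_EPSP = 10.0  # depolarization from one retinal input, mV

def spikes(*inputs):
    """Does the summed depolarization cross spike threshold?"""
    return REST + sum(inputs) >= THRESHOLD

print(spikes(RETINAL_EPSP))           # retinal input alone: no spike
print(spikes(PLATEAU))                # plateau alone: no spike (the key finding)
print(spikes(PLATEAU, RETINAL_EPSP))  # plateau + retinal input: spike
```

The plateau never fires the cell on its own, but while it lasts, retinal inputs of a fixed size reliably cross threshold: one input gating how another moves through the network.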
“The research gives, for the first time, a clear view on what dendritic potentials are good for,” explained Prof. Bernd Kuhn, who leads the lab where Dr. Augustinaite works. “It points directly to the mechanism,” he concluded. Showing how dendritic plateaus function is just one important step toward understanding how neurons function as a network. “This mechanism could also be used in many other neuronal circuits, where one input regulates how another input moves through the network,” Dr. Augustinaite said. “This mechanism is an exciting logical element in the neuronal network, but just the start of putting the puzzle together.”

Filed under neurons action potentials neural circuits dendritic integration visual cortex neuroscience science

78 notes

Seizures and sudden death: When SUMO ‘wrestles’ potassium channels

A gene crucial for brain and heart development may also be associated with sudden unexplained death in epilepsy (SUDEP), the most common cause of early mortality in epilepsy patients.

Scientists at The University of Texas MD Anderson Cancer Center have created a new animal model for SUDEP and have shown that mice with a partial deficiency of the gene SENP2 (Sentrin/SUMO-specific protease 2) are more likely to develop spontaneous seizures and sudden death. The finding emerged while observing mice originally bred for studying a link between SENP2 deficiency and cancer.

"SENP2 is highly present in the hippocampus, a critical brain region for seizure genesis," said Edward Yeh, M.D., chair of cardiology at MD Anderson. "Understanding the genetic basis for SUDEP is crucial given that the rate of sudden death in epilepsy patients is 20-fold that of the general population, with SUDEP the most common epilepsy-related cause of death."

Yeh’s findings were published in this month’s issue of Neuron.

Although it’s not yet known what causes SUDEP in humans, inactivation of potassium channel genes has been linked to SUDEP in animal models. Potassium channels are found in most cell types and control a large variety of cell functions.

"These animal models demonstrated an important connection between the brain and heart. However, it remains unclear whether seizure and sudden death are two separate manifestations of potassium channel deficiency in the brain and the heart, or whether seizures predispose the heart to lethal cardiac arrhythmia," said Yeh.

The study revealed that when SENP2 was deficient in the brain, seizures activated a part of the nervous system responsible for regulating the heart’s electrical system. This resulted in a phenomenon known as atrioventricular conduction block, which effectively slowed down and then stopped the heart.

Yeh’s team observed that the SENP2-deficient mice appeared normal at birth, but by 6 to 8 weeks, experienced convulsive seizures, and then sudden death. He believes the reason may lie with protein modifiers called SUMO. SENP2 deficiency results in a process known as hyper-SUMOylation, which dramatically impacts potassium channels in the brain.

"One of the channels, Kv7, is significantly diminished or ‘closed’ due to the lack of SENP2," said Yeh. "In mice this led to seizures and cardiac arrest."

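The idea that a diminished ("closed") Kv7 potassium conductance makes neurons hyperexcitable can be sketched with a toy integrate-and-fire model. Parameters are arbitrary illustrations, not biophysical values from the study:

```python
def count_spikes(g_k, i_input=3.0, steps=1000, dt=0.1):
    """Euler-integrate a leaky neuron; the K+ conductance g_k pulls the
    voltage back toward rest, opposing the input current."""
    v, rest, threshold, reset = -70.0, -70.0, -50.0, -70.0
    g_leak = 0.1  # baseline leak conductance (arbitrary units)
    n = 0
    for _ in range(steps):
        v += (-(g_leak + g_k) * (v - rest) + i_input) * dt
        if v >= threshold:
            n += 1   # spike, then reset the membrane
            v = reset
    return n

normal_kv7 = count_spikes(g_k=0.1)   # full Kv7 conductance: input stays subthreshold
deficient = count_spikes(g_k=0.02)   # Kv7 diminished (as by hyper-SUMOylation)

print(normal_kv7, deficient)
```

With the full conductance the same input current never reaches threshold; with Kv7 reduced, the neuron fires repeatedly, the toy analogue of a seizure-prone circuit.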
In humans, the good news is that an FDA-approved drug, retigabine, works by “opening” the Kv7 channel. The therapy was developed for treating partial-onset seizures. The findings in Yeh’s new mouse model demonstrate a previously unknown cause of SUDEP, which may open up new opportunities for study and treatment in the future.

(Source: eurekalert.org)

Filed under epilepsy SENP2 hippocampus potassium channel epileptic seizures neuroscience science

396 notes

Why HIV patients develop dementia

Since the introduction of combination anti-retroviral therapy (cART) in the mid-90s, the life expectancy of HIV patients has significantly improved. As a result, long-term complications are becoming more relevant: almost every second HIV patient is affected by neurocognitive disorders, which can lead to dementia. How these disorders arise is not yet fully understood. Researchers from Bochum have now identified mechanisms by which infected cells can activate brain-specific immune cells, which subsequently display harmful behaviour and lead to the destruction of neurons. These findings may help develop biomarkers to identify at-risk patients and, in the long term, make a therapeutic strategy possible. The study was published in the journal Experimental Neurology.

Immune cells in the brain under suspicion
“HIV-associated neurocognitive disorders” (HAND) encompass impairments of cognitive function and motor capacity as well as behavioural changes. How exactly HAND arise is not yet fully understood. “Scientists assume that HIV is harmful to cells directly and that it also triggers indirect mechanisms that lead to nerve cell damage,” explains Dr Simon Faissner (RUB clinic for neurology, St. Josef-Hospital). The researchers strongly suspect that, once activated in the brain and spinal cord, immune cells sustain a chronic level of inflammation, which then results in the destruction of nerve cells. Immune activation in peripheral tissue, as well as side effects of therapy, may likewise contribute to nerve cell damage in the brain.
First steps of HIV infection are sufficient
The HI virus crosses the blood-brain barrier by hitchhiking on infected immune cells – monocytes and probably T cells. The researchers from Bochum tested the hypothesis that HIV-infected monocytes activate specific immune cells in the brain, the so-called microglial cells. These cells, in turn, respond by releasing harmful substances such as reactive oxygen metabolites and inflammatory signalling molecules, i.e. cytokines. To test this hypothesis, the researchers developed a cell culture system in which they examined the effect of HIV-infected monocytes on microglial cells. They simulated the individual steps of HIV infection and measured the concentration of cytokines released at each stage. In this way they demonstrated that the release of viral RNA in the monocytes was sufficient to trigger maximal microglial activation. Subsequent infection phases – reverse transcription into DNA and the resulting formation of HIV proteins – did not augment activation any further.
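The stage-wise logic of that experiment – find the earliest infection step that already produces near-maximal activation – can be sketched in a few lines of Python. This is purely illustrative: the stage names follow the steps described above, but the cytokine values and the plateau tolerance are invented, not data from the study.

```python
# Hypothetical cytokine readings (arbitrary units) for each simulated
# infection stage, in experimental order. Numbers are invented.
cytokine_by_stage = {
    "uninfected_control": 12.0,
    "viral_rna_release": 96.0,
    "reverse_transcription": 97.5,
    "hiv_protein_formation": 95.8,
}

def earliest_sufficient_stage(readings, tolerance=0.05):
    """Return the first stage whose cytokine level is within `tolerance`
    (fractional) of the maximum observed level, i.e. the earliest stage
    already producing near-maximal microglial activation."""
    peak = max(readings.values())
    for stage, level in readings.items():  # dicts preserve insertion order
        if level >= peak * (1 - tolerance):
            return stage
    return None

print(earliest_sufficient_stage(cytokine_by_stage))  # viral_rna_release
```

With these made-up numbers the earliest near-maximal stage is the viral-RNA release step, mirroring the finding that the later phases added nothing further.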
Released substances result in neuronal cell death
In the second step, they analysed nerve cells from rat brains to determine whether the substances released by the microglial cells could lead to cell death. Compared with the control group, the amount of cell death was indeed twice as high. Studies of cerebrospinal fluid obtained from HIV-infected patients have shown a positive correlation with markers of neuronal degeneration in patients who did not yet present any neurocognitive disorders.
Detailed understanding necessary for therapeutic strategies
“Thanks to our research, we have gained a better understanding of the mechanisms of HIV-associated neurodegeneration,” concludes Prof Dr Andrew Chan. “These results are likely to contribute to establishing HAND biomarkers. In the long term, these data may be used to develop therapeutic strategies aimed at retarding HAND progression in HIV-infected patients.” Starting points may include targeting the activation of microglial cells – an approach applied in other autoimmune diseases of the central nervous system, for example multiple sclerosis.
Start-up through FoRUM funds
The research, which was initiated through a collaboration between the clinics for neurology and dermatology at St. Josef Hospital and the Department for Molecular and Medical Virology, was made possible through start-up funding provided by the Faculty of Medicine at Ruhr-Universität (FoRUM). The collaboration has evolved into an international consortium of clinics and basic research organisations in Bochum, Langen, Strasbourg and Milan. One objective of the follow-up study, for which an application for EU funding is pending, will be an in-depth analysis of inflammatory processes in the central nervous system. The researchers will attempt to inhibit inflammatory processes with different drugs. They are, moreover, planning to study direct cell-cell interaction by means of state-of-the-art microscopy, in collaboration with the University of Strasbourg.
(Image credit: Mehau Kulyk/Science Photo Library)

Filed under dementia neurodegeneration microglia HIV cytokines immune cells neuroscience science

161 notes

Early cerebellum injury hinders neural development, possible root of autism, theory suggests
A brain region known largely for coordinating motor control also has an overlooked role in childhood development that could reveal information crucial to understanding the onset of autism, according to Princeton University researchers.
The cerebellum — an area located in the lower rear of the brain — is known to process external and internal information such as sensory cues that influence the development of other brain regions, the researchers report in the journal Neuron. Based on a review of existing research, the researchers offer a new theory that an injury to the cerebellum during early life potentially disrupts this process and leads to what they call “developmental diaschisis,” which is when a loss of function in one part of the brain leads to problems in another region.
The researchers specifically apply their theory to autism, though they note that it could also help explain other childhood neurological conditions. Conditions within the autism spectrum present “longstanding puzzles” related to cognitive and behavioral disruptions that their ideas could help resolve, they wrote. Under their theory, cerebellar injury disrupts how other areas of the brain develop the ability to interpret external stimuli and organize internal processes, explained first author Sam Wang, an associate professor of molecular biology and the Princeton Neuroscience Institute (PNI).
"It is well known that the cerebellum is an information processor. Our neocortex [the largest part of the brain, responsible for much higher processing] does not receive information unfiltered. There are critical steps that have to happen between when external information is detected by our brain and when it reaches the neural cortex," said Wang, who worked with doctoral student Alexander Kloth and postdoctoral research associate Aleksandra Badura, both in PNI.
"At some point, you learn that smiling is nice because Mom smiles at you. We have all these associations we make in early life because we don’t arrive knowing that a smile is nice," Wang said. "In autism, something in that process goes wrong and one thing could be that sensory information is not processed correctly in the cerebellum."
Mustafa Sahin, a neurologist at Boston Children’s Hospital and associate professor of neurology at Harvard Medical School, said that Wang and his co-authors build upon known links between cerebellar damage and autism to suggest that the cerebellum is essential to healthy neural development. Numerous studies — including from his own lab — support their theory, said Sahin, who is familiar with the work but was not involved in it.
"The association between cerebellar deficits and autism has been around for a while," Sahin said. "What Sam Wang and colleagues do in this perspective article is to synthesize these two themes and hypothesize that in a critical period of development, cerebellar dysfunction may disrupt the maturation of distant neocortical circuits, leading to cognitive and behavioral symptoms including autism."
Traditionally, the cerebellum has been studied in relation to motor movement and coordination in adults. Recent studies, however, strongly suggest that it also influences childhood cognition, Wang said. Several studies also have found a correlation between cerebellar injury and the development of a disorder in the autism spectrum, the researchers report. For instance, the researchers cite a 2007 paper in the journal Pediatrics that found that individuals who experienced cerebellum damage at birth were 40 times more likely to score highly on autism screening tests. They also reference studies in 2004 and 2005 that found that the cerebellum is the most frequently disrupted brain region in people with autism.
"What we realized from looking at the literature is that these two problems — autism and cerebellar injury — might be related to each other" via the cerebellum’s influence on wider neural development, Wang said. "We hope to get people and scientists thinking differently about the cerebellum or about autism so that the whole field can move forward."
The researchers conclude by suggesting methods for testing their theory. First, by inactivating brain-cell electrical activity, it should be possible to pinpoint the developmental stage in which injury to one part of the brain affects the maturation of another. A second, more advanced method is to reconstruct the neural connections between the cerebellum and other brain regions; the federal BRAIN Initiative announced in 2013 aims to map the activity of all the brain’s neurons. Finally, mouse brains can be used to disable and restore brain-region function to observe the “upstream” effect in other areas.

Filed under cerebellum cerebellar injury autism neural development cognitive development neuroscience science

92 notes

(Image caption: A consensus shape for the calcium ion channel in the worm’s pain receptor nerve that was reached by computer modeling. Credit: Damian van Rossum and Andriy Anishkin, Duke University)
Surprising New Role for Calcium in Sensing Pain
When you accidentally touch a hot oven, you rapidly pull your hand away. Although scientists know the basic neural circuits involved in sensing and responding to such painful stimuli, they are still sorting out the molecular players.
Duke researchers have made a surprising discovery about the role of a key molecule involved in pain in worms, and have built a structural model of the molecule. These discoveries, described Sept. 2 in Nature Communications, may help direct new strategies to treat pain in people.
In humans and other mammals, a family of molecules called TRP ion channels plays a crucial role in nerve cells that directly sense painful stimuli. Researchers are now blocking these channels in clinical trials to evaluate this as a possible treatment for various types of pain.
The roundworm Caenorhabditis elegans also expresses TRP channels — one of which is called OSM-9 — in its single head pain-sensing neuron (which is similar to the pain-sensing nerve cells for the human face). OSM-9 is not only vital for detecting danger signals in the tiny worms, but is also a functional match to TRPV4, a mammalian TRP channel involved in sensing pain.
In the new study, researchers created a series of genetic mutant worms in which parts of the OSM-9 channel were disabled or replaced and then tested the engineered worms’ reactions to an overly salty solution, which is normally aversive and painful.
Specifically, the mutant worms had alterations in the pore of the OSM-9 channels in their pain-sensing neuron, which gets fired up upon channel activation to allow calcium and sodium to flow into the neuron. That, in turn, was thought to switch on the neural circuit that encodes rapid withdrawal behavior — like pulling the finger from the stove.
“People strongly believed that calcium entering the cell through the TRP channel is everything in terms of cellular activation,” said lead author Wolfgang Liedtke, M.D., Ph.D., an associate professor of neurology, anesthesiology and neurobiology at Duke University School of Medicine and an attending physician in the Duke Pain Clinics, where he sees patients with chronic head-neck and face-pain.
With then-graduate student Amanda Lindy, “we wanted to systematically mutagenize the OSM-9 pore and see what we could find in the live animal, in its pain behavior,” Liedtke said.
To the group’s surprise, changing various bits of OSM-9’s pore did not change most of the mutant worms’ reactions to the salty solution. However, these mutations did affect the flow of calcium into the cell. The disconnect they saw suggested the calcium was not playing a direct role in the worms’ avoidance of danger signals.
Calcium has been thought to be indispensable for pain behavior — not only in worms’ channels but in pain-related TRP channels in mammals. So results from the engineered OSM-9 mutant worms will change a central concept for the understanding of pain, Liedtke said.
To see whether calcium might instead play a role in the worms’ ability to adapt to repeated painful stimuli, the group then repeatedly exposed pore-mutant worms to the aversive, painful stimulus.
After the tenth trial, a normal worm becomes less sensitive to high salt. But one mutant worm with a minimal change to one specific part of its OSM-9 pore — altered so that calcium no longer entered but sodium did — was just as sensitive on the tenth trial as on the first.
The results confirmed that calcium flow through the channel makes the worms more adaptable to painful stimuli; it helps them cope with the onslaught by desensitizing them. This could well represent a survival advantage, Liedtke said.
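The contrast between the habituating normal worm and the non-adapting pore mutant can be sketched with a toy exponential model. This is a minimal illustration, not the study’s analysis: the habituation rate is an assumed number, and the only real claim it encodes is that calcium entry drives desensitization while its absence leaves the response unchanged.

```python
# Toy habituation model: the avoidance response decays by a fixed fraction
# per salt exposure when calcium enters the neuron, and not at all when
# the pore mutation blocks calcium entry. Rates are invented.

def response(trial, habituation_rate):
    """Fraction of the initial avoidance response on a given trial
    (trial 1 = first exposure)."""
    return (1 - habituation_rate) ** (trial - 1)

# Calcium-permeable channel: response fades over ten trials.
wild_type = [round(response(t, 0.15), 2) for t in range(1, 11)]
# Calcium-impermeable pore mutant: response stays at full strength.
pore_mutant = [round(response(t, 0.0), 2) for t in range(1, 11)]

print(wild_type[0], wild_type[-1])    # full response, then diminished
print(pore_mutant[0], pore_mutant[-1])  # identical on trial 1 and trial 10
```

The mutant’s flat curve corresponds to the worm that was “just as sensitive on the tenth trial as on the first.”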
To put the findings into a structural context, Liedtke collaborated with computational protein scientists Damian van Rossum and Andriy Anishkin from Penn State University, who built a structural model of OSM-9 that was based on established structures of several of the channel’s relatives, including the recently resolved structure of TRPV1, the molecule that senses pain caused by heat and hot chili peppers.
The team was then able to visualize the key parts of the OSM-9 pore in the context of the entire channel. They understood better how the pore holds its shape and allows sodium and calcium to pass.
Liedtke said that understanding this structure could be a great help in designing compounds that will not completely block the channel but will just prevent calcium from entering the cell. Although calcium helps desensitize worms to painful stimuli in the near term, it might set up chronic, pathological pain circuits in the long term, Liedtke said.
So, as a next step, the group plans to assess the longer-term effects calcium flow has in pain neurons. For example, calcium could change the expression of particular genes in the sensory neuron. And such gene expression changes could underlie chronic, pathologic pain.
“We assume, and so far the evidence is quite good, that chronic, pathological pain has to do with people’s genetic switches in their sensory system set in the wrong way, long term. That’s something our new worm model will now allow us to approach rationally by experimentation,” Liedtke said.

Filed under pain ion channels calcium influx C. elegans neurons neuroscience science

305 notes

Research hints at why stress is more devastating for some
Some people take stress in stride; others are done in by it. New research at Rockefeller University has identified the molecular mechanisms of this so-called stress gap in mice with very similar genetic backgrounds — a finding that could lead researchers to better understand the development of psychiatric disorders such as anxiety and depression.
“Like people, each animal has unique experiences as it goes through its life. And we suspect that these life experiences can alter the expression of genes, and as a result, affect an animal’s susceptibility to stress,” says senior author Bruce McEwen, Alfred E. Mirsky Professor and head of the Harold and Margaret Milliken Hatch Laboratory of Neuroendocrinology. “We have taken an important step toward explaining the molecular origins of this stress gap by showing that inbred mice react differently to stress, with some developing behaviors that resemble anxiety and depression, and others remaining resilient.”
The results, published September 2 in Molecular Psychiatry, point toward potential new markers to aid the diagnosis of stress-related disorders, such as anxiety and depression, and a promising route to the development of new treatments for these devastating disorders.
In experiments, researchers stressed the mice by exposing them to daily, unpredictable bouts of cage tilting, altered dark-light cycles, confinement in tight spaces and other conditions mice dislike, with the goal of reproducing the sort of stressful experiences thought to be a primary cause of depression in humans. Afterward, in tests to see whether the mice displayed the rodent equivalent of anxiety and depression symptoms, they found that about 40 percent showed high levels of such behaviors, including a preference for a dark compartment over a brightly lit one and a loss of interest in sugar water. The remaining 60 percent recovered well from the stress. This distinction between the susceptible mice and the resilient ones was so fundamental that it emerged even before the mice were subjected to stress: some unstressed mice showed an anxiety-like preference for a dark compartment over a lighted one.
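The behavioural split described above can be sketched as a simple threshold classification. Everything here is hypothetical – the cutoffs, the scores, and the cohort – chosen only to illustrate how a susceptible/resilient label might be assigned from dark-preference and sucrose-preference measurements.

```python
# Illustrative sketch: label a mouse "susceptible" if it both prefers the
# dark compartment and has lost interest in sugar water. Thresholds and
# data are invented for demonstration, not taken from the study.

def classify(dark_preference, sucrose_preference,
             dark_cutoff=0.7, sucrose_cutoff=0.6):
    """Return 'susceptible' for high dark preference plus low sucrose
    preference (anhedonia-like), otherwise 'resilient'."""
    if dark_preference > dark_cutoff and sucrose_preference < sucrose_cutoff:
        return "susceptible"
    return "resilient"

cohort = [
    {"id": 1, "dark": 0.85, "sucrose": 0.40},
    {"id": 2, "dark": 0.55, "sucrose": 0.80},
    {"id": 3, "dark": 0.90, "sucrose": 0.35},
    {"id": 4, "dark": 0.60, "sucrose": 0.75},
    {"id": 5, "dark": 0.50, "sucrose": 0.85},
]
labels = [classify(m["dark"], m["sucrose"]) for m in cohort]
print(labels.count("susceptible") / len(labels))  # 0.4, echoing the ~40% figure
```

In practice a study would base such a split on validated behavioural assays rather than two fixed cutoffs; the sketch only shows the shape of the decision.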
The researchers found that the highly stress-susceptible mice had less of an important molecule known as mGlu2 in a stress-involved region of the brain known as the hippocampus. The mGlu2 decrease, they determined, resulted from an epigenetic change, which affects the expression of genes, in this case the gene that codes for mGlu2.
“If you think of the genetic code as words in a book, the book must be opened in order for you to read it. These epigenetic changes, which affect histone proteins associated with DNA, effectively close the book, so the code for mGlu2 cannot be read,” says first author Carla Nasca, a postdoc in the lab and a fellow of the American Foundation for Suicide Prevention. Previously, she and colleagues implicated mGlu2 in depression when they showed that a promising potential treatment known as acetyl carnitine rapidly alleviated depression-like symptoms in rats and mice by reversing these epigenetic changes to mGlu2 and causing its levels to increase.
“Currently, depression is diagnosed only by its symptoms,” Nasca says. “But these results put us on track to discover molecular signatures in humans that may have the potential to serve as markers for certain types of depression. Our work could also lead to a new generation of rapidly acting antidepressants, such as the candidate acetyl carnitine, which would be particularly important to reduce the risk of suicide.”
A reduction in mGlu2 matters because this molecule regulates the neurotransmitter glutamate. While glutamate plays a crucial role relaying messages between neurons as part of many important processes, too much can lead to harmful structural changes in the brain.
“The brain is constantly changing. When stressful experiences lead to anxiety and depressive disorders the brain becomes locked in a state it cannot spontaneously escape,” McEwen says. “Studies like this one are increasingly focusing on the regulation of glutamate as an underlying mechanism in depression and, we hope, opening promising new avenues for the diagnosis and treatment of this devastating disorder.”

Research hints at why stress is more devastating for some

Some people take stress in stride; others are done in by it. New research at Rockefeller University has identified the molecular mechanisms of this so-called stress gap in mice with very similar genetic backgrounds — a finding that could lead researchers to better understand the development of psychiatric disorders such as anxiety and depression.

“Like people, each animal has unique experiences as it goes through its life. And we suspect that these life experiences can alter the expression of genes, and as a result, affect an animal’s susceptibility to stress,” says senior author Bruce McEwen, Alfred E. Mirsky Professor and head of the Harold and Margaret Milliken Hatch Laboratory of Neuroendocrinology. “We have taken an important step toward explaining the molecular origins of this stress gap by showing that inbred mice react differently to stress, with some developing behaviors that resemble anxiety and depression, and others remaining resilient.”

The results, published September 2 in Molecular Psychiatry, point toward potential new markers to aid the diagnosis of stress-related disorders, such as anxiety and depression, and a promising route to the development of new treatments for these devastating disorders.

In experiments, the researchers stressed the mice by exposing them to daily, unpredictable bouts of cage tilting, altered dark-light cycles, confinement in tight spaces and other conditions mice dislike, with the goal of reproducing the sort of stressful experiences thought to be a primary cause of depression in humans. Afterward, in tests to see whether the mice displayed the rodent equivalent of anxiety and depression symptoms, the researchers found that about 40 percent showed high levels of behaviors that included a preference for a dark compartment over a brightly lit one, or a loss of interest in sugar water. The remaining 60 percent recovered well from the stress. This distinction between susceptible and resilient mice was so fundamental that it emerged even before the mice were subjected to stress: some unstressed mice already showed an anxiety-like preference for the dark compartment over the lighted one.
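To make the sorting of mice into "susceptible" and "resilient" groups concrete, here is a minimal sketch of how such a classification might look in code. The two readouts (time in the lighted compartment, sugar-water preference) come from the article, but the function name, the cutoff values and the toy cohort are entirely hypothetical, not the study's actual analysis.

```python
# Illustrative sketch, NOT the study's analysis pipeline. Thresholds and
# data are hypothetical; only the two behavioral readouts come from the text.

def classify_mouse(light_time_fraction, sucrose_preference):
    """Label a mouse from two post-stress behavioral scores.

    light_time_fraction: fraction of test time spent in the lit compartment.
    sucrose_preference: fraction of fluid intake that was sugar water.
    """
    avoids_light = light_time_fraction < 0.3   # hypothetical cutoff
    anhedonic = sucrose_preference < 0.65      # hypothetical cutoff
    return "susceptible" if (avoids_light or anhedonic) else "resilient"

# Toy cohort of five mice: (light_time_fraction, sucrose_preference)
cohort = [
    (0.45, 0.85),  # explores light, likes sugar water -> resilient
    (0.15, 0.80),  # avoids the lit compartment       -> susceptible
    (0.50, 0.55),  # loss of interest in sugar water  -> susceptible
    (0.40, 0.90),
    (0.35, 0.75),
]
labels = [classify_mouse(l, s) for l, s in cohort]
print(labels.count("susceptible") / len(labels))  # prints 0.4 for this toy cohort
```

The "or" in the rule reflects that either behavior alone (light avoidance or anhedonia) counts as a depression-like sign in such paradigms.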

The researchers found that the highly stress-susceptible mice had less of an important molecule known as mGlu2 in a stress-involved region of the brain known as the hippocampus. The mGlu2 decrease, they determined, resulted from an epigenetic change (a chemical modification that alters gene expression without changing the DNA sequence itself) to the gene that codes for mGlu2.

“If you think of the genetic code as words in a book, the book must be opened in order for you to read it. These epigenetic changes, which affect histone proteins associated with DNA, effectively close the book, so the code for mGlu2 cannot be read,” says first author Carla Nasca, a postdoc in the lab and a fellow of the American Foundation for Suicide Prevention. Previously, she and colleagues implicated mGlu2 in depression when they showed that a promising potential treatment known as acetyl carnitine rapidly alleviated depression-like symptoms in rats and mice by reversing these epigenetic changes to mGlu2 and causing its levels to increase.

“Currently, depression is diagnosed only by its symptoms,” Nasca says. “But these results put us on track to discover molecular signatures in humans that may have the potential to serve as markers for certain types of depression. Our work could also lead to a new generation of rapidly acting antidepressants, such as the candidate acetyl carnitine, which would be particularly important to reduce the risk of suicide.”

A reduction in mGlu2 matters because this molecule regulates the neurotransmitter glutamate. While glutamate plays a crucial role in relaying messages between neurons as part of many important processes, too much of it can lead to harmful structural changes in the brain.

“The brain is constantly changing. When stressful experiences lead to anxiety and depressive disorders, the brain becomes locked in a state it cannot spontaneously escape,” McEwen says. “Studies like this one are increasingly focusing on the regulation of glutamate as an underlying mechanism in depression and, we hope, opening promising new avenues for the diagnosis and treatment of this devastating disorder.”

Filed under stress anxiety disorders hippocampus mGlu2 gene expression neuroscience science

219 notes

(Image caption: Aggressor cells, which have the potential to cause autoimmunity, are targeted by treatment, causing conversion of these cells to protector cells. Gene expression changes gradually at each stage of treatment, as illustrated by the color changes in this series of heat maps. Credit: University of Bristol/Dr. Bronwen Burton)

Scientists discover how to ‘switch off’ autoimmune diseases

Scientists have made an important breakthrough in the fight against debilitating autoimmune diseases such as multiple sclerosis by revealing how to stop cells attacking healthy body tissue.

Rather than the body’s immune system destroying its own tissue by mistake, researchers at the University of Bristol have discovered how cells convert from being aggressive to actually protecting against disease.

The study, funded by the Wellcome Trust, is published in Nature Communications.

It’s hoped this latest insight will lead to the widespread use of antigen-specific immunotherapy as a treatment for many autoimmune disorders, including multiple sclerosis (MS), type 1 diabetes, Graves’ disease and systemic lupus erythematosus (SLE).

MS alone affects around 100,000 people in the UK and 2.5 million people worldwide.

Scientists were able to selectively target the cells that cause autoimmune disease by dampening down their aggression against the body’s own tissues while converting them into cells capable of protecting against disease.

A similar approach, known as ‘allergic desensitisation’, has long been applied to allergies, but its potential for treating autoimmune diseases has been appreciated only recently.

The Bristol group has now revealed how the administration of fragments of the proteins that are normally the target for attack leads to correction of the autoimmune response.

Most importantly, their work reveals that effective treatment is achieved by gradually increasing the dose of antigenic fragment injected.
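The escalating-dose idea can be sketched in a few lines. The principle of starting small and increasing gradually is from the article; the starting dose, the fold increase per step and the number of steps below are illustrative assumptions, not the study's protocol.

```python
# Hypothetical dose-escalation schedule. Only the escalation principle is
# from the article; all numbers here are illustrative assumptions.

def escalating_doses(start_ug, fold, steps):
    """Return a list of doses (micrograms), each `fold` times the previous."""
    doses = [start_ug]
    for _ in range(steps - 1):
        doses.append(doses[-1] * fold)
    return doses

# E.g., four injections, each ten times the dose of the one before:
schedule = escalating_doses(start_ug=0.1, fold=10, steps=4)
print(schedule)  # prints [0.1, 1.0, 10.0, 100.0]
```

The geometric (rather than linear) ramp-up mirrors how desensitisation schedules are commonly described: tiny initial exposures that the immune system tolerates, scaled up step by step.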

In order to figure out how this type of immunotherapy works, the scientists delved inside the immune cells themselves to see which genes and proteins were turned on or off by the treatment.

They found changes in gene expression that help explain how effective treatment leads to conversion of aggressor into protector cells. The outcome is to reinstate self-tolerance whereby an individual’s immune system ignores its own tissues while remaining fully armed to protect against infection.
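The image caption accompanying this post describes heat maps in which gene expression shifts gradually at each treatment stage as aggressor cells convert to protector cells. A toy illustration of that kind of trajectory, with entirely hypothetical gene names and expression values, might look like this:

```python
# Toy illustration of a gradual aggressor-to-protector expression shift.
# Gene names and all numbers are hypothetical, not data from the study.

stages = ["untreated", "low dose", "mid dose", "high dose"]
expression = {
    # relative expression level at each treatment stage
    "inflammatory_gene": [1.00, 0.70, 0.40, 0.10],  # dialed down
    "regulatory_gene":   [0.10, 0.30, 0.60, 0.95],  # dialed up
}

# Print a small text "heat map": one row per gene, one column per stage.
print(" " * 19 + "  ".join(f"{s:>9s}" for s in stages))
for gene, levels in expression.items():
    print(f"{gene:18s} " + "  ".join(f"{x:9.2f}" for x in levels))
```

The point of the real heat maps is the same as this toy table: the change is incremental across doses, not an abrupt switch, which is consistent with the escalating-dose treatment described above.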

By specifically targeting the cells at fault, this immunotherapeutic approach avoids the need for the immune suppressive drugs associated with unacceptable side effects such as infections, development of tumours and disruption of natural regulatory mechanisms.

Professor David Wraith, who led the research, said: “Insight into the molecular basis of antigen-specific immunotherapy opens up exciting new opportunities to enhance the selectivity of the approach while providing valuable markers with which to measure effective treatment. These findings have important implications for the many patients suffering from autoimmune conditions that are currently difficult to treat.”

This treatment approach, which could improve the lives of millions of people worldwide, is currently undergoing clinical development through biotechnology company Apitope, a spin-out from the University of Bristol.

Filed under MS autoimmune diseases immune system immune cells gene expression neuroscience science
