Neuroscience

Articles and news from the latest research reports.

Shedding a light on pain: A technique developed by Stanford bioengineers could lead to new treatments
The mice in Scott Delp’s lab, unlike their human counterparts, can get pain relief from the glow of a yellow light.
Right now these mice are helping scientists to study pain – how and why it occurs and why some people feel it so intensely without any obvious injury. But Delp, a professor of bioengineering and mechanical engineering, hopes one day the work he does with these mice can also help people who are in chronic, debilitating pain.
"This is an entirely new approach to study a huge public health issue," Delp said. "It’s a completely new tool that is now available to neuroscientists everywhere." He is the senior author of a research paper published Feb. 16 in Nature Biotechnology.
A switch for pain
The mice are modified with gene therapy to have pain-sensing nerves that can be controlled by light. One color of light makes the mice more sensitive to pain. Another reduces pain. The scientists shone a light on the paws of mice through the Plexiglas bottom of the cage.
Graduate students Shrivats Iyer and Kate Montgomery, who led the study, say it opens the door to future experiments to understand the nature of pain and also touch and other sensations that are part of our daily lives but little understood.
"The fact that we can give a mouse an injection and two weeks later shine a light on its paw to change the way it senses pain is very powerful," Iyer said.
For example, increasing or decreasing the sensation of pain in these mice could help scientists understand why pain seems to continue in people after an injury has healed. Does persistent pain change those nerves in some way? If so, how can they be changed back to a state where, in the absence of an injury, they stop sending searing messages of pain to the brain?
Leaders at the National Institutes of Health agree that the work could have important implications for treating pain. “This powerful approach shows great potential for helping the millions who suffer pain from nerve damage,” said Linda Porter, the pain policy adviser at the National Institute of Neurological Disorders and Stroke and a leader of the NIH’s Pain Consortium.
"Now, with a flick of a switch, scientists may be able to rapidly test new pain-relieving medications and, one day, doctors may be able to use light to relieve pain," she said.
Accidental discovery
The researchers took advantage of a technique called optogenetics, which involves light-sensitive proteins called opsins that are inserted into the nerves. Optogenetics was developed by Delp’s colleague Karl Deisseroth, a co-author of the journal article. He has used the technique as a way of activating precise regions of the brain to better understand how the brain functions. Deisseroth is a professor of bioengineering, psychiatry and behavioral sciences.
Delp, who has an interest in muscles and movement, saw the potential for using optogenetics not just for studying the brain – interesting though those studies may be – but also for studying the many nerves outside the brain. These are the nerves that control movement, pain, touch and other sensations throughout our body, and that are involved in diseases such as amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s disease.
A few years ago Stanford Bio-X, which encourages interdisciplinary projects such as this one, supported Delp and Deisseroth in their efforts to use optogenetics to control the nerves that excite muscles. In the process of doing that work, Delp said, his student at the time, Michael Llewellyn, occasionally found that he had placed the opsins into nerves that signal pain rather than those that control muscle.
That accident sparked a new line of research. Delp said, “We thought, ‘Wow, we’re getting pain neurons; that could be really important.’” He suggested that Montgomery and Iyer focus on those pain nerves that had been a byproduct of the muscle work.
A faster approach
A key component of the work was a new approach to quickly incorporate opsins into the nerves of mice. The researchers started with a virus that had been engineered to contain the DNA that produces the opsin. Then they injected those modified viruses directly into mouse nerves. Weeks later, only the nerves that control pain had incorporated the opsin proteins and would fire, or be less likely to fire, in response to different colors of light.
The speed of the viral approach makes it very flexible, both for this pain work and for future studies. Researchers are developing newer forms of opsins with different properties, such as responding to different colors of light. “Because we used a viral approach we could, in the future, quickly turn around and use newer opsins,” said Montgomery, who is a Stanford Bio-X fellow.
This entire project, which spans bioengineering, neuroscience and psychiatry, is one Delp says could never have happened without the environment at Stanford that supports collaboration across departments. The pain portion of the research came out of support from NeuroVentures, which was a project incubated within Bio-X to support the intersection of neuroscience and engineering or other disciplines. That project was so successful it has spun off into the Stanford Neurosciences Institute, of which Delp is now a deputy director.
Delp said that many challenges must be met before the results of these experiments – either new drugs based on what the researchers learn, or optogenetics applied directly – could become available to patients, but that bringing them to people remains his goal.
"Developing a new therapy from the ground up would be incredibly rewarding," he said. "Most people don’t get to do that in their careers."
Delp and Deisseroth have started a company called Circuit Therapeutics to develop therapies based on optogenetics.

Filed under optogenetics opsins pain neuroscience science

Study identifies new drug target for chronic, touch-evoked pain

Researchers at the School of Medicine have identified a subset of nerve cells that mediates a form of chronic, touch-evoked pain called tactile allodynia, a condition that is resistant to conventional pain medication.

The discovery could point researchers to more fruitful efforts to develop effective drugs for the condition.

Touch-evoked pain occurs as part of a larger neuropathic pain condition arising from damage or disruption of nerve-cell circuits or signals caused by disorders such as alcoholism, diabetes, shingles and AIDS, or procedures such as spine surgery and chemotherapy. For patients with tactile allodynia, the slightest touch — a gentle caress or the brush of shirt against skin — can cause excruciating pain because changes in nerve-cell signals or networks trick the brain into mistaking touch for pain.

The study, published online Feb. 27 in Neuron, found that these “touch” neurons are different from the usual “pain” neurons that respond to stimuli such as cuts or bruises.

Unlike pain caused by such wounds, neuropathic pain is difficult to manage because little can be done to repair nerve damage. Managing it may require strong painkillers or combinations of treatments.

Common painkillers such as morphine have little effect on touch-evoked pain, possibly because they don’t target the touch neurons, the authors say. Morphine binds to specific protein-binding sites on pain neurons called mu opioid receptors, or MORs, and cuts off their signals so that the brain can no longer sense pain.

However, the touch neurons do not carry MORs, which is why morphine cannot bind to them and block the pain. Instead, they carry delta opioid receptors, or DORs, whose role in pain control has been unclear until recently.

"That’s been the problem so far; any type of severe pain you have, you go into the clinic and very likely you will be treated with morphine-like opioids," said Gregory Scherrer, PharmD, PhD, the senior author of the study and an assistant professor of anesthesia. "You can give some of these patients as much morphine as you want; it won’t work if the mu opioid receptor is not present on the neurons that underlie that type of pain."

There are currently no Food and Drug Administration-approved pain-control drugs that target DORs. Previous attempts at developing DOR-targeting drugs haven’t succeeded because researchers didn’t know what type of pain such drugs would be useful for, Scherrer said.

For instance, two DOR-binding drugs that Adolor Corp., a biotechnology firm, developed for knee pain probably failed because there was no compelling evidence that DOR was present or involved. AstraZeneca, another pharmaceutical firm, also had a DOR program but recently stopped its research efforts, Scherrer added.

"Now that we have provided a rationale and mechanism supporting the utility of DOR agonists for cutaneous pain and tactile allodynia, these companies will be able to design trials more carefully to evaluate specifically the drugs’ efficacy against touch-evoked pain," he said.

Earlier studies by Scherrer and others hinted at the presence of special nerve fibers on the skin that might contribute to touch-evoked pain.

In the current study, Scherrer and colleagues used fluorescent mouse models to isolate these neurons and identify how they control touch-evoked pain. They found that DOR can play an inhibitory role in these neurons: When proteins bind to DOR, they cut off communication to the spinal cord, through which sensory signals travel to the brain.
DOR-carrying “touch” neurons pervade the skin and could easily be targeted by drugs in the form of skin patches or topical creams, Scherrer suggested.

"By contrast, most MOR-carrying neurons penetrate internal organs," he said. "That’s why morphine is effective in treating post-surgery pain, for example."

Scherrer and fellow researchers tested two different DOR-binding compounds individually on mice and found that both reduced the mice’s sensitivity to touch-evoked pain.

Preliminary studies also indicate that DOR-targeting drugs might not cause dramatic side effects like morphine does, especially if they can be used topically, Scherrer said.

"Morphine and other MOR-targeting drugs have myriad deleterious side effects — including addiction, respiratory depression, constipation, nausea and vomiting — that further limit their utility for chronic pain management," he said.

The next step is to determine whether DOR could be a target for other types of pain, such as arthritis pain, pain from bone cancer and muscle pain, Scherrer added.

The findings also suggest that the body’s opioid system — normally associated with pain and addiction — may also respond to other stimuli such as touch.

"We may have underestimated the importance of the opioid system and what can be achieved with drugs targeting other subtypes of opioid receptors," Scherrer said.

(Source: med.stanford.edu)

Filed under tactile allodynia pain neuropathic pain opioid receptors morphine neuroscience science

Study debunks alcohol consumption assertions
Alcohol consumption is not a direct cause of cognitive impairment in older men later in life, a study conducted by the University of Western Australia has found.
The study, published in the Journal of Neurology, used Mendelian randomisation to analyse the genetic data from 3,542 men between the ages of 65 and 83 years. 
The scientists measured the participants’ cognitive function three to eight years after recording their alcohol consumption. 
Lead author, Western Australian Centre for Health and Ageing Director and UWA Professor Osvaldo Almeida says the team investigated the triangular association between alcohol consumption, cognitive impairment and a genetic polymorphism that modulates the efficiency of a critical enzyme of alcohol metabolism. 
“We found a genetic variation that increases abstinence and decreases the total amount of alcohol consumed,” Prof Almeida says.
“If alcohol were a cause of cognitive impairment, one would expect that this genetic variation would be associated with lower risk of cognitive impairment in later life [because people with this genetic variation drink less or not at all]. 
“That was not the case. Hence, we concluded that the association between alcohol use and cognitive impairment is not due to a direct effect of alcohol.”
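The Mendelian-randomisation logic in the quotes above can be made concrete with a toy simulation. Everything below is hypothetical and illustrative, not the study's data or model: if alcohol directly caused cognitive decline, carriers of a variant that lowers drinking would show measurably less decline; if alcohol has no direct effect, the variant would be unrelated to the outcome.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical variant that reduces drinking (stand-in for the polymorphism).
variant = rng.binomial(1, 0.3, n)                             # 1 = carrier
alcohol = np.clip(rng.normal(10 - 6 * variant, 3), 0, None)   # drinks/week

# Scenario A: alcohol directly drives cognitive decline.
decline_causal = 0.5 * alcohol + rng.normal(0, 2, n)
# Scenario B: no direct effect of alcohol on decline.
decline_null = rng.normal(5, 2, n)

def carrier_gap(decline):
    """Mean decline score in variant carriers minus non-carriers."""
    return decline[variant == 1].mean() - decline[variant == 0].mean()

print(f"gap if causal: {carrier_gap(decline_causal):+.2f}")  # clearly negative
print(f"gap if null:   {carrier_gap(decline_null):+.2f}")    # near zero
```

In the study's terms, the observed pattern resembled scenario B: carriers drank less, yet showed no corresponding reduction in cognitive impairment.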
The study also presented results that are consistent with the possibility, but do not necessarily prove, that regular moderate drinking decreases the risk of cognitive impairment in older men.
Prof Almeida says the reasons for these results were unclear.
“But evidence from a randomised trial looking at the effect of the Mediterranean diet [which includes nuts, olive oil, vegetables and wine] on health outcomes is supportive of this hypothesis,” he says. 
“One may argue that people who drink in moderation have a lifestyle where, in general, things are done in moderation. 
“This approach to life may decrease health hazards in general.”
Prof Almeida says that although the results didn’t show alcohol affecting cognitive impairment, other studies have found excessive alcohol use to be associated with worse physical health, widowhood and poor social support. 
“[These studies] led to the assumption that alcohol must directly damage the brain and cause cognitive impairment,” he says. 
“This study shows that such an assumption is wrong. 
“It also suggests that alcohol may have a small protective effect that we need to understand better in order to develop new interventions that might contribute to prevent dementia without all the bad outcomes associated with alcohol.”

Filed under alcohol consumption alcohol cognitive impairment genetic polymorphism neuroscience science

Blood Test Identifies Those At-Risk for Cognitive Decline, Alzheimer’s Within 3 Years
Researchers have discovered and validated a blood test that can predict with greater than 90 percent accuracy if a healthy person will develop mild cognitive impairment or Alzheimer’s disease within three years.
Described in the April issue of Nature Medicine, the study heralds the potential for developing treatment strategies for Alzheimer’s at an earlier stage, when therapy would be more effective at slowing or preventing onset of symptoms. It is the first known published report of blood-based biomarkers for preclinical Alzheimer’s.
The test identifies 10 lipids, or fats, in the blood that predict disease onset. It could be ready for use in clinical studies in as few as two years and, researchers say, other diagnostic uses are possible.
“Our novel blood test offers the potential to identify people at risk for progressive cognitive decline and can change how patients, their families and treating physicians plan for and manage the disorder,” says the study’s corresponding author Howard J. Federoff, MD, PhD, professor of neurology and executive vice president for health sciences at Georgetown University Medical Center.
There is no cure or effective treatment for Alzheimer’s. Worldwide, about 35.6 million individuals have the disease and, according to the World Health Organization, the number will double every 20 years to 115.4 million people with Alzheimer’s by 2050.
Federoff explains there have been many efforts to develop drugs to slow or reverse the progression of Alzheimer’s disease, but all of them have failed. He says one reason may be the drugs were evaluated too late in the disease process.
“The preclinical state of the disease offers a window of opportunity for timely disease-modifying intervention,” Federoff says. “Biomarkers such as ours that define this asymptomatic period are critical for successful development and application of these therapeutics.”
The study included 525 healthy participants aged 70 and older who gave blood samples upon enrolling and at various points in the study. Over the course of the five-year study, 74 participants met the criteria for either mild Alzheimer’s disease (AD) or a condition known as amnestic mild cognitive impairment (aMCI), in which memory loss is prominent. Of these, 46 were diagnosed upon enrollment and 28 developed aMCI or mild AD during the study (the latter group called converters).
In the study’s third year, the researchers selected 53 participants who developed aMCI/AD (including 18 converters) and 53 cognitively normal matched controls for the lipid biomarker discovery phase of the study. The lipids were not targeted before the start of the study, but rather, were an outcome of the study.
A panel of 10 lipids was discovered, which researchers say appears to reveal the breakdown of neural cell membranes in participants who develop symptoms of cognitive impairment or AD. The panel was subsequently validated using the remaining 21 aMCI/AD participants (including 10 converters), and 20 controls. Blinded data were analyzed to determine if the subjects could be characterized into the correct diagnostic categories based solely on the 10 lipids identified in the discovery phase.
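That discovery-then-blinded-validation workflow can be sketched with code. The synthetic 10-lipid profiles, group sizes reused from the text, and the simple nearest-centroid rule below are all illustrative assumptions; the paper's actual lipid values and statistical model are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
N_LIPIDS = 10

def sample(n, shift):
    """Synthetic 10-lipid profiles; 'converters' have slightly shifted levels."""
    return rng.normal(shift, 1.0, size=(n, N_LIPIDS))

# Discovery phase: learn a class centroid from each labelled group.
disc_normal, disc_converter = sample(53, 0.0), sample(53, 1.0)
centroids = {
    "normal": disc_normal.mean(axis=0),
    "converter": disc_converter.mean(axis=0),
}

def classify(profile):
    """Assign the class whose centroid is nearest (Euclidean distance)."""
    return min(centroids, key=lambda c: np.linalg.norm(profile - centroids[c]))

# Validation phase: classify held-out participants blind to their labels.
val_profiles = np.vstack([sample(20, 0.0), sample(21, 1.0)])
val_labels = ["normal"] * 20 + ["converter"] * 21
preds = [classify(p) for p in val_profiles]
accuracy = np.mean([p == t for p, t in zip(preds, val_labels)])
print(f"blinded validation accuracy: {accuracy:.0%}")
```

The key design point mirrored here is that the held-out group plays no role in defining the panel, so the reported accuracy reflects prediction rather than fitting.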
“The lipid panel was able to distinguish with 90 percent accuracy these two distinct groups: cognitively normal participants who would progress to MCI or AD within two to three years, and those who would remain normal in the near future,” Federoff says.
The researchers examined if the presence of the APOE4 gene, a known risk factor for developing AD, would contribute to accurate classification of the groups, but found it was not a significant predictive factor in this study.
“We consider our results a major step toward the commercialization of a preclinical disease biomarker test that could be useful for large-scale screening to identify at-risk individuals,” Federoff says. “We’re designing a clinical trial where we’ll use this panel to identify people at high risk for Alzheimer’s to test a therapeutic agent that might delay or prevent the emergence of the disease.”

Filed under alzheimer's disease neurodegeneration memory cognitive decline blood test neuroscience medicine science

Protein reelin rescues cognitive impairment in animal models of Alzheimer’s disease
The scientists Eduardo Soriano and Lluís Pujadas, from the University of Barcelona (UB), and the “Centro de Investigación Biomédica en Red sobre Enfermedades Neurodegenerativas” (CIBERNED) have led research into the role of reelin in animal models of Alzheimer’s disease.
Published today in the journal Nature Communications, the study demonstrates how an increase in the levels of reelin—a protein that is essential for cerebral cortex plasticity—can restore cognitive function in mouse models of Alzheimer’s disease, delaying amyloid-beta (Aβ) fibril formation in vitro and reducing the accumulation of amyloid deposits in the brains of animals affected by the disease.
The study, which was started four years ago, has involved the collaboration of members of the Peptides and Proteins lab at the Institute for Research in Biomedicine (IRB), namely Bernat Serra-Vidal, PhD student, Ernest Giralt, group leader, and Natàlia Carulla, associate researcher whose investigation focuses on the aggregation of Aβ. Alzheimer’s disease, which affects approximately 500,000 people in Spain, is characterised by the loss of neural connections and by neuronal death, both associated mainly with the formation of senile plaques (extracellular deposits of Aβ) and the presence of neurofibrillary tangles (intracellular deposits of tau protein).
In the IRB lab, researchers have performed experiments in vitro to determine whether there is an interaction between Aβ aggregation and reelin. These assays have revealed that reelin interacts with the Aβ peptide, delaying the formation of Aβ fibrils until it is trapped within them. “When reelins becomes trapped in Aβ fibrils, it loses its capacity to strengthen synaptic plasticity. This explains why an increase in reelin expression in the brain may be beneficial,” explain the authors of the study.
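The in vitro observation—fibril formation delayed until the inhibitor is consumed—can be caricatured with a toy kinetic simulation. This is an illustrative sketch, not the study’s kinetics: the autocatalytic growth law, the trapping term, and every rate constant are invented assumptions.

```python
# Toy aggregation kinetics (illustrative assumptions throughout):
# fibril mass grows autocatalytically; a reelin-like inhibitor slows
# growth until it is trapped in the fibrils and depleted.

def simulate(inhibitor0=0.0, k_grow=1.0, k_trap=0.5, dt=0.01, steps=8000):
    """Return fibril mass over time (arbitrary units)."""
    fibril, inhibitor = 1e-3, inhibitor0
    trace = []
    for _ in range(steps):
        # free inhibitor slows autocatalytic fibril growth...
        rate = k_grow * fibril * (1.0 - fibril) / (1.0 + inhibitor)
        fibril = min(1.0, fibril + rate * dt)
        # ...and is gradually trapped in the growing fibrils
        inhibitor = max(0.0, inhibitor - k_trap * inhibitor * rate * dt)
        trace.append(fibril)
    return trace

def half_time(trace, dt=0.01):
    """Time at which fibril mass first reaches half-maximum."""
    return next(i * dt for i, m in enumerate(trace) if m >= 0.5)

# A reelin-like inhibitor delays, but does not abolish, aggregation:
assert half_time(simulate(inhibitor0=5.0)) > half_time(simulate(inhibitor0=0.0))
assert simulate(inhibitor0=5.0)[-1] > 0.9
```

In this caricature, as in the reported assays, the inhibitor buys time rather than blocking aggregation outright, which is consistent with reelin eventually losing its activity once trapped.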
The hypotheses from the work in vitro have been tested in vivo using experimental animals. This study is the first to demonstrate a neuroprotective effect of reelin in neurodegenerative disease and, in addition, offers a possible explanation for this protective role.

Filed under alzheimer's disease animal model cognitive impairment reelin beta amyloid neuroscience science

137 notes

Ever-So-Slight Delay Improves Decision-Making Accuracy
Columbia University Medical Center (CUMC) researchers have found that decision-making accuracy can be improved by postponing the onset of a decision by a mere fraction of a second. The results could further our understanding of neuropsychiatric conditions characterized by abnormalities in cognitive function and lead to new training strategies to improve decision-making in high-stakes environments. The study was published in the March 5 online issue of the journal PLoS One.
“Decision making isn’t always easy, and sometimes we make errors on seemingly trivial tasks, especially if multiple sources of information compete for our attention,” said first author Tobias Teichert, PhD, a postdoctoral research scientist in neuroscience at CUMC at the time of the study and now an assistant professor of psychiatry at the University of Pittsburgh. “We have identified a novel mechanism that is surprisingly effective at improving response accuracy.”
The mechanism requires that decision-makers do nothing—just briefly. “Postponing the onset of the decision process by as little as 50 to 100 milliseconds enables the brain to focus attention on the most relevant information and block out irrelevant distractors,” said last author Jack Grinband, PhD, associate research scientist in the Taub Institute and assistant professor of clinical radiology (physics). “This way, rather than working longer or harder at making the decision, the brain simply postpones the decision onset to a more beneficial point in time.”
In making decisions, the brain integrates many small pieces of potentially contradictory sensory information. “Imagine that you’re coming up to a traffic light—the target—and need to decide whether the light is red or green,” said Dr. Teichert. “There is typically little ambiguity, and you make the correct decision quickly, in a matter of tens of milliseconds.”
The decision process itself, however, does not distinguish between relevant and irrelevant information. Hence, a task is made more difficult if irrelevant information—a distractor—interferes with the processing of the target. Distractors are present all the time; in this case, it might be in the form of traffic lights regulating traffic in other lanes. Though the brain is able to enhance relevant information and filter out distractions, these mechanisms take time.  If the decision process starts while the brain is still processing irrelevant information, errors can occur.
Studies have shown that response accuracy can be improved by prolonging the decision process, to allow the brain time to collect more information. Because accuracy is increased at the cost of longer reaction times, this process is referred to as the “speed-accuracy trade-off.” The researchers thought that a more effective way to reduce errors might be to delay the decision process so that it starts out with better information.
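The distinction between prolonging and delaying the decision can be made concrete with a toy drift-diffusion simulation. This is a hedged sketch, not the model used in the study: the drift rates, noise level, decision bound, and the roughly 120-millisecond attention-shift time are all illustrative assumptions.

```python
import random

def trial(onset_ms, rng, sim_ms=600, dt=1.0, drift=0.02,
          distractor=-0.05, filter_ms=120, noise=0.3, bound=5.0):
    """One simulated decision: evidence accumulates toward +/- bound.

    Before filter_ms, a distractor pushes evidence the wrong way;
    accumulation only begins at onset_ms. Returns True if the
    correct (+) bound is reached first.
    """
    x, t = 0.0, float(onset_ms)
    while t < sim_ms:
        d = distractor if t < filter_ms else 0.0  # distractor filtered out
        x += (drift + d) * dt + rng.gauss(0.0, noise)
        if x >= bound:
            return True
        if x <= -bound:
            return False
        t += dt
    return x > 0  # forced choice at the deadline

def accuracy(onset_ms, n=2000, seed=1):
    rng = random.Random(seed)
    return sum(trial(onset_ms, rng) for _ in range(n)) / n

# Postponing decision onset past the attention shift improves accuracy
# in this toy model, without lengthening the integration itself:
assert accuracy(onset_ms=150) > accuracy(onset_ms=0)
```

Here a later onset simply skips the early, distractor-contaminated evidence, mirroring the paper’s point that delaying the start of the decision can beat merely working on it longer.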
The research team conducted two experiments to test this hypothesis. In the first, subjects were shown what looked like a swarm of randomly moving dots (the target stimulus) on a computer monitor and were asked to judge whether the overall motion was to the left or right. A second and brighter set of moving dots (the distractor) appeared simultaneously in the same location, obscuring the motion of the target.  When the distractor dots moved in the same direction as the target dots, subjects performed with near-perfect accuracy, but when the distractor dots moved in the opposite direction, the error rate increased. The subjects were asked to perform the task either as quickly or as accurately as possible; they were free to respond at any time after the onset of the stimulus.
The second experiment was similar to the first, except that the subjects also heard regular clicks, indicating when they had to respond. The time allowed for viewing the dots varied between 17 and 500 milliseconds. This condition simulates real-life situations, such as driving, where the time to respond is beyond the driver’s control. “Manipulating how long the subject viewed the stimulus before responding allowed us to determine how quickly the brain is able to block out the distractors and focus on the target dots,” said Dr. Grinband.
“In this situation, it takes about 120 milliseconds to shift attention from one stimulus (the bright distractors) to another (the darker targets),” said Dr. Grinband. “To our knowledge, that’s something that no one has ever measured before.”
The experiments also revealed that it’s more beneficial to delay rather than prolong the decision process. The delay allows attention to be focused on the target stimulus and helps prevent irrelevant information from interfering with the decision process. “Basically, by delaying decision onset—simply by doing nothing—you are more likely to make a correct decision,” said Dr. Teichert.
Finally, the results showed that decision onset is, to some extent, under cognitive control. “The subjects automatically used this mechanism to improve response accuracy,” said Dr. Teichert. “However, we don’t think that they were aware that they were doing so. The process seems to go on behind the scenes. We hope to devise training strategies to bring the mechanism under conscious control.”
“This might be the first scientific study to justify procrastination,” Dr. Teichert said. “On a more serious note, our study provides important insights into fundamental brain processes and yields clues as to what might be going wrong in diseases such as ADHD and schizophrenia. It also could lead to new training strategies to improve decision making in complex high-stakes environments, such as air traffic control towers and military combat.”

Filed under decision making attention cognition psychology neuroscience science

366 notes

Inherited Alzheimer’s damage greater decades before symptoms appear

The progression of Alzheimer’s may slow once symptoms appear and do significant damage, according to a study investigating an inherited form of the disease.

In a paper published in the prestigious journal Science Translational Medicine, Professor Colin Masters from the Florey Institute of Neuroscience and Mental Health and the University of Melbourne – and colleagues in the UK and US – found that rapid neuronal damage begins 10 to 20 years before symptoms appear.

“As part of this research we have observed other changes in the brain that occur when symptoms begin to appear. There is actually a slowing of the neurodegeneration,” said Professor Masters.

Autosomal-dominant Alzheimer’s affects families with a genetic mutation predisposing them to the crippling disease. These families provide crucial insight into the development of Alzheimer’s because they can be identified years before symptoms develop. The information gleaned from this group will also influence treatment offered to those living with the more common age-related version. Only about one per cent of those with Alzheimer’s have the genetic type of the disease.

The next part of the study involves a clinical trial. Using a range of imaging techniques (MRI and PET) and analysis of blood and cerebrospinal fluid, individuals from the US, UK and Australia will be observed as they trial new drugs to test their safety, side effects and changes within the brain.

“As part of an international study, family members are invited to be part of a trial in which two experimental drugs are offered many years before symptoms appear,” Professor Masters says. “It’s going to be very interesting to see how clinical intervention affects this group of patients in the decades before symptoms appear.”

The Florey is looking to recruit more participants for the Dominantly Inherited Alzheimer Network (DIAN) study. Those who either know they have a genetic mutation that causes autosomal-dominant Alzheimer’s, or who don’t know their genetic status but have a parent or sibling with the mutation, are invited to email: dian@florey.edu.au

Filed under alzheimer's disease neurodegeneration neuroimaging neuroscience science

114 notes

Touching the brain

By examining the sense of touch in stroke patients, a University of Delaware cognitive psychologist has found evidence that the brains of these individuals may be highly plastic even years after being damaged.

The research is published in the March 6 edition of the journal Current Biology, in an article written by Jared Medina, assistant professor of psychology at UD, and Brenda Rapp of Johns Hopkins University’s Department of Cognitive Science. The findings, which are focused on patients who lost the sense of touch in their hands after a stroke, also have potential implications for other impairments caused by brain damage, Medina said.

“Our lab is interested in how the brain represents the body, not just in the sense of touch,” he said. “That involves a lot of different areas of the brain.”

For decades, scientists have been mapping the brain to determine which areas control certain functions, from movement to emotion to memory. In terms of representing the sense of touch, researchers know which specific parts of the brain are associated with representing specific parts of the body, Medina said.

Those scientists also know that, following the brain damage a stroke causes, patients often regain some of what they initially lost due to that damage.

“Even if every neuron has been killed in the part of the brain that represents touch on the hand, that doesn’t mean that you’re never going to feel anything on your hand again,” Medina said. “We’ve known that isn’t the case because the map can reorganize. The brain can change due to injury.”

But what the new research by Medina and Rapp found is that the brains of those stroke patients may change much more easily than the undamaged brains of healthy people — what they call “hyper-lability.”

The researchers worked with people who had had strokes in the past that affected their ability to localize touch. Each research participant, without being able to see his hand, was touched on the wrist and then on the fingertips. When asked to pinpoint the second touch, the stroke patients reported sensing the touch farther down their finger, toward the wrist, rather than in its actual location. 

Medina says that likely occurs because the neural map in the brain is shifting based on the earlier wrist touch — a phenomenon termed “experience-dependent plasticity.”
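The reported mislocalization can be caricatured with a single weighting parameter. This is a purely hypothetical toy model, not the authors’ analysis; the positions and the “lability” weights below are invented for illustration.

```python
def perceived(actual, prior, lability):
    """Perceived touch position, pulled toward a recent prior touch.

    lability is a hypothetical 0..1 weight: 0 means a stable map,
    larger values mean more experience-dependent drift.
    """
    return actual + lability * (prior - actual)

wrist, fingertip = 0.0, 10.0   # positions along the hand (arbitrary units)
stable = perceived(fingertip, wrist, lability=0.05)   # healthy-like map
labile = perceived(fingertip, wrist, lability=0.4)    # hyper-labile map

# The labile map mislocalizes the fingertip touch toward the wrist:
assert labile < stable <= fingertip
```

The toy captures only the direction of the effect: the more labile the map, the more a fingertip touch is reported closer to the previously touched wrist.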

“Now what’s interesting about this is that when you and I [who haven’t had a stroke] are touched on the wrist, then the fingertips, we don’t have these changes that the brain-damaged individuals do,” he said. “This provides the counterintuitive finding that the maps in brain-damaged individuals are actually much more plastic than in you and me.”

This hyper-lability has both positive and negative implications, he said.

“On the positive side, this plasticity may potentially be harnessed in rehabilitation to improve function” after a stroke or various other types of brain injury, Medina said. But, he added, the brain may also be so plastic in those cases that changes aren’t stable, creating additional problems.

That’s what he expects additional research to address.

“Now that we’ve found that these maps are more plastic than we thought, can certain strategies help the map become more stable and more accurate again? That’s one of the next questions, and we can only answer it by continuing to learn more about how the mind works.”

(Source: udel.edu)

Filed under brain plasticity stroke brain damage somatosensory cortex neuroscience science

132 notes

Anti-psychotic Medications Offer New Hope in the Battle Against Glioblastoma

ucsdhealthsciences:

Researchers at the University of California, San Diego School of Medicine have discovered that FDA-approved anti-psychotic drugs possess tumor-killing activity against the most aggressive form of primary brain cancer, glioblastoma. The finding was published in this week’s online edition of Oncotarget.

The team of scientists, led by principal investigator Clark C. Chen, MD, PhD, vice-chairman of the Division of Neurosurgery at the UC San Diego School of Medicine, used a technology platform based on short hairpin RNA (shRNA) to test how each gene in the human genome contributes to glioblastoma growth. The discovery of RNA interference, the mechanism underlying shRNA technology, won the Nobel Prize in Physiology or Medicine in 2006.

“ShRNAs are invaluable tools in the study of what genes do. They function like molecular erasers,” said Chen. “We can design these ‘erasers’ against every gene in the human genome. These shRNAs can then be packaged into viruses and introduced into cancer cells. If a gene is required for glioblastoma growth and the shRNA erases the function of that gene, then the cancer cell will either stop growing or die.”
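A minimal sketch of how such a dropout screen can be scored, under simplifying assumptions of my own (this is not the published pipeline): shRNAs targeting genes required for growth become depleted from the cell population over time, so a crude gene-level score is the median log2 fold-change of read counts across that gene’s shRNAs.

```python
import math

def log2_fc(count_end, count_start, pseudo=1.0):
    """Log2 fold-change with a pseudocount to avoid division by zero."""
    return math.log2((count_end + pseudo) / (count_start + pseudo))

def gene_score(shrna_counts):
    """Median log2 fold-change over a gene's shRNAs.

    shrna_counts: list of (reads_at_start, reads_at_end) pairs,
    one per shRNA targeting the gene.
    """
    fcs = sorted(log2_fc(end, start) for start, end in shrna_counts)
    mid = len(fcs) // 2
    return fcs[mid] if len(fcs) % 2 else (fcs[mid - 1] + fcs[mid]) / 2

# Hypothetical read counts: shRNAs against a growth-required gene drop
# out of the population; a neutral gene's shRNAs stay roughly constant.
required = [(1000, 120), (800, 90), (950, 200), (700, 60)]
neutral = [(1000, 980), (900, 1100), (850, 800), (1200, 1250)]

assert gene_score(required) < -2.0   # strong dropout: candidate dependency
assert abs(gene_score(neutral)) < 0.5
```

Taking the median across several shRNAs per gene is a common way to guard against off-target effects of any single hairpin; real screen analyses add replicate handling and statistical testing on top of this idea.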

Chen said that one surprising finding is that many genes required for glioblastoma growth are also required for dopamine receptor function. Dopamine is a small molecule that is released by nerve cells and binds to the dopamine receptor in surrounding nerve cells, enabling cell communication.

Abnormal dopamine regulation is associated with Parkinson’s disease, schizophrenia, and Attention Deficit Hyperactivity Disorder. Because of the importance of dopamine in these diseases, drugs have been developed to neutralize the effect of dopamine, called dopamine antagonists. 

Following clues unveiled by their shRNA study, Chen and his team tested the effects of dopamine antagonists against glioblastoma and found that these drugs exert significant anti-tumor effects both in cultured cells and in mouse models, effects that were synergistic in halting tumor growth when the drugs were combined with other anti-glioblastoma agents.

“The anti-glioblastoma effects of these drugs are completely unexpected and were only uncovered because we carried out an unbiased genetic screen,” said Chen.

“On the clinical front, the finding is important for two reasons,” said Bob Carter, MD, PhD, chairman of UC San Diego, School of Medicine, division of neurosurgery. “First, these drugs are already FDA-cleared for human use in the treatment of other diseases, so it is possible these drugs may be re-purposed for glioblastoma treatment, thereby bypassing years of pre-clinical testing. Second, these drugs have been shown to cross the blood-brain barrier, a barrier that prevents more than 90 percent of drugs from entry into the brain.”

Chen is now working with the UC San Diego Moores Cancer Center Neuro-Oncology team to translate his findings into a clinical trial.

134 notes

Researchers Use Computers to “See” Neurons to Better Understand Brain Function
A study conducted by local high school students and faculty from the Department of Computer and Information Science in the School of Science at Indiana University-Purdue University Indianapolis reveals new information about the motor circuits of the brain that may one day help those developing therapies to treat conditions such as stroke, schizophrenia, spinal cord injury or Alzheimer’s disease.
"MRI and CAT scans of the human brain can tell us many things about the structure of this most complicated of organs, formed of trillions of neurons and the synapses via which they communicate. But we are a long way away from having imaging techniques that can show single neurons in a complex brain like the human brain," said Gavriil Tsechpenakis, Ph.D., assistant professor of computer science in the School of Science at IUPUI.
"But using the tools of artificial intelligence, specifically computer vision and image processing, we are able to visualize and process actual neurons of model organisms. Our work in the brain of a model organism—the fruit fly—will help us and other researchers move forward to more complex organisms with the ultimate goal of reconstructing the human central nervous system to gain insight into what goes wrong at the cellular level when devastating disorders of the brain and spinal cord occur. This understanding may ultimately inform the treatment of these conditions," said Tsechpenakis.
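As a deliberately minimal illustration of the kind of image processing involved—my own sketch, not the study’s algorithm—a first step in visualizing neurons is often to threshold the bright, labeled pixels and group them into connected components:

```python
# Threshold-and-label sketch (assumed, simplified): separate bright
# neurite pixels from background and group them into 4-connected
# components, each a candidate neuronal structure.

def segment(image, threshold):
    """Label 4-connected components of pixels above threshold.

    image: 2-D list of intensities. Returns dict label -> pixel list.
    """
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    components, label = {}, 0
    for y in range(h):
        for x in range(w):
            if image[y][x] > threshold and not seen[y][x]:
                label += 1
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:  # flood fill over bright neighbors
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] > threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                components[label] = pixels
    return components

# A tiny synthetic "image" with two separate bright structures:
img = [
    [0, 9, 0, 0, 0],
    [0, 9, 0, 8, 8],
    [0, 9, 0, 0, 8],
]
parts = segment(img, threshold=5)
assert len(parts) == 2  # two distinct neurite fragments found
```

Real reconstruction pipelines work on 3-D microscopy stacks and add denoising, tracing, and tracking of structures across developmental stages, but they build on this same principle of separating labeled structures from background.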
In this study, the researchers—who included two Indianapolis high school students, Rachel Stephens and Tiange (Tony) Qu—processed images, reconstructed neuronal motor circuitry in the brain, and collected and analyzed data on minute structures across various developmental stages, an effort linking neuroscience and computer science.
"Both high school students who worked on this study performed neuroscience and computation efforts similar to that conducted elsewhere by graduate students. It was impressive to see what sophisticated and key work they could—with mentoring—do," said Tsechpenakis.
Qu said the work was initially rather scary and intimidating but that he rapidly grew to appreciate the opportunity to work in the School of Science lab. “Unlike high school, we were not told how to get from point A to point B. Dr. Tsechpenakis explained what point A and B were and taught us how to figure out how to get from A to B.” 
Qu, a 17-year-old senior at Ben Davis High School, now sees neuroscience as a potential college major with biomedical research as an eventual career goal. He continues to work in the lab after school focusing on change over time in fruit fly larvae motor neurons.
Stephens, a senior at North Central High School, said she enjoyed the collaborative nature of the research, with computer scientists and life scientists working together on a problem.
"Dr. Tsechpenakis made it clear to us that different perspectives are necessary, and the ability to think about a problem is more valuable than the education and training you’ve had,” she said. “Before I joined the lab I hadn’t really thought about how computer science could help heal." The 17-year-old plans a pre-med major in college and a career as a physician.

Filed under motor neurons neuroimaging neurons neuroscience science
