Neuroscience

Articles and news from the latest research reports.

31 notes

Scientists find 6 new genetic risk factors for Parkinson’s

Using data from over 18,000 patients, scientists have identified more than two dozen genetic risk factors involved in Parkinson’s disease, including six that had not been previously reported. The study, published in Nature Genetics, was partially funded by the National Institutes of Health (NIH) and led by scientists working in NIH laboratories.

"Unraveling the genetic underpinnings of Parkinson’s is vital to understanding the multiple mechanisms involved in this complex disease, and hopefully, may one day lead to effective therapies," said Andrew Singleton, Ph.D., a scientist at the NIH’s National Institute on Aging (NIA) and senior author of the study.

Dr. Singleton and his colleagues collected and combined data from existing genome-wide association studies (GWAS), which allow scientists to find common variants, or subtle differences, in the genetic codes of large groups of individuals. The combined data included 13,708 Parkinson’s disease cases and 95,282 controls, all of European ancestry.

The investigators identified potential genetic risk variants, which increase the chances that a person will develop Parkinson’s disease. Their results suggested that the more of these variants a person carries, the greater the risk of developing the disorder, in some cases up to three times higher.
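
The idea that risk scales with how many variants a person carries is often summarized as a cumulative genetic risk score. The short Python sketch below is a toy illustration of that idea; the odds ratios and genotypes are invented for illustration and are not the study’s estimates.

    import numpy as np

    # Toy illustration of a cumulative genetic risk score.
    # The per-variant odds ratios below are made up; they are NOT the study's estimates.
    rng = np.random.default_rng(0)
    n_variants = 24
    log_or = np.log(rng.uniform(1.05, 1.3, size=n_variants))   # modest per-variant effects

    # Hypothetical genotypes: number of risk alleles (0, 1, or 2) at each variant
    # for five hypothetical individuals.
    genotypes = rng.integers(0, 3, size=(5, n_variants))

    # Weighted risk score: risk-allele counts times log odds ratios, summed per person.
    scores = genotypes @ log_or
    relative_odds = np.exp(scores - scores.min())   # odds relative to the lowest-scoring person

    for alleles, odds in sorted(zip(genotypes.sum(axis=1), relative_odds)):
        print(f"{alleles:2d} risk alleles -> odds about {odds:.1f}x the lowest-scoring person")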

"The study brought together a large international group of investigators from both public and private institutions who were interested in sharing data to accelerate the discovery of genetic risk factors for Parkinson’s disease," said Margaret Sutherland, Ph.D., a program director at the National Institute of Neurological Disorders and Stroke (NINDS), part of NIH. "The advantage of this collaborative approach is highlighted in the identification of pathways and gene networks that may significantly increase our understanding of Parkinson’s disease."

To obtain the data, the researchers collaborated with multiple public and private organizations, including the U.S. Department of Defense, the Michael J. Fox Foundation, 23andMe and many international investigators.

Affecting millions of people worldwide, Parkinson’s disease is a degenerative disorder that causes movement problems, including trembling of the hands, arms, or legs, stiffness of limbs and trunk, slowed movements and problems with posture. Over time, patients may have difficulty walking, talking, or completing other simple tasks. Although nine genes have been shown to cause rare forms of Parkinson’s disease, scientists continue to search for genetic risk factors to provide a complete genetic picture of the disorder.

The researchers confirmed the results in another sample of subjects, including 5,353 patients and 5,551 controls. By comparing the genetic regions to sequences on a state-of-the-art gene chip called NeuroX, the researchers confirmed that 24 variants represent genetic risk factors for Parkinson’s disease, including six variants that had not been previously identified. The NeuroX gene chip contains the codes of approximately 24,000 common genetic variants thought to be associated with a broad spectrum of neurodegenerative disorders.

"The replication phase of the study demonstrates the utility of the NeuroX chip for unlocking the secrets of neurodegenerative disorders," said Dr. Sutherland. "The power of these high tech, data-driven genomic methods allows scientists to find the needle in the haystack that may ultimately lead to new treatments."

Some of the newly identified genetic risk factors are thought to be involved in Gaucher’s disease, the regulation of inflammation, signaling by the nerve cell chemical messenger dopamine, and alpha-synuclein, a protein that has been shown to accumulate in the brains of some people with Parkinson’s disease. Further research is needed to determine the roles of the variants identified in this study.

Filed under parkinson's disease GWAS NeuroX genetics neuroscience science

85 notes

(Image caption: Techniques known as dimensionality reduction can help find patterns in the recorded activity of thousands of neurons. Rather than look at all responses at once, these methods find a smaller set of dimensions — in this case three — that capture as much structure in the data as possible. Each trace in these graphics represents the activity of the whole brain during a single presentation of a moving stimulus, and different versions of the analysis capture structure related either to the passage of time (left) or the direction of the motion (right). The raw data is the same in both cases, but the analyses find different patterns. Credit: Jeremy Freeman, Nikita Vladimirov, Takashi Kawashima, Yu Mu, Nicholas Sofroniew, Davis Bennett, Joshua Rosen, Chao-Tsung Yang, Loren Looger, Philipp Keller, Misha Ahrens)

New Tools Help Neuroscientists Analyze Big Data

In an age of “big data,” a single computer cannot always find the solution a user wants. Computational tasks must instead be distributed across a cluster of computers that analyze a massive data set together. It’s how Facebook and Google mine your web history to present you with targeted ads, and how Amazon and Netflix recommend your next favorite book or movie. But big data is about more than just marketing.

New technologies for monitoring brain activity are generating unprecedented quantities of information. That data may hold new insights into how the brain works – but only if researchers can interpret it. To help make sense of the data, neuroscientists can now harness the power of distributed computing with Thunder, a library of tools developed at the Howard Hughes Medical Institute’s Janelia Research Campus.

Thunder speeds the analysis of data sets that are so large and complex they would take days or weeks to analyze on a single workstation – if a single workstation could do it at all. Janelia group leaders Jeremy Freeman, Misha Ahrens, and other colleagues at Janelia and the University of California, Berkeley, report in the July 27, 2014, issue of the journal Nature Methods that they have used Thunder to quickly find patterns in high-resolution images collected from the brains of active zebrafish and mice with multiple imaging techniques.

Importantly, they have used Thunder to analyze imaging data from a new microscope that Ahrens and colleagues developed to monitor the activity of nearly every individual cell in the brain of a zebrafish as it behaves in response to visual stimuli. That technology is described in a companion paper published in the same issue of Nature Methods.

Thunder can run on a private cluster or on Amazon’s cloud computing services. Researchers can find everything they need to begin using the open source library of tools at http://freeman-lab.github.io/thunder

New microscopes are capturing images of the brain faster, with better spatial resolution, and across wider regions of the brain than ever before. Yet all that detail comes encoded in gigabytes or even terabytes of data. On a single workstation, simple calculations can take hours. “For a lot of these data sets, a single machine is just not going to cut it,” Freeman says.

It’s not just the sheer volume of data that exceeds the limits of a single computer, Freeman and Ahrens say, but also its complexity. “When you record information from the brain, you don’t know the best way to get the information that you need out of it. Every data set is different. You have ideas, but whether or not they generate insights is an open question until you actually apply them,” says Ahrens.

Neuroscientists rarely arrive at new insights about the brain the first time they consider their data, he explains. Instead, an initial analysis may hint at a more promising approach, and with a few adjustments and a new computational analysis, the data may begin to look more meaningful. “Being able to apply these analyses quickly — one after the other — is important. Speed gives a researcher more flexibility to explore and get new ideas.”

That’s why trying to analyze neuroscience data with slow computational tools can be so frustrating. “For some analyses, you can load the data, start it running, and then come back the next day,” Freeman says. “But if you need to tweak the analysis and run it again, then you have to wait another night.” For larger data sets, the lag time might be weeks or months.

Distributed computing was an obvious solution for accelerating analysis while exploring the full richness of a data set, but many alternatives are available. Freeman chose to build on a new platform called Spark. Developed at the University of California, Berkeley’s AMPLab, Spark is rapidly becoming a favored tool for large-scale computing across industry, Freeman says. Spark’s data-caching capabilities eliminate the bottleneck of loading a complete data set for all but the initial step, making it well suited for interactive, exploratory analysis and for complex algorithms that require repeated operations on the same data. And Spark’s elegant and versatile application programming interfaces (APIs) help simplify development. Thunder uses the Python API, which Freeman hopes will make it particularly easy for others to adopt, given Python’s increasing use in neuroscience and data science.
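
Caching is what makes repeated, interactive analyses practical: once the data set sits in cluster memory, each new analysis skips the expensive load step. The following sketch uses plain PySpark on a small synthetic data set to show the idea; it is a generic illustration, not Thunder’s own API.

    import numpy as np
    from pyspark import SparkContext

    sc = SparkContext("local[*]", "caching-sketch")

    # Pretend each record is (cell_id, time_series) for a small synthetic data set.
    rng = np.random.default_rng(0)
    records = [(i, rng.standard_normal(500)) for i in range(10_000)]

    data = sc.parallelize(records)
    data.cache()          # keep the data set in cluster memory after the first pass

    # First analysis: mean activity per cell (triggers the initial load).
    means = data.mapValues(lambda ts: float(ts.mean())).collect()

    # Second analysis: peak-to-peak range per cell. Because the data are cached,
    # this pass reads from memory instead of reloading everything.
    ranges = data.mapValues(lambda ts: float(ts.max() - ts.min())).collect()

    print(len(means), len(ranges))
    sc.stop()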

To make Spark suitable for analyzing a broad range of neuroscience data – information about connectivity and activity collected from different organisms and with different techniques – Freeman first developed standardized representations of data that were amenable to distributed computing. He then worked to translate typical neuroscience workflows into the computational language of Spark.

From there, he says, the biological questions that he and his colleagues were curious about drove development. “We started with our questions about the biology, then came up with the analyses and developed the tools,” he says.

The result is a modular set of tools that will expand as the Janelia team — and the neuroscience community — add new components. “The analyses we developed are building blocks,” says Ahrens. “The development of new analyses for interpreting large-scale recording is an active field and goes hand-in-hand with the development of resources for large-scale computing and imaging. The algorithms in our paper are a starting point.”

Using Thunder, Freeman, Ahrens, and their colleagues analyzed images of the brain in minutes, interacting with and revising analyses without the lengthy delays associated with previous methods. In images taken of a mouse brain with a two-photon microscope, for example, the team found cells in the brain whose activity varied with running speed.
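
A minimal sketch of that kind of analysis, correlating each cell’s activity trace with a behavioural trace such as running speed, is shown below on simulated NumPy data; it illustrates the general approach, not the team’s actual pipeline.

    import numpy as np

    rng = np.random.default_rng(1)
    n_cells, n_timepoints = 200, 1000

    # Simulated running speed and simulated cell activity; a handful of cells
    # are made to follow the speed trace so the analysis has something to find.
    speed = np.abs(np.cumsum(rng.standard_normal(n_timepoints)))
    activity = rng.standard_normal((n_cells, n_timepoints))
    activity[:10] += 0.5 * (speed - speed.mean()) / speed.std()

    # Pearson correlation of each cell's trace with running speed.
    speed_z = (speed - speed.mean()) / speed.std()
    act_z = (activity - activity.mean(axis=1, keepdims=True)) / activity.std(axis=1, keepdims=True)
    corr = act_z @ speed_z / n_timepoints

    speed_cells = np.where(np.abs(corr) > 0.3)[0]
    print("cells whose activity tracks running speed:", speed_cells)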

For analyzing much larger data sets, tools such as Thunder are not just helpful, they are essential, the scientists say. This is true for the information collected by the new microscope that Ahrens and colleagues developed for monitoring whole-brain activity in response to visual stimuli.

Last year, Ahrens and Janelia group leader Philipp Keller used high-speed light-sheet imaging to engineer a microscope that captures neuronal activity cell by cell across nearly the entire brain of a larval zebrafish. That microscope produced stunning images of neurons in the zebrafish brain firing while the fish was inactive. But Ahrens wanted to use the technology to study the brain’s activity in more complex situations. Now, the team has combined their original technology with a virtual-reality swim simulator that Ahrens previously developed to provide fish with visual feedback that simulates movement.

In a light sheet microscope, a sheet of laser light scans across a sample, illuminating a thin section at a time. To enable a fish in the microscope to see and respond to its virtual-reality environment, Ahrens’ team needed to protect its eyes. So they programmed the laser to quickly shut off when its light sheet approaches the eye and restart once the area is cleared. Then they introduced a second laser that scans the sample from a different angle to ensure that the region of the brain behind the eyes is imaged. Together, the two lasers image the brain with nearly complete coverage without interfering with the animal’s vision.

Combining these two technologies lets Ahrens monitor activity throughout the brain as a fish adjusts its behavior based on the sensory information it receives. The technique generates about a terabyte of data in an hour – presenting a data analysis challenge that helped motivate the development of Thunder. When Freeman and Ahrens applied their new tools to the data, patterns quickly emerged. As examples, they identified cells whose activity was associated with movement in particular directions and cells that fired specifically when the fish was at rest, and were able to characterize the dynamics of those cells’ activities. Example analyses like these, and example data sets, are available at the website http://research.janelia.org/zebrafish/.
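
The image caption at the top of this post describes dimensionality reduction applied to these whole-brain recordings, which collapses the activity of thousands of neurons into a few dimensions that can be plotted as trajectories. A rough illustration of that idea, using standard PCA on simulated data rather than the authors’ code, looks like this:

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)
    n_neurons, n_timepoints = 5000, 300

    # Simulated whole-brain recording: thousands of neurons whose activity is
    # actually driven by a few shared underlying signals, plus noise.
    latent = rng.standard_normal((3, n_timepoints))            # hidden low-dimensional structure
    mixing = rng.standard_normal((n_neurons, 3))
    recording = mixing @ latent + 0.5 * rng.standard_normal((n_neurons, n_timepoints))

    # Reduce the population activity to three dimensions: each time point becomes
    # a point on a three-dimensional trajectory, like the traces in the figure.
    pca = PCA(n_components=3)
    trajectory = pca.fit_transform(recording.T)                # shape: (n_timepoints, 3)

    print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
    print("trajectory shape:", trajectory.shape)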

Ahrens now plans to explore more complex questions using the new technology, and both he and Freeman foresee expansion of Thunder. “At every level, this is really just the beginning,” Freeman says.

Filed under brain activity zebrafish Thunder computational analysis neuroscience science

549 notes

What sign language teaches us about the brain

The world’s leading humanoid robot, ASIMO, has recently learnt sign language. The news of this breakthrough came just as I completed Level 1 of British Sign Language (I dare say it took me longer to master signing than it did the robot!). As a neuroscientist, the experience of learning to sign made me think about how the brain perceives this means of communicating.

For instance, during my training, I found that mnemonics greatly simplified my learning process. To sign the colour blue, you use the fingers of your right hand to rub the back of your left hand, my simple mnemonic for this sign being that the veins on the back of our hand appear blue. I was therefore forming an association between the word blue (English), the sign for blue (BSL), and the visual aid that links the two. However, the two languages differ markedly in that one relies on sounds and the other on visual signs.

Do our brains process these languages differently? It seems that for the most part, they don’t. And it turns out that brain studies of sign language users have helped bust a few myths.

Read more

Filed under sign language neuroimaging communication lesion studies neuroscience science

79 notes

Slow Walking Speed and Memory Complaints Can Predict Dementia

A study involving nearly 27,000 older adults on five continents found that nearly 1 in 10 met criteria for pre-dementia based on a simple test that measures how fast people walk and whether they have cognitive complaints. People who tested positive for pre-dementia were twice as likely as others to develop dementia within 12 years. The study, led by scientists at Albert Einstein College of Medicine of Yeshiva University and Montefiore Medical Center, was published online on July 16, 2014 in Neurology®, the medical journal of the American Academy of Neurology.

The new test diagnoses motoric cognitive risk syndrome (MCR). Testing for the newly described syndrome relies on measuring gait speed (how fast a person walks) and asking a few simple questions about a patient’s cognitive abilities, both of which take just seconds. The test is not reliant on the latest medical technology and can be done in a clinical setting, diagnosing people in the early stages of the dementia process. Early diagnosis is critical because it allows time to identify and possibly treat the underlying causes of the disease, which may delay or even prevent the onset of dementia in some cases.

“In many clinical and community settings, people don’t have access to the sophisticated tests—biomarker assays, cognitive tests or neuroimaging studies—used to diagnose people at risk for developing dementia,” said Joe Verghese, M.B.B.S., professor in the Saul R. Korey Department of Neurology and of medicine at Einstein, chief of geriatrics at Einstein and Montefiore, and senior author of the Neurology paper. “Our assessment method could enable many more people to learn if they’re at risk for dementia, since it avoids the need for complex testing and doesn’t require that the test be administered by a neurologist. The potential payoff could be tremendous—not only for individuals and their families, but also in terms of healthcare savings for society. All that’s needed to assess MCR is a stopwatch and a few questions, so primary care physicians could easily incorporate it into examinations of their older patients.”

The U.S. Centers for Disease Control and Prevention estimates that up to 5.3 million Americans—about 1 in 9 people age 65 and over—have Alzheimer’s disease, the most common type of dementia. That number is expected to more than double by 2050 due to population aging.

“As a young researcher, I examined hundreds of patients and noticed that if an older person was walking slowly, there was a good chance that his cognitive tests were also abnormal,” said Dr. Verghese, who is also the Murray D. Gross Memorial Faculty Scholar in Gerontology at Einstein. “This gave me the idea that perhaps we could use this simple clinical sign—how fast someone walks—to predict who would develop dementia. In a 2002 New England Journal of Medicine study, we reported that abnormal gait patterns accurately predict whether people will go on to develop dementia. MCR improves on the slow gait concept by evaluating not only patients’ gait speed but also whether they have cognitive complaints.”

The Neurology paper reported on the prevalence of MCR among 26,802 adults without dementia or disability aged 60 years and older enrolled in 22 studies in 17 countries. A significant number of adults—9.7 percent—met the criteria for MCR (i.e., abnormally slow gait and cognitive complaints). While the syndrome was equally common in men and women, highly educated people were less likely to test positive for MCR compared with less-educated individuals. A slow gait, said Dr. Verghese, is a walking speed slower than about one meter per second, which is about 2.2 miles per hour (m.p.h.). Less than 0.6 meters per second (or 1.3 m.p.h.) is “clearly abnormal.”
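
The thresholds quoted above are simple to apply. The sketch below converts gait speed to miles per hour and flags the two MCR criteria named in the article; the cutoffs simply mirror the figures quoted here and are not the exact study definitions, which varied by cohort.

    def metres_per_second_to_mph(speed_ms: float) -> float:
        # 1 m/s = 3.6 km/h, and 1 mile = 1.609344 km
        return speed_ms * 3.6 / 1.609344

    def screen_for_mcr(gait_speed_ms: float, has_cognitive_complaint: bool) -> str:
        slow_gait = gait_speed_ms < 1.0          # "slow" per the article (~2.2 mph)
        clearly_abnormal = gait_speed_ms < 0.6   # "clearly abnormal" (~1.3 mph)
        if slow_gait and has_cognitive_complaint:
            label = "meets MCR criteria"
        else:
            label = "does not meet MCR criteria"
        detail = "clearly abnormal gait" if clearly_abnormal else ("slow gait" if slow_gait else "normal gait")
        return f"{label} ({detail}, {metres_per_second_to_mph(gait_speed_ms):.1f} mph)"

    print(screen_for_mcr(0.9, has_cognitive_complaint=True))    # slow gait + complaint -> MCR
    print(screen_for_mcr(1.2, has_cognitive_complaint=True))    # normal gait -> not MCR
    print(screen_for_mcr(0.5, has_cognitive_complaint=False))   # slow gait alone -> not MCR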

To test whether MCR predicts future dementia, the researchers focused on four of the 22 studies that tested a total of 4,812 people for MCR and then evaluated them annually over an average follow-up period of 12 years to see which ones developed dementia. Those who met the criteria for MCR were nearly twice as likely to develop dementia over the following 12 years compared with people who did not.

Dr. Verghese emphasized that a slow gait alone is not sufficient for a diagnosis of MCR. “Walking slowly could be due to conditions such as arthritis or an inner ear problem that affects balance, which would not increase risk for dementia. To meet the criteria for MCR requires having a slow gait and cognitive problems. An example would be answering ‘yes’ to the question, ‘Do you think you have more memory problems than other people?’”

For patients meeting MCR criteria, said Dr. Verghese, the next step is to look for the causes of their slow gait and cognitive complaints. The search may reveal underlying—and controllable—problems. “Evidence increasingly suggests that brain health is closely tied to cardiovascular health—meaning that treatable conditions such as hypertension, smoking, high cholesterol, obesity and diabetes can interfere with blood flow to the brain and thereby increase a person’s risk for developing Alzheimer’s and other dementias,” said Dr. Verghese.

What about people who meet MCR criteria but no treatable underlying problems can be found?

“Even in the absence of a specific cause, we know that most healthy lifestyle factors, such as exercising and eating healthier, have been shown to reduce the rate of cognitive decline,” said Dr. Verghese. “In addition, our group has shown that cognitively stimulating activities—playing board games, card games, reading, writing and also dancing—can delay dementia’s onset. Knowing they’re at high risk for dementia can also help people and their families make arrangements for the future, which is an aspect of MCR testing that I’ve found is very important in my own clinical practice.”

Filed under dementia motoric cognitive risk syndrome gait speed cognitive decline neuroscience science

170 notes

Researchers discover that Klotho is neuroprotective against Alzheimer’s disease

Boston University School of Medicine researchers may have found a way to delay or even prevent Alzheimer’s disease (AD). They discovered that pre-treatment of neurons with the anti-aging protein Klotho can prevent neuron death in the presence of the toxic amyloid protein and glutamate. These findings currently appear in the Journal of Biological Chemistry.

Alzheimer’s disease is the most frequent age-related dementia, affecting 5.4 million Americans, including 13 percent of people age 65 and older and more than 40 percent of people over the age of 85. In AD, the cognitive decline and dementia result from the death of nerve cells that are involved in learning and memory. The amyloid protein and an excess of the neurotransmitter glutamate are partially responsible for the neuronal demise.

Nerve cells were grown in petri dishes and treated with or without Klotho for four hours. Amyloid or glutamate was then added to the dishes for 24 hours. In the dishes where Klotho was added, a much higher percentage of neurons survived than in the dishes without Klotho.

"Finding a neuroprotective agent that will protect nerve cells from amyloid that accumulates as a function of age in the brain is novel and of major importance," explained corresponding author Carmela R. Abraham, PhD, professor of biochemistry and pharmacology at BUSM. "We now have evidence that if more Klotho is present in the brain, it will protect the neurons from the oxidative stress induced by amyloid and glutamate.

According to the researchers, Klotho is a large protein that cannot penetrate the blood brain barrier, so it can’t be administered by mouth or injection. However, in a separate study the researchers have identified small molecules that can enter the brain and increase the levels of Klotho. “We believe that increasing Klotho levels with such compounds would improve the outcome for Alzheimer’s patients, and if started early enough would prevent further deterioration. This potential treatment has implications for other neurodegenerative diseases such as Parkinson’s, Huntington’s, ALS and brain trauma, as well,” added Abraham.

(Source: eurekalert.org)

Filed under klotho alzheimer's disease neuroprotection glutamate oxidative stress neuroscience science

114 notes

Anti-inflammatory drug can prevent neuron loss in Parkinson’s model

An experimental anti-inflammatory drug can protect vulnerable neurons and reduce motor deficits in a rat model of Parkinson’s disease, researchers at Emory University School of Medicine have shown.

The results were published Thursday, July 24 in the Journal of Parkinson’s Disease.

The findings demonstrate that the drug, called XPro1595, can reach the brain at sufficient levels and have beneficial effects when administered by subcutaneous injection, like an insulin shot. Previous studies of XPro1595 in animals tested more invasive modes of delivery, such as direct injection into the brain.

“This is an important step forward for anti-inflammatory therapies for Parkinson’s disease,” says Malu Tansey, PhD, associate professor of physiology at Emory University School of Medicine. “Our results provide a compelling rationale for moving toward a clinical trial in early Parkinson’s disease patients.”

The new research on subcutaneous administration of XPro1595 was funded by the Michael J. Fox Foundation for Parkinson’s Research (MJFF). XPro1595 is licensed by FPRT Bio, which is seeking funding for a clinical trial to test its efficacy in the early stages of Parkinson’s disease.

“We are proud to have supported this work and glad to see positive pre-clinical results,” said Marco Baptista, PhD, MJFF associate director of research programs. “A therapy that could slow Parkinson’s progression would be a game changer for the millions living with this disease, and this study is a step in that direction.”

In addition, Tansey and Yoland Smith, PhD, from Yerkes National Primate Research Center, were awarded a grant this week from the Parkinson’s Disease Foundation to test XPro1595 in a non-human primate model of Parkinson’s.

Evidence has been piling up that inflammation is an important mechanism driving the progression of Parkinson’s disease. XPro1595 targets tumor necrosis factor (TNF), a critical inflammatory signaling molecule, and is specific to the soluble form of TNF. This specificity would avoid compromising immunity to infections, a known side effect of existing anti-TNF drugs used to treat disorders such as rheumatoid arthritis.

“Inflammation is probably not the initiating event in Parkinson’s disease, but it is important for the neurodegeneration that follows,” Tansey says. “That’s why we believe that an anti-inflammatory agent, such as one that counteracts soluble TNF, could substantially slow the progression of the disease.”

Postdoctoral fellow Christopher Barnum, PhD and colleagues used a model of Parkinson’s disease in rats in which the neurotoxin 6-hydroxydopamine (6-OHDA) is injected into only one side of the brain. This reproduces some aspects of Parkinson’s disease: neurons that produce dopamine in the injected side of the brain die, leading to impaired movement on the opposite side of the body.

When XPro1595 was given to the animals three days after 6-OHDA injection, just 15 percent of the dopamine-producing neurons were lost five weeks later. That compares to controls in which 55 percent of the same neurons were lost. By reducing dopamine neuron loss with XPro1595, the researchers were also able to reduce motor impairment. In fact, the degree of dopamine cell loss was highly correlated with both the degree of motor impairment and the degree of immune cell activation.

When XPro1595 was given two weeks after injection, 44 percent of the vulnerable neurons were still lost, suggesting that there is a limited window of opportunity to intervene.

“Recent clinical studies indicate there is a four- or five-year window between diagnosis of Parkinson’s disease and the time when the maximum number of vulnerable neurons are lost,” Dr. Tansey says. “If this is true, and if inflammation is playing a key role during this window, then we might be able to slow or halt the progression of Parkinson’s with a treatment like XPro1595.”

(Source: news.emory.edu)

Filed under parkinson's disease substantia nigra inflammation microglia astrocytes neuroscience science

126 notes

Experiences at every stage of life contribute to cognitive abilities in old age

Early life experiences, such as childhood socioeconomic status and literacy, may have greater influence on the risk of cognitive impairment late in life than such demographic characteristics as race and ethnicity, a large study by researchers with the UC Davis Alzheimer’s Disease Center and the University of Victoria, Canada, has found.

“Declining cognitive function in older adults is a major personal and public health concern,” said Bruce Reed, professor of neurology and associate director of the UC Davis Alzheimer’s Disease Center.

“But not all people lose cognitive function, and understanding the remarkable variability in cognitive trajectories as people age is of critical importance for prevention, treatment and planning to promote successful cognitive aging and minimize problems associated with cognitive decline.”

The study, “Life Experiences and Demographic Influences on Cognitive Function in Older Adults,” is published online in Neuropsychology, a journal of the American Psychological Association. It is one of the first comprehensive examinations of the multiple influences of varied demographic factors early in life and their relationship to cognitive aging.

The research was conducted in a group of over 300 diverse men and women who spoke either English or Spanish. They were recruited from senior citizen social, recreational and residential centers, as well as churches and health-care settings. At the time of recruitment, all study participants were 60 or older and had no major psychiatric illnesses or life-threatening medical illnesses. Participants were Caucasian, African-American or Hispanic.

The extensive testing included multidisciplinary diagnostic evaluations through the UC Davis Alzheimer’s Disease Center in either English or Spanish, which permitted comparisons across a diverse cohort of participants.

Consistent with previous research, the study found that non-Latino Caucasians scored 20 to 25 percent higher on tests of semantic memory (general knowledge) and 13 to 15 percent higher on tests of executive functioning compared to the other ethnic groups. However, ethnic differences in executive functioning disappeared and differences in semantic memory were reduced by 20 to 30 percent when group differences in childhood socioeconomic status, adult literacy and extent of physical activity during adulthood were considered. 
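
The phrase “when group differences … were considered” refers to statistical adjustment: re-estimating the group difference after covariates are included in the model. A minimal sketch of that idea on simulated data follows; the variable names and numbers are hypothetical, so it illustrates the method rather than the study’s analysis.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 300

    # Simulated data in which an apparent group difference in test scores is
    # actually carried by differences in childhood SES, literacy, and activity.
    group = rng.integers(0, 2, size=n)                     # 0 / 1 for two hypothetical groups
    ses = rng.standard_normal(n) + 0.8 * group
    literacy = rng.standard_normal(n) + 0.8 * group
    activity = rng.standard_normal(n) + 0.5 * group
    score = 2.0 * ses + 1.5 * literacy + 1.0 * activity + rng.standard_normal(n)

    df = pd.DataFrame(dict(score=score, group=group, ses=ses,
                           literacy=literacy, activity=activity))

    # Unadjusted model: the group coefficient looks large.
    unadjusted = smf.ols("score ~ group", data=df).fit()
    # Adjusted model: with the covariates included, the group coefficient shrinks toward zero.
    adjusted = smf.ols("score ~ group + ses + literacy + activity", data=df).fit()

    print("unadjusted group effect:", round(unadjusted.params["group"], 2))
    print("adjusted group effect:  ", round(adjusted.params["group"], 2))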

“This study is unusual in that it examines how many different life experiences affect cognitive decline in late life,” said Dan Mungas, professor of neurology and associate director of the UC Davis Alzheimer’s Disease Research Center. 

“It shows that variables like ethnicity and years of education that influence cognitive test scores in a single evaluation are not associated with rate of cognitive decline, but that specific life experiences like level of reading attainment and intellectually stimulating activities are predictive of the rate of late-life cognitive decline. This suggests that intellectual stimulation throughout the life span can reduce cognitive decline in old age.”

Regardless of ethnicity, advanced age and apolipoprotein E (APOE) genotype were associated with increased cognitive decline over the average of four years that participants were followed. APOE is the largest known genetic risk factor for late-onset Alzheimer’s. Less decline was experienced by persons who reported more engagement in recreational activities in late life and who maintained their levels of activity engagement from middle age to old age. Single-word reading — the ability to decode a word on sight, which often is considered an indication of quality of educational experience — was also associated with less cognitive decline, a finding that was true for both English and Spanish readers, irrespective of their race or ethnicity. These findings suggest that early life experiences affect late-life cognition indirectly, through literacy and late-life recreational pursuits, the authors said.

“These findings are important,” explained Paul Brewster, lead author of the study, a doctoral student at the University of Victoria, Canada, and a pre-doctoral psychology intern at the UC San Diego Department of Psychiatry, “because it challenges earlier research that suggests associations between race and ethnicity, particularly among Latinos, and an increased risk of late-life cognitive impairment and dementia.

“Our findings suggest that the influences of demographic factors on late-life cognition may be reflective of broader socioeconomic factors, such as educational opportunity and related differences in physical and mental activity across the life span.”

(Source: ucdmc.ucdavis.edu)

Filed under alzheimer's disease cognitive impairment life experience apoE4 psychology neuroscience science

74 notes

Division of labour in the fish brain

For a fish to swim forward, the nerve cells, or neurons, in its brain and spine have to control the swishing movements of its tail with very close coordination. However, the posture of the tail, which determines swimming direction somewhat like a rudder, also needs to be fine-tuned by the brain’s activity. Using the innovative method of optogenetics, scientists from the Max Planck Institute of Neurobiology in Martinsried have now identified a group of only about 15 nerve cells which steer the movements of the tail fin. Movements of the human body are also controlled via nerve pathways in the same region of the brain, which may therefore use processing mechanisms similar to those in fish.

For a long time, neurobiologists have been trying to find out how neuronal networks control both animal and human behaviour. In this context, there is controversy as to whether the brain’s organisation is decentralised as opposed to modular. In decentralised organisation, the interaction of a large number of neurons produces a specific behaviour pattern. If this is the case, individual neurons cannot be assigned an exact function. On the other hand, if the brain has a modular structure, individual regions might possess certain competencies, each making a specific contribution to behaviour. These types of neuronal circuit modules could be combined in many ways and influence a broad range of different behavioural responses.

Switches in the fish brain?

Researchers in Herwig Baier’s Group at the Max Planck Institute of Neurobiology want to get to the bottom of the brain’s organisational structure with the aid of zebrafish larvae. A network known as the descending reticular formation is located in the brainstem of these animals. The neurons of that region are optimally suited for studying the organisation of the brain: the cells are in direct contact with motor neurons in the spinal cord of the fish and can thus directly influence tail movements. “The reticular formation is like a ‘cockpit’ for the fish, and we asked ourselves whether there are individual ‘switches’ or ‘joysticks’, which are used to control the movements of the tail”, is how Herwig Baier summarises this challenge.

In their search for these switches, the researchers concentrated on a small brain nucleus (nMLF) within the reticular formation. But how can the influence of individual nMLF neurons on tail movements be studied? It is only recently that such investigations even became a possibility. Using the new method of optogenetics, the activity of nerve cells can be influenced with light. Since a zebrafish larva – including its brain – is transparent, scientists can very accurately “switch on” small sets of genetically modified cells by exposing the larva to blue light. Consequently, tail movements that are induced in this way can be attributed to identified neurons.

Neurons and tillers

The first series of tests showed that the cells of the nMLF region seem to be involved in a variety of movements – from forward propulsion to rotational motion. A second experimental series using optogenetic stimulation, however, suggested that the cells control the deflection of the tail in particular. Are the nMLF cells thus part of a multifunctional centre or are they truly specialised to perform certain functions? To resolve this question, the neurobiologists performed another set of trials in which they very specifically removed small sets of nMLF cells from the circuit. “This experiment gave us our breakthrough”, recalls Tod Thiele, lead author of the now published study.

The results show that, while nMLF cells are active in many aspects of swimming, a subset of these neurons contribute to only one part of the movement: they determine swimming direction through the posture of the tail. Thus, this population of neurons in the nMLF region is more akin to a specialised module within a decentralised control system of the swimming apparatus. Herwig Baier explains it like this: “We can compare the whole setup with the propulsion of a motorboat”. The boat’s engine, which drives the propeller, determines the thrust, whereas the tiller steers the boat. It seems that the tasks in the brain are divided up in a very similar way.

Some time ago, Herwig Baier’s team discovered a small region in the hindbrain, which acts like an engine and propels the fish forwards. “With the nMLF cells, we have now also found the tiller in the fish brain”, says Herwig Baier. In the human brain, movements are also controlled by a multitude of nuclei in the reticular formation. The study therefore suggests that the allocation of tasks in our brain could be similar to that of the zebrafish.

Filed under zebrafish optogenetics motor control postural control midbrain nMLF neuroscience science

250 notes

A weighty discovery

Humans have developed sophisticated concepts like mass and gravity to explain a wide range of everyday phenomena, but scientists have remarkably little understanding of how such concepts are represented by the brain.

Using advanced neuroimaging techniques, Queen’s University researchers have revealed how the brain stores knowledge about an object’s weight – information critical to our ability to successfully grasp and interact with objects in our environment.

Jason Gallivan, a Banting postdoctoral fellow in the Department of Psychology, and Randy Flanagan, a professor in the Department of Psychology, used functional magnetic resonance imaging (fMRI) to uncover what regions of the human brain represent an object’s weight prior to lifting that object. They found that knowledge of object weight is stored in ventral visual cortex, a brain region previously thought to only represent those properties of an object that can be directly viewed such as its size, shape, location and texture.

“We are working on various projects to determine how the brain produces actions on the world,” explains Dr. Gallivan about the work he is undertaking at the Centre for Neuroscience Studies at Queen’s. “Simply looking at an object doesn’t provide the brain with information about how much that object weighs. Take for example a suitcase. There is often nothing about its visual appearance that informs you of whether it is packed with clothes or empty. Rather, this is information that must be derived through recent interactions with that object and stored in the brain so as to guide our movements the next time we must lift and interact with that object.”

According to previous research, the ventral visual cortex supports visual processing for perception and object recognition whereas the dorsal visual cortex supports visual processing for the control of action. However, this division of labour had only been tested for visually guided actions like reaching, which are directed towards objects, and not for actions involving the manipulation of objects, which requires access to stored knowledge about object properties.

“Because information about object weight is primarily important for the control of action, we thought that this information might only be stored in motor-related areas of the brain,” says Dr. Gallivan. “Surprisingly, however, we found that this non-visual information was also stored in ventral visual cortex. Presumably this allows for the weight of an object to become easily associated with its visual properties.”

In ongoing research, Drs. Gallivan and Flanagan are using transcranial magnetic stimulation (TMS) to temporarily disrupt targeted brain areas in order to assess their contribution to skilled object manipulation. By identifying which areas of the brain control certain motor skills, Drs. Gallivan and Flanagan’s research will be helpful in assessing patients with neurological impairments including stroke.
The work was funded by the Canadian Institutes of Health Research (CIHR). The research was recently published in Current Biology.

A weighty discovery

Humans have developed sophisticated concepts like mass and gravity to explain a wide range of everyday phenomena, but scientists have remarkably little understanding of how such concepts are represented by the brain.

Using advanced neuroimaging techniques, Queen’s University researchers have revealed how the brain stores knowledge about an object’s weight – information critical to our ability to successfully grasp and interact with objects in our environment.

Jason Gallivan, a Banting postdoctoral fellow in the Department of Psychology, and Randy Flanagan, a professor in the Department of Psychology, used functional magnetic resonance imaging (fMRI) to uncover which regions of the human brain represent an object’s weight prior to lifting that object. They found that knowledge of object weight is stored in the ventral visual cortex, a brain region previously thought to represent only those properties of an object that can be directly viewed, such as its size, shape, location and texture.
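
The article does not spell out the analysis, but fMRI studies of this kind typically use multi-voxel pattern analysis: a classifier is trained to tell, for example, “heavy” from “light” trials using the activity pattern in a candidate region, and above-chance decoding is taken as evidence that the region carries weight information. The Python sketch below runs that logic on simulated data with an off-the-shelf scikit-learn classifier; none of the numbers, voxel counts or modelling choices come from the Current Biology paper.

# Toy multi-voxel pattern analysis (MVPA) on simulated data; illustrative only.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 50

# Labels: 0 = light object, 1 = heavy object, alternating across trials.
weight = np.tile([0, 1], n_trials // 2)

# Simulated voxel patterns: noise plus a small weight-dependent signal,
# standing in for a region whose activity encodes object weight.
signature = rng.normal(size=n_voxels)
patterns = rng.normal(size=(n_trials, n_voxels)) + 0.5 * np.outer(weight, signature)

# If the classifier decodes weight above chance (0.5), the region's activity
# pattern carries information about the object's weight before it is lifted.
accuracy = cross_val_score(LinearSVC(), patterns, weight, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f}")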

“We are working on various projects to determine how the brain produces actions on the world,” explains Dr. Gallivan about the work he is undertaking at the Centre for Neuroscience Studies at Queen’s. “Simply looking at an object doesn’t provide the brain with information about how much that object weighs. Take for example a suitcase. There is often nothing about its visual appearance that informs you of whether it is packed with clothes or empty. Rather, this is information that must be derived through recent interactions with that object and stored in the brain so as to guide our movements the next time we must lift and interact with that object.”

According to previous research, the ventral visual cortex supports visual processing for perception and object recognition whereas the dorsal visual cortex supports visual processing for the control of action. However, this division of labour had only been tested for visually guided actions like reaching, which are directed towards objects, and not for actions involving the manipulation of objects, which requires access to stored knowledge about object properties.

“Because information about object weight is primarily important for the control of action, we thought that this information might only be stored in motor-related areas of the brain,” says Dr. Gallivan. “Surprisingly, however, we found that this non-visual information was also stored in ventral visual cortex. Presumably this allows for the weight of an object to become easily associated with its visual properties.”

In ongoing research, Drs. Gallivan and Flanagan are using transcranial magnetic stimulation (TMS) to temporarily disrupt targeted brain areas in order to assess their contribution to skilled object manipulation. By identifying which areas of the brain control certain motor skills, Drs. Gallivan and Flanagan’s research will be helpful in assessing patients with neurological impairments including stroke.

The work was funded by the Canadian Institutes of Health Research (CIHR). The research was recently published in Current Biology.

Filed under visual cortex transcranial magnetic stimulation object weight occipitotemporal cortex neuroscience science

92 notes

Researchers Uncover an Unexpected Role for Endostatin in the Nervous System

Researchers at UC San Francisco have discovered that endostatin, a protein that once aroused intense interest as a possible cancer treatment, plays a key role in the stable functioning of the nervous system.

A substance that occurs naturally in the body, endostatin potently blocks the formation of new blood vessels. In studies in mice in the late 1990s, endostatin treatment virtually eliminated cancer by shutting down the blood supply to tumors, but subsequent human clinical trials proved disappointing.

“It was a very big surprise” to find that endostatin, through some other mechanism, helps to maintain the proper workings of synapses, the sites where communication between nerve cells takes place, said Graeme W. Davis, PhD, Herzstein Distinguished Professor of Medicine in the Department of Biochemistry and Biophysics at UCSF and senior author of the new study. “Endostatin was not on our radar.”

The findings were reported online July 24 in the journal Neuron.

Synapses are continually shaped and reshaped by experience, a phenomenon known as plasticity. But for those changes to be meaningful, said Davis, they must take place against a stable background, which paradoxically requires another form of change that he and colleagues call “homeostatic plasticity.” Just as we change our pace, slowing down or speeding up, to keep abreast of a running partner, neurons adjust aspects of their function at synapses to compensate for changes in their synaptic partners brought on by aging, illness, or other factors.

In an example of homeostatic plasticity, in the neuromuscular disease myasthenia gravis, as muscle cells become less responsive to the neurotransmitter acetylcholine, nerve cells ramp up their secretion of the neurotransmitter to keep the system in balance for as long as possible. Some researchers believe that in other disorders, including autism and schizophrenia, a failure in such homeostatic mechanisms keeps synapses from functioning properly.

In previous research Davis noticed that applying a toxin to a muscle cell in the fruit fly Drosophila melanogaster triggers homeostatic plasticity in the neuron that forms a synapse on that muscle cell: the neuron—which is called presynaptic, because it is “before” the synapse with the muscle cell—reliably releases more neurotransmitter, just as happens when muscle cells begin to malfunction in myasthenia gravis.

Davis has since built on this model of homeostatic plasticity by painstakingly knocking out Drosophila genes one by one and recording from presynaptic neurons to see which genes are necessary for the homeostatic response, because it is these genes that may be compromised in diseases affecting the process.

“So far we’ve tested about 1,000 genes this way, which has entailed close to 10,000 recordings,” Davis said.
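
The article does not say how each recording is scored, but a common way to summarise such an experiment is to ask whether the evoked response stays constant after the toxin weakens the muscle’s sensitivity, that is, whether a rise in presynaptic release (quantal content) makes up for the smaller single-vesicle response. The Python sketch below scores two invented example genotypes this way; the voltages, the 80% threshold and the scoring rule are illustrative assumptions, not the lab’s actual pipeline.

# Hypothetical scoring of a homeostatic-plasticity screen (all numbers invented).
# Quantal content = evoked response (EPSP) / single-vesicle response (mEPSP).
# A synapse that compensates boosts quantal content after the toxin shrinks the
# mEPSP, keeping the EPSP near its baseline; a mutant that cannot compensate
# shows a depressed EPSP.

def quantal_content(epsp_mv: float, mepsp_mv: float) -> float:
    return epsp_mv / mepsp_mv

BASELINE_EPSP_MV = 40.0  # assumed pre-toxin evoked response

recordings = {
    # genotype: (mEPSP after toxin in mV, EPSP after toxin in mV)
    "wild type":            (0.5, 38.0),  # compensates: EPSP stays near baseline
    "multiplexin knockout": (0.5, 22.0),  # fails to compensate
}

for genotype, (mepsp, epsp) in recordings.items():
    compensated = epsp >= 0.8 * BASELINE_EPSP_MV  # arbitrary 80% cut-off
    print(f"{genotype}: quantal content = {quantal_content(epsp, mepsp):.0f}, "
          f"homeostatic compensation {'intact' if compensated else 'blocked'}")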

Using this technique, Davis and colleagues observed at one point that knocking out a gene called multiplexin significantly hampered homeostatic plasticity in presynaptic neurons. But because that gene helps to form a structural protein known as collagen—which in humans is a component of ligaments, tendons, and cartilage—the finding wasn’t immediately considered relevant to synaptic function.

The team learned that the multiplexin protein can be snipped by an enzyme to produce endostatin, so in experiments led by postdoctoral fellow Tingting Wang, PhD, they tested whether endostatin might play a role in homeostatic plasticity.

“Nobody picked up multiplexin to work on for a couple of years, because we didn’t think a collagen could be that interesting,” Davis said. “Then, when a new postdoc, Tingting Wang, came to the lab, we started thinking about it harder.”

When the group genetically deleted the portion of Drosophila multiplexin that forms endostatin, presynaptic neurons behaved normally under baseline conditions, but homeostatic plasticity was severely compromised when the toxin was applied to postsynaptic muscle cells. Conversely, when the team overexpressed endostatin at Drosophila synapses lacking multiplexin, homeostasis was restored, whether endostatin was expressed in muscle cells or in presynaptic neurons.

The research team is unsure precisely how and where endostatin exerts its effects on homeostatic plasticity, but they believe that multiplexin is cleaved at the postsynaptic site to form endostatin, and that the endostatin signal is conveyed to the presynaptic neuron to alter its function. “Because so many people in the cancer world have studied endostatin, there is a great set of tools available” to study the protein, Davis said, so he expects his group to make rapid progress in addressing these questions.

“Despite its checkered history in cancer, we know endostatin is a signaling molecule and we know that the brain has a great deal of collagen—we just haven’t known what it does, and we certainly don’t know what endostatin’s receptors in the brain might be,” Davis said. “But it’s pretty exciting to think about a new signaling molecule with a profound role in the stabilization of the function of neural circuits.”

(Source: ucsf.edu)

Filed under endostatin multiplexin homeostatic plasticity nervous system neuroscience science
