Neuroscience

Articles and news from the latest research reports.

Caffeine affects boys and girls differently after puberty

Caffeine intake by children and adolescents has been rising for decades, due in large part to the popularity of caffeinated sodas and energy drinks, which now are marketed to children as young as four. Despite this, there is little research on the effects of caffeine on young people.

One researcher who is conducting such investigations is Jennifer Temple, PhD, associate professor in the Department of Exercise and Nutrition Sciences, University at Buffalo School of Public Health and Health Professions.

Her new study finds that after puberty, boys and girls experience different heart rate and blood pressure changes after consuming caffeine. Girls also experience some differences in caffeine effect during their menstrual cycles.

The study, “Cardiovascular Responses to Caffeine by Gender and Pubertal Stage,” will be published online June 16 in the July 2014 edition of the journal Pediatrics.

Past studies, including those by this research team, have shown that caffeine increases blood pressure and decreases heart rate in children, teens and adults, including pre-adolescent boys and girls. The purpose here was to learn whether gender differences in cardiovascular responses to caffeine emerge after puberty and if those responses differ across phases of the menstrual cycle.

Temple says, “We found an interaction between gender and caffeine dose, with boys having a greater response to caffeine than girls, as well as interactions between pubertal phase, gender and caffeine dose, with gender differences present in post-pubertal, but not in pre-pubertal, participants.

“Finally,” she says, “we found differences in responses to caffeine across the menstrual cycle in post-pubertal girls, with decreases in heart rate that were greater in the mid-luteal phase and blood pressure increases that were greater in the mid-follicular phase of the menstrual cycle.

“In this study, we were looking exclusively into the physical results of caffeine ingestion,” she says.

Phases of the menstrual cycle, marked by changing levels of hormones, are the follicular phase, which begins on the first day of menstruation and ends with ovulation, and the luteal phase, which follows ovulation and is marked by significantly higher levels of progesterone than the previous phase.

Future research in this area will determine the extent to which gender differences are mediated by physiological factors, such as steroid hormone levels, or by differences in patterns of caffeine use, caffeine use by peers, and autonomy and control over beverage purchases, Temple says.

This double-blind, placebo-controlled, dose-response study was funded by a grant from the National Institute on Drug Abuse of the National Institutes of Health. 

It examined heart rate and blood pressure before and after administration of placebo and two doses of caffeine (1 and 2 mg/kg) in pre-pubertal (8- to 9-year-old; n = 52) and post-pubertal (15- to 17-year-old; n = 49) boys (n = 54) and girls (n = 47).
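
Because the study's caffeine doses were weight-based (1 and 2 mg/kg), the absolute amount each participant received scaled with body mass. A minimal sketch of that arithmetic in Python, using hypothetical body weights that are not from the study:

```python
def caffeine_dose_mg(weight_kg: float, dose_mg_per_kg: float) -> float:
    """Absolute caffeine dose for a weight-based (mg/kg) protocol."""
    return weight_kg * dose_mg_per_kg

# Hypothetical body weights, for illustration only.
for weight_kg in (30.0, 60.0):        # e.g., a younger vs. an older participant
    for dose in (1.0, 2.0):           # the study's two dose levels, in mg/kg
        print(f"{weight_kg} kg at {dose} mg/kg -> {caffeine_dose_mg(weight_kg, dose)} mg")
```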

Filed under caffeine puberty blood pressure adolescents sex differences neuroscience science

Proteins causing daytime sleepiness tied to bone formation, providing target for osteoporosis

Orexin proteins, which are blamed for spontaneous daytime sleepiness, also play a crucial role in bone formation, according to findings by UT Southwestern Medical Center researchers. The findings could potentially give rise to new treatments for osteoporosis, the researchers say.

Orexins are a type of protein used by nerve cells to communicate with each other. Since their discovery at UT Southwestern more than 15 years ago, they have been found to regulate a number of behaviors, including arousal, appetite, reward, energy expenditure, and wakefulness. Orexin deficiency, for example, causes narcolepsy – spontaneous daytime sleepiness. Thus, orexin antagonists are promising treatments for insomnia, some of which have been tested in Phase III clinical trials.

UT Southwestern researchers, working with colleagues in Japan, now have found that mice lacking orexins also have very thin and fragile bones that break easily because they have fewer cells called osteoblasts, which are responsible for building bones.

“Osteoporosis is highly prevalent, especially among post-menopausal women. We are hoping that we might be able to take advantage of the already available orexin-targeting small molecules to potentially treat osteoporosis,” said Dr. Yihong Wan, Assistant Professor of Pharmacology, the Virginia Murchison Linthicum Scholar in Medical Research, and senior author for the study, published in the journal Cell Metabolism.

Osteoporosis, the most common type of bone disease in which bones become fragile and susceptible to fracture, affects more than 10 million Americans. The disease, which disproportionately affects seniors and women, leads to more than 1.5 million fractures and some 40,000 deaths annually. In addition, the negative effects impact productivity, mental health, and quality of life. One in five people with hip fractures, for example, end up in nursing homes.

Orexins seem to play a dual role in the process: they both promote and block bone formation. On the bones themselves, orexins interact with another protein, orexin receptor 1 (OX1R), which decreases the levels of the hunger hormone ghrelin. This slows down the production of new osteoblasts and, therefore, blocks bone formation locally. At the same time, orexins interact with orexin receptor 2 (OX2R) in the brain. In this case, the interaction reduces the circulating levels of leptin, a hormone known to decrease bone mass, and thereby promotes bone formation. Therefore, osteoporosis prevention and treatment may be achieved by either inhibiting OX1R or activating OX2R.
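
The dual regulation described above can be captured in a small lookup table. A sketch in Python, where the field names are my own but the effects are as reported in the article:

```python
# Summary of the dual orexin regulation of bone formation described above.
# Keys and field names are illustrative; the effects follow the article.
orexin_receptors = {
    "OX1R": {
        "site": "bone (peripheral)",
        "hormone_affected": "ghrelin (decreased)",
        "net_effect_on_bone": "blocks formation",
        "therapeutic_idea": "inhibit OX1R",
    },
    "OX2R": {
        "site": "brain (central)",
        "hormone_affected": "leptin (circulating levels reduced)",
        "net_effect_on_bone": "promotes formation",
        "therapeutic_idea": "activate OX2R",
    },
}

for receptor, info in orexin_receptors.items():
    print(receptor, "->", info["net_effect_on_bone"])
```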

“We were very intrigued by this yin-yang-style dual regulation,” said Dr. Wan, a member of the Cecil H. and Ida Green Center for Reproductive Biology Sciences and UT Southwestern’s Harold C. Simmons Comprehensive Cancer Center. “It is remarkable that orexins manage to regulate bone formation by using two different receptors located in two different tissues.”

Central nervous system regulation through OX2R, and therefore the promotion of bone formation, proved dominant over regulation through OX1R: when the group examined mice lacking both OX1R and OX2R, the animals had very fragile bones with decreased bone formation. Conversely, mice that expressed high levels of orexins had increased numbers of osteoblasts and enhanced bone formation.

(Source: utsouthwestern.edu)

Filed under orexin osteoporosis narcolepsy osteoblasts ghrelin bone formation neuroscience science

Rescue of Alzheimer’s Memory Deficit Achieved by Reducing ‘Excessive Inhibition’

A new drug target to fight Alzheimer’s disease has been discovered by a research team led by Gong Chen, a Professor of Biology and the Verne M. Willaman Chair in Life Sciences at Penn State University. The discovery also has potential for development as a novel diagnostic tool for Alzheimer’s disease, which is the most common form of dementia and one for which no cure has yet been found. A scientific paper describing the discovery will be published in Nature Communications on 13 June 2014. 

Chen’s research was motivated by the recent failure in clinical trials of once-promising Alzheimer’s drugs being developed by large pharmaceutical companies. “Billions of dollars were invested in years of research leading up to the clinical trials of those Alzheimer’s drugs, but they failed the test after they unexpectedly worsened the patients’ symptoms,” Chen said. The research behind those drugs had targeted the long-recognized feature of Alzheimer’s brains: the sticky buildup of the amyloid protein known as plaques, which can cause neurons in the brain to die. “The research of our lab and others now has focused on finding new drug targets and on developing new approaches for diagnosing and treating Alzheimer’s disease,” Chen explained.

"We recently discovered an abnormally high concentration of one inhibitory neurotransmitter in the brains of deceased Alzheimer’s patients," Chen said. He and his research team found the neurotransmitter, called GABA (gamma-aminobutyric acid), in deformed cells called "reactive astrocytes" in a structure in the core of the brain called the dentate gyrus. This structure is the gateway to hippocampus, an area of the brain that is critical for learning and memory.  

Chen’s team found that the GABA neurotransmitter was drastically increased in the deformed versions of the normally large, star-shaped “astrocyte” cells which, in a healthy individual, surround and support individual neurons in the brain. “Our research shows that the excessively high concentration of the GABA neurotransmitter in these reactive astrocytes is a novel biomarker that we hope can be targeted in further research as a tool for the diagnosis and treatment of Alzheimer’s disease,” Chen said. 

Chen’s team developed new analysis methods to evaluate neurotransmitter concentrations in the brains of normal and genetically modified mouse models for Alzheimer’s disease (AD mice). “Our studies of AD mice showed that the high concentration of the GABA neurotransmitter in the reactive astrocytes of the dentate gyrus correlates with the animals’ poor performance on tests of learning and memory,” Chen said. His lab also found that the high concentration of the GABA neurotransmitter in the reactive astrocytes is released through an astrocyte-specific GABA transporter, a novel drug target found in this study, to enhance GABA inhibition in the dentate gyrus. With too much inhibitory GABA neurotransmitter, the neurons in the dentate gyrus are not fired up like they normally would be when a healthy person is learning something new or remembering something already learned.

Importantly, Chen said, “After we inhibited the astrocytic GABA transporter to reduce GABA inhibition in the brains of the AD mice, we found that they showed better memory capability than the control AD mice. We are very excited and encouraged by this result because it might explain why previous clinical trials failed by targeting amyloid plaques alone. One possible explanation is that while amyloid plaques may be reduced by targeting amyloid proteins, the other downstream alterations triggered by amyloid deposits, such as the excessive GABA inhibition discovered in our study, cannot be corrected by targeting amyloid proteins alone. Our studies suggest that reducing the excessive GABA inhibition to the neurons in the brain’s dentate gyrus may lead to a novel therapy for Alzheimer’s disease. An ultimate successful therapy may be a cocktail of compounds acting on several drug targets simultaneously.”

Filed under alzheimer's disease astrocytes GABA hippocampus dentate gyrus neuroscience science

Fungal protein found to cross blood-brain barrier

In a remarkable series of experiments on a fungus that causes cryptococcal meningitis, a deadly infection of the membranes that cover the spinal cord and brain, investigators at UC Davis have isolated a protein that appears to be responsible for the fungus’ ability to cross from the bloodstream into the brain.

The discovery — published online June 3 in mBio, the open-access, peer-reviewed journal of the American Society for Microbiology — has important implications for developing a more effective treatment for Cryptococcus neoformans, the cause of the condition, and other brain infections, as well as for brain cancers that are difficult to treat with conventional medications. 

“This study fills a significant gap in our understanding of how C. neoformans crosses the blood-brain barrier and causes meningitis,” said Angie Gelli, associate professor of pharmacology at UC Davis and principal investigator of the study. “It is our hope that our findings will lead to improved treatment for this fungal disease as well as other diseases of the central nervous system.”

Normally the brain is protected from bacterial, viral and fungal pathogens in the bloodstream by a tightly packed layer of endothelial cells lining capillaries within the central nervous system — the so-called blood-brain barrier. Relatively few organisms — and drugs that could fight brain infections or cancers — can breach this protective barrier.

The fungus studied in this research causes cryptococcal meningoencephalitis, a usually fatal brain infection that annually affects some 1 million people worldwide, most often those with an impaired immune system. People typically first develop an infection in the lungs after inhalation of the fungal spores of C. neoformans in soil or pigeon droppings. The pathogen then spreads to the brain and other organs.

Unique protein identified

In an effort to discover how C. neoformans breaches the blood-brain barrier, the investigators isolated candidate proteins from the cryptococcal cell surface. One was a previously uncharacterized metalloprotease that they named Mpr1. (A protease is an enzyme — a specialized protein — that promotes a chemical reaction; a metalloprotease contains a metal ion — in this case zinc — that is essential for its activity.) The M36 class of metalloproteases to which Mpr1 belongs is unique to fungi and does not occur in mammalian cells.

The investigators next artificially generated a strain of C. neoformans that lacked Mpr1 on the cell surface. Unlike the normal wild-type C. neoformans, the strain without Mpr1 could not cross an artificial model of the human blood-brain barrier.

They next took a strain of common baking yeast — Saccharomyces cerevisiae — that does not cross the blood-brain barrier and does not normally express Mpr1, and modified it to express Mpr1 on its cell surface. This strain then gained the ability to cross the blood-brain barrier model.

Investigators then infected mice with either the C. neoformans that lacked Mpr1 or the wild-type strain by injecting the organisms into their bloodstream. Comparing the brain pathology of mice 48 hours later, they found numerous cryptococci-filled cysts throughout the brain tissue of mice infected with the wild-type strain; these lesions were undetectable in those infected with the strain lacking Mpr1. In another experiment, 37 days after infection by the inhalation route, 85 percent of the mice exposed to the wild-type C. neoformans had died, while all of those given the fungus without Mpr1 were still alive.
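
The reported survival difference is stark enough to express directly. A minimal sketch, where the group size is hypothetical (the article gives only percentages):

```python
group_size = 20                  # hypothetical number of mice per group

wild_type_mortality = 0.85       # 85 percent of wild-type-infected mice died by day 37
mpr1_null_mortality = 0.0        # all mice given the Mpr1-lacking strain survived

expected_deaths_wt = round(group_size * wild_type_mortality)
risk_difference = wild_type_mortality - mpr1_null_mortality

print(expected_deaths_wt)        # expected deaths in a wild-type group of this size
print(risk_difference)           # absolute risk difference between strains
```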

“Our studies are the first clear demonstration of a specific role for a fungal protease in invading the central nervous system,” said Gelli. “The details of exactly how it crosses is an important new area under investigation.”

New targeted therapies possible

According to Gelli, their discovery has significant therapeutic potential via two important mechanisms. Either Mpr1 — or an aspect of the mechanism by which it crosses the blood-brain barrier — could be a target of new drugs for treating meningitis caused by C. neoformans. In a person who develops cryptococcal lung infection, such a treatment would ideally make the fungus less likely to enter the brain and lead to a rapidly fatal meningitis.

Second, Mpr1 could be developed as part of a drug-delivery vehicle for brain infections and cancers. An antibiotic or cancer-fighting drug that is unable to cross the blood-brain barrier on its own could be attached to a nanoparticle containing Mpr1, allowing it to hitch a ride and deliver its cargo where it is needed.

“The biggest obstacle to treating many brain cancers and infections is getting good drugs through the blood-brain barrier,” said Gelli. “If we could design an effective delivery system into the brain, the impact would be enormous for treating some of these terrible diseases.”

Gelli’s group is currently pursuing such a nanoparticle drug-delivery system using Mpr1. They are also further investigating the exact molecular mechanism by which Mpr1 breaches the blood-brain barrier.

(Source: ucdmc.ucdavis.edu)

Filed under blood brain barrier meningitis CNS drug delivery Mpr1 medicine science

Study Describes New Models for Testing Parkinson’s Disease Immune-based Drugs

Using powerful, newly developed cell culture and mouse models of sporadic Parkinson’s disease (PD), a team of researchers from the Perelman School of Medicine at the University of Pennsylvania has demonstrated that immunotherapy with specifically targeted antibodies may block the development and spread of PD pathology in the brain. By intercepting the distorted and misfolded alpha-synuclein (α-syn) proteins that enter and propagate in neurons, creating aggregates, the researchers prevented the development of pathology and also reversed some of the effects of already-existing disease. The α-syn clumps, called Lewy bodies, eventually kill affected neurons, which leads to clinical PD. Their work appears this week in Cell Reports.

Earlier studies by senior author Virginia M.Y. Lee, PhD, and her colleagues at Penn’s Center for Neurodegenerative Disease Research (CNDR) had demonstrated a novel pathology of PD in which misfolded α-syn fibrils initiate and propagate Lewy bodies via cell-to-cell transmission. This was accomplished using synthetically created α-syn fibrils that allowed them to observe how Parkinson’s pathology developed and spread in a mouse and in neurons in a dish. The present study is a proof-of-concept of how these models might be used to develop new PD therapies.

"Once we created these models, the first thing that came to mind is immunotherapy," says Lee, CNDR director and professor of Pathology and Laboratory Medicine. "If you can develop antibodies that would stop the spreading, you may have a way to at least retard the progression of PD." The current work, she explains, uses antibodies that were generated and characterized at CNDR previously to see if they would reduce the pathology both in cell culture and in animal models.

Lee’s team focused on anti-α-syn monoclonal antibodies (MAbs). “In animal models,” Lee explains, “the question we want to ask is, can we reduce the pathology and also rescue cell loss to improve the behavioral deficits?”

Using their previously established sporadic PD mouse model, the researchers conducted both prevention and intervention preclinical studies. For the prevention studies, they injected synthetic preformed mouse α-syn fibrils into normal wild-type mice and then immediately treated the mice with Syn303, one of the MAbs, or with IgG, a common antibody used as a control.

The control group without MAb administration showed PD pathology in multiple brain areas over time, while the mice treated with Syn303 showed significantly reduced pathology in the same areas. For intervention studies, they treated PD mice with Syn303 several days after fibril injections when Lewy bodies were already present. They found that the progression of pathology was markedly reduced in the Syn303-treated mice versus mice that did not receive Syn303.

"But there are some limitations to experiments in live mice since it is difficult to directly study the mechanism of how it works," Lee says. "To do that, we went back to the cell culture model to ask whether or not the antibody basically prevents the uptake of misfolded α-syn." The cell culture experiments showed that MAbs prevented the uptake of misfolded α-syn fibrils by neurons and sharply reduced the recruitment of natural α-syn into new Lewy body aggregates. 

Next steps for the team will be to refine the immunotherapeutic approach. “We need to make better antibodies that have high affinity for pathology and not the normal protein,” says Lee.

The team’s models also open up new opportunities for studying and treating PD. “The system really allows us to identify new targets for treating PD,” Lee says. “The cell model could be a platform to look for small molecular drugs that would inhibit pathology.” Their approach could also serve as a foundation for genetically based studies to identify specific genes involved in PD pathology. 

“Hopefully more people will use the model to look for new targets or screen for treatments for PD. That would be terrific,” concludes Lee.

Filed under parkinson's disease lewy bodies alpha synuclein antibodies neuroscience science

Scientists take totally tubular journey through brain cells

In a new study, scientists at the National Institutes of Health took a molecular-level journey into microtubules, the hollow cylinders inside brain cells that act as skeletons and internal highways. They watched how a protein called tubulin acetyltransferase (TAT) labels the inside of microtubules. The results, published in Cell, answer long-standing questions about how TAT tagging works and offer clues as to why it is important for brain health.

(Image caption: NIH scientists watched the inside of brain cell tubes, called microtubules, get tagged by a protein called TAT. Tagging is a critical process in the health and development of nerve cells. Credit: Courtesy of the Roll-Mecak lab, NINDS, Bethesda, MD)

Microtubules are constantly tagged by proteins in the cell to designate them for specialized functions, in the same way that roads are labeled for fast or slow traffic or for maintenance. TAT coats specific locations inside the microtubules with a chemical called an acetyl group. How the various labels are added to the cellular microtubule network has remained a mystery. Recent findings suggest that problems with tagging microtubules may lead to some forms of cancer and nervous system disorders, including Alzheimer’s disease; tagging defects have also been linked to a rare blinding disorder and to Joubert syndrome, an uncommon brain development disorder.

“This is the first time anyone has been able to peer inside microtubules and catch TAT in action,” said Antonina Roll-Mecak, Ph.D., an investigator at the NIH’s National Institute of Neurological Disorders and Stroke (NINDS), Bethesda, Maryland, and the leader of the study.

Microtubules are found in all of the body’s cells. They are assembled like building blocks, using a protein called tubulin. Microtubules are constructed first by aligning tubulin building blocks into long strings. Then the strings align themselves side by side to form a sheet. Eventually the sheet grows wide enough that it closes up into a cylinder. TAT then bonds an acetyl group to alpha tubulin, a subunit of the tubulin protein.

Some microtubules are short-lived and can rapidly change lengths by adding or removing tubulin pieces along one end, whereas others remain unchanged for longer times. Recognizing the difference may help cells function properly. For example, cells may send cargo along stable microtubules and avoid ones that are being rebuilt. Cells appear to use a variety of chemical labels to describe the stability of microtubules.

“Our study uncovers how TAT may help cells distinguish between stable microtubules and ones that are under construction,” said Dr. Roll-Mecak. According to Dr. Roll-Mecak, high levels of microtubule tagging are unique to nerve cells and may be the reason that they have complex shapes allowing them to make elaborate connections in the brain.

For decades scientists knew that the insides of long-lived microtubules were often tagged with acetyl groups by TAT. Changes in acetylation may influence the health of nerve cells. Some studies have shown that blocking this form of microtubule tagging leads to nerve defects, brain abnormalities or degeneration of nerve fibers. Since the discovery of microtubule acetylation, scientists have been puzzled about how TAT accesses the inside of the microtubules and how the tagging reaction happens.

To watch TAT at work, Dr. Roll-Mecak and her colleagues took high-resolution movies of individual TAT molecules interacting with microtubules in real time. They saw that TAT surfs along the inside of microtubules; although it can find acetylation sites quickly, it adds each tag very slowly.

In general, tagging reactions work like keys fitting into locks: the better the key fits, the faster the lock can open. Similarly, the rate of the reactions is determined by how well TAT molecules fit around tagging sites. 

Dr. Roll-Mecak’s team investigated this idea by using a technique called X-ray crystallography to look at how atoms on TAT molecules interact with acetylation sites on tubulin molecules. Their results suggested that TAT fit poorly around the sites. 

“It looks as though TAT can easily journey through microtubules spotting acetylation sites but may only label those that are stable for longer periods of time,” said Dr. Roll-Mecak.

This may help cells identify the microtubules they need to rapidly change shapes or send cargo to other places. Further studies may help researchers understand how microtubule tagging influences nerve cells in health and disease.
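The kinetic picture above — TAT finds sites quickly but tags them slowly, so only long-lived microtubules accumulate labels — can be illustrated with a toy simulation. The encounter rate, tagging probability, and site count below are invented for illustration and are not the paper’s measured values:

```python
import random

def acetylated_fraction(lifetime, n_sites=100, encounter_rate=50.0,
                        p_tag=0.001, seed=0):
    """Toy model of TAT tagging inside one microtubule.

    TAT encounters acetylation sites quickly (encounter_rate per unit
    time) but succeeds in adding a tag only rarely (p_tag per encounter),
    so only microtubules that persist a long time accumulate many tags.
    All parameter values are illustrative, not measured.
    """
    rng = random.Random(seed)
    tagged = set()
    for _ in range(int(encounter_rate * lifetime)):
        site = rng.randrange(n_sites)   # TAT surfs to a random site
        if rng.random() < p_tag:        # the tagging step itself is slow
            tagged.add(site)
    return len(tagged) / n_sites

# A short-lived microtubule ends up barely tagged, while a stable one
# accumulates tags, giving the cell a readable stability signal.
short_lived = acetylated_fraction(lifetime=1)
long_lived = acetylated_fraction(lifetime=1000)
```

Under these assumptions the acetylation level acts as a stability marker: `short_lived` stays near zero while `long_lived` climbs toward saturation, matching the intuition that slow tagging selectively labels durable microtubules.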

(Source: ninds.nih.gov)

Filed under brain cells microtubules x-ray crystallography tubulin acetyltransferase neuroscience science

980 notes

When good people do bad things

When people get together in groups, unusual things can happen — both good and bad. Groups create important social institutions that an individual could not achieve alone, but there can be a darker side to such alliances: Belonging to a group makes people more likely to harm others outside the group.

“Although humans exhibit strong preferences for equity and moral prohibitions against harm in many contexts, people’s priorities change when there is an ‘us’ and a ‘them,’” says Rebecca Saxe, an associate professor of cognitive neuroscience at MIT. “A group of people will often engage in actions that are contrary to the private moral standards of each individual in that group, sweeping otherwise decent individuals into ‘mobs’ that commit looting, vandalism, even physical brutality.”

Several factors play into this transformation. When people are in a group, they feel more anonymous, and less likely to be caught doing something wrong. They may also feel a diminished sense of personal responsibility for collective actions.

Saxe and colleagues recently studied a third factor that cognitive scientists believe may be involved in this group dynamic: the hypothesis that when people are in groups, they “lose touch” with their own morals and beliefs, and become more likely to do things that they would normally believe are wrong.

In a study that recently went online in the journal NeuroImage, the researchers measured brain activity in a part of the brain involved in thinking about oneself. They found that in some people, this activity was reduced when the subjects participated in a competition as part of a group, compared with when they competed as individuals. Those people were more likely to harm their competitors than people who did not exhibit this decreased brain activity.

“This process alone does not account for intergroup conflict: Groups also promote anonymity, diminish personal responsibility, and encourage reframing harmful actions as ‘necessary for the greater good.’ Still, these results suggest that at least in some cases, explicitly reflecting on one’s own personal moral standards may help to attenuate the influence of ‘mob mentality,’” says Mina Cikara, a former MIT postdoc and lead author of the NeuroImage paper.

Group dynamics

Cikara, who is now an assistant professor at Carnegie Mellon University, started this research project after experiencing the consequences of a “mob mentality”: During a visit to Yankee Stadium, her husband was ceaselessly heckled by Yankees fans for wearing a Red Sox cap. “What I decided to do was take the hat from him, thinking I would be a lesser target by virtue of the fact that I was a woman,” Cikara says. “I was so wrong. I have never been called names like that in my entire life.”

The harassment, which continued throughout the trip back to Manhattan, provoked a strong reaction in Cikara, who isn’t even a Red Sox fan.

“It was a really amazing experience because what I realized was I had gone from being an individual to being seen as a member of ‘Red Sox Nation.’ And the way that people responded to me, and the way I felt myself responding back, had changed, by virtue of this visual cue — the baseball hat,” she says. “Once you start feeling attacked on behalf of your group, however arbitrary, it changes your psychology.”

Cikara, then a third-year graduate student at Princeton University, started to investigate the neural mechanisms behind the group dynamics that produce bad behavior. In the new study, done at MIT, Cikara, Saxe (who is also an associate member of MIT’s McGovern Institute for Brain Research), former Harvard University graduate student Anna Jenkins, and former MIT lab manager Nicholas Dufour focused on a part of the brain called the medial prefrontal cortex. When someone is reflecting on himself or herself, this part of the brain lights up in functional magnetic resonance imaging (fMRI) brain scans.

A couple of weeks before the study participants came in for the experiment, the researchers surveyed each of them about their social-media habits, as well as their moral beliefs and behavior. This allowed the researchers to create individualized statements for each subject that were true for that person — for example, “I have stolen food from shared refrigerators” or “I always apologize after bumping into someone.”

When the subjects arrived at the lab, their brains were scanned as they played a game once on their own and once as part of a team. The task was to press a button whenever they saw a statement related to social media, such as “I have more than 600 Facebook friends.”

The subjects also saw their personalized moral statements mixed in with sentences about social media. Brain scans revealed that when subjects were playing for themselves, the medial prefrontal cortex lit up much more when they read moral statements about themselves than statements about others, consistent with previous findings. However, during the team competition, some people showed a much smaller difference in medial prefrontal cortex activation when they saw the moral statements about themselves compared to those about other people.

Those people also turned out to be much more likely to harm members of the competing group during a task performed after the game. Each subject was asked to select photos that would appear with the published study, from a set of four photos apiece of two teammates and two members of the opposing team. The subjects with suppressed medial prefrontal cortex activity chose the least flattering photos of the opposing team members, but not of their own teammates.

“This is a nice way of using neuroimaging to try to get insight into something that behaviorally has been really hard to explore,” says David Rand, an assistant professor of psychology at Yale University who was not involved in the research. “It’s been hard to get a direct handle on the extent to which people within a group are tapping into their own understanding of things versus the group’s understanding.”

Getting lost

The researchers also found that after the game, people with reduced medial prefrontal cortex activity had more difficulty remembering the moral statements they had heard during the game.

“If you need to encode something with regard to the self and that ability is somehow undermined when you’re competing with a group, then you should have poor memory associated with that reduction in medial prefrontal cortex signal, and that’s exactly what we see,” Cikara says.

Cikara hopes to follow up on these findings to investigate what makes some people more likely to become “lost” in a group than others. She is also interested in studying whether people are slower to recognize themselves or pick themselves out of a photo lineup after being absorbed in a group activity.

Filed under prefrontal cortex social cognition intergroup competition psychology neuroscience science

92 notes

Findings point toward one of first therapies for Lou Gehrig’s disease

Researchers have determined that a copper compound known for decades may form the basis for a therapy for amyotrophic lateral sclerosis (ALS), or Lou Gehrig’s disease.

In a new study just published in the Journal of Neuroscience, scientists from Australia, the United States (Oregon), and the United Kingdom showed in laboratory animal tests that oral intake of this compound significantly extended the lifespan and improved the locomotor function of transgenic mice that are genetically engineered to develop this debilitating and terminal disease.

In humans, no therapy for ALS has ever been discovered that could extend lifespan more than a few additional months. Researchers in the Linus Pauling Institute at Oregon State University say this approach has the potential to change that, and may have value against Parkinson’s disease as well.

“We believe that with further improvements, and following necessary human clinical trials for safety and efficacy, this could provide a valuable new therapy for ALS and perhaps Parkinson’s disease,” said Joseph Beckman, a distinguished professor of biochemistry and biophysics in the OSU College of Science.

“I’m very optimistic,” said Beckman, who received the 2012 Discovery Award from the OHSU Medical Research Foundation as the leading medical researcher in Oregon.

ALS was first identified as a progressive and fatal neurodegenerative disease in the late 1800s and gained international recognition in 1939 when it was diagnosed in American baseball legend Lou Gehrig. It is marked by the deterioration and death of motor neurons in the spinal cord and has been traced to mutations in copper-zinc superoxide dismutase, or SOD1. Ordinarily, superoxide dismutase is an antioxidant whose proper function is essential to life.

When SOD1 lacks its metal co-factors, it “unfolds” and becomes toxic, leading to the death of motor neurons. The metals copper and zinc are important in stabilizing this protein and can help it remain properly folded for more than 200 years.

“The damage from ALS is happening primarily in the spinal cord and that’s also one of the most difficult places in the body to absorb copper,” Beckman said. “Copper itself is necessary but can be toxic, so its levels are tightly controlled in the body. The therapy we’re working toward delivers copper selectively into the cells in the spinal cord that actually need it. Otherwise, the compound keeps copper inert.”

“This is a safe way to deliver a micronutrient like copper exactly where it is needed,” Beckman said.

By restoring a proper balance of copper into the brain and spinal cord, scientists believe they are stabilizing the superoxide dismutase in its mature form, while improving the function of mitochondria. This has already extended the lifespan of affected mice by 26 percent, and with continued research the scientists hope to achieve even more extension.

The compound that does this, copper (ATSM), has been studied for use in some cancer treatments and is relatively inexpensive to produce.

“In this case, the result was just the opposite of what one might have expected,” said Blaine Roberts, lead author on the study and a research fellow at the University of Melbourne, who received his doctorate at OSU working with Beckman.

“The treatment increased the amount of mutant SOD, and by accepted dogma this means the animals should get worse,” he said. “But in this case, they got a lot better. This is because we’re making a targeted delivery of copper just to the cells that need it.

“This study opens up a previously neglected avenue for new disease therapies, for ALS and other neurodegenerative disease,” Roberts said.

(Source: oregonstate.edu)

Filed under ALS Lou Gehrig’s disease copper SOD1 motor neurons neuroscience science

293 notes

Synchronized brain waves enable rapid learning

The human mind can rapidly absorb and analyze new information as it flits from thought to thought. These quickly changing brain states may be encoded by synchronization of brain waves across different brain regions, according to a new study from MIT neuroscientists.

The researchers found that as monkeys learn to categorize different patterns of dots, two brain areas involved in learning — the prefrontal cortex and the striatum — synchronize their brain waves to form new communication circuits.

“We’re seeing direct evidence for the interactions between these two systems during learning, which hasn’t been seen before. Category-learning results in new functional circuits between these two areas, and these functional circuits are rhythm-based, which is key because that’s a relatively new concept in systems neuroscience,” says Earl Miller, the Picower Professor of Neuroscience at MIT and senior author of the study, which appears in the June 12 issue of Neuron.

There are millions of neurons in the brain, each producing its own electrical signals. These combined signals generate oscillations known as brain waves, which can be measured by electroencephalography (EEG). The research team focused on EEG patterns from the prefrontal cortex — the seat of the brain’s executive control system — and the striatum, which controls habit formation.

The phenomenon of brain-wave synchronization likely precedes the changes in synapses, or connections between neurons, believed to underlie learning and long-term memory formation, Miller says. That process, known as synaptic plasticity, is too time-consuming to account for the human mind’s flexibility, he believes.

“If you can change your thoughts from moment to moment, you can’t be doing it by constantly making new connections and breaking them apart in your brain. Plasticity doesn’t happen on that kind of time scale,” says Miller, who is a member of MIT’s Picower Institute for Learning and Memory. “There’s got to be some way of dynamically establishing circuits to correspond to the thoughts we’re having in this moment, and then if we change our minds a moment later, those circuits break apart somehow. We think synchronized brain waves may be the way the brain does it.”

The paper’s lead author is former Picower Institute postdoc Evan Antzoulatos, who is now at the University of California at Davis.

Humming together

Miller’s lab has previously shown that during category-learning, neurons in the striatum become active early, followed by slower activation of neurons in the prefrontal cortex. “The striatum learns very simple things really quickly, and then its output trains the prefrontal cortex to gradually pick up on the bigger picture,” Miller says. “The striatum learns the pieces of the puzzle, and then the prefrontal cortex puts the pieces of the puzzle together.”

In the new study, the researchers wanted to investigate whether this activity pattern actually reflects communication between the prefrontal cortex and striatum, or if each region is working independently. To do this, they measured EEG signals as monkeys learned to assign patterns of dots into one of two categories.

At first, the animals were shown just two different examples, or “exemplars,” from each category. After each round, the number of exemplars was doubled. In the early stages, the animals could simply memorize which exemplars belonged to each category. However, the number of exemplars eventually became too large for the animals to memorize all of them, and they began to learn the general traits that characterized each category.

By the end of the experiment, when the researchers were showing 256 novel exemplars, the monkeys were able to categorize all of them correctly.
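The shift the experiment probes — from rote memorization of individual exemplars to extracting the category structure — can be sketched with a toy two-dimensional stand-in for the dot patterns. The prototypes, noise level, and nearest-prototype rule below are all invented for illustration and are not the study’s actual task or model:

```python
import random

# Two hypothetical dot-pattern categories, reduced to 2-D prototypes.
PROTOTYPES = {"A": [0.0, 0.0], "B": [1.0, 1.0]}

def make_exemplar(label, rng, noise=0.2):
    """Jitter the category prototype to create a novel exemplar."""
    return [x + rng.uniform(-noise, noise) for x in PROTOTYPES[label]]

def rote_answer(memory, exemplar):
    """Rote memorization: answers only for exemplars seen verbatim."""
    for seen, label in memory:
        if seen == exemplar:
            return label
    return None  # never seen before -> no answer

def prototype_answer(exemplar):
    """Category knowledge: classify by the nearest prototype."""
    return min(PROTOTYPES, key=lambda l: sum(
        (a - b) ** 2 for a, b in zip(exemplar, PROTOTYPES[l])))

rng = random.Random(42)
memory = [(make_exemplar(l, rng), l) for l in PROTOTYPES for _ in range(4)]
novel = [(make_exemplar(l, rng), l) for l in PROTOTYPES for _ in range(128)]

rote_correct = sum(rote_answer(memory, x) == l for x, l in novel)
proto_correct = sum(prototype_answer(x) == l for x, l in novel)
# Memorization fails on every novel exemplar, while the category rule
# classifies all 256 of them correctly.
```

The sketch mirrors the experiment’s logic: once the exemplar pool outgrows what can be memorized, only a learner that has abstracted the category structure keeps classifying novel patterns correctly.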

As the monkeys shifted from rote memorization to learning the categories, the researchers saw a corresponding shift in EEG patterns. Brain waves known as “beta bands,” produced independently by the prefrontal cortex and the striatum, began to synchronize with each other. This suggests that a communication circuit is forming between the two regions, Miller says.

“There is some unknown mechanism that allows these resonance patterns to form, and these circuits start humming together,” he says. “That humming may then foster subsequent long-term plasticity changes in the brain, so real anatomical circuits can form. But the first thing that happens is they start humming together.”
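Synchrony between two recorded signals is commonly quantified as spectral coherence within a frequency band. A minimal sketch of that idea on synthetic data (this is not the study’s analysis pipeline; the 13–30 Hz beta range, the signal names, and the noise levels are all assumptions for illustration):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 1000                                  # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)

shared = np.sin(2 * np.pi * 20 * t)        # common 20 Hz "beta" drive
pfc = shared + 0.5 * rng.standard_normal(t.size)
striatum = shared + 0.5 * rng.standard_normal(t.size)
indep = rng.standard_normal(t.size)        # control signal, no shared drive

f, c_sync = coherence(pfc, striatum, fs=fs, nperseg=1024)
f, c_ctrl = coherence(pfc, indep, fs=fs, nperseg=1024)

beta = (f >= 13) & (f <= 30)               # beta band mask
print("beta coherence, coupled pair:", round(c_sync[beta].mean(), 3))
print("beta coherence, control pair:", round(c_ctrl[beta].mean(), 3))
```

Signals that "hum together" at a shared frequency show high coherence at that frequency, while independent signals do not, which is the kind of signature a synchrony analysis looks for.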

A little later, as an animal nailed down the two categories, two separate circuits formed between the striatum and prefrontal cortex, each corresponding to one of the categories.

“This is the first paper that provides data suggesting that coupling in the beta-band between prefrontal cortex and striatum may play a key role in category-formation. In addition to revealing a novel mechanism involved in category-learning, the results also contribute to better understanding of the significance of coupled beta-band oscillations in the brain,” says Andreas Engel, a professor of physiology at the University Medical Center Hamburg-Eppendorf in Germany.

“Expanding your knowledge”

Previous studies have shown that during cognitively demanding tasks, there is increased synchrony between the frontal cortex and visual cortex, but Miller’s lab is the first to show specific patterns of synchrony linked to specific thoughts.

Miller and Antzoulatos also showed that once the prefrontal cortex learns the categories and sends them to the striatum, the categories undergo further modification as new information comes in, allowing more expansive learning to take place. This iteration can occur over and over.

“That’s how you get the open-ended nature of human thought. You keep expanding your knowledge,” Miller says. “The prefrontal cortex learning the categories isn’t the end of the game. The cortex is learning these new categories and then forming circuits that can send the categories down to the striatum as if it’s just brand-new material for the brain to elaborate on.”

In follow-up studies, the researchers are now looking at how the brain learns more abstract categories, and how activity in the striatum and prefrontal cortex might reflect that type of abstraction.

Filed under brainwaves learning prefrontal cortex striatum neuroscience science

142 notes

With the right rehabilitation, paralyzed rats learn to grip again
After a large stroke, motor skills barely improve, even with rehabilitation. An experiment conducted on rats demonstrates that a course of therapy combining drug-induced stimulation of nerve-fiber growth with motor training can be successful. The key, however, is the correct sequence: Paralyzed animals only make an almost complete recovery if the training is delayed until after the growth-promoting drugs have been administered, as researchers from the University of Zurich, ETH Zurich and the University of Heidelberg reveal.
Only if the timing, dosage and kind of rehabilitation are right can motor functions make an almost full recovery after a large stroke. Rats that were paralyzed down one side by a stroke almost managed to regain their motor functions fully if they were given the ideal combination of rehabilitative training and substances that boosted the growth of nerve fibers. Anatomical studies confirmed the importance of the right rehabilitation schedule: Depending on the therapeutic design, different patterns of new nerve fibers sprouted into the cervical spinal cord from the healthy part of the brain, and these patterns aided functional recovery to varying degrees. The study conducted by an interdisciplinary team headed by Professor Martin Schwab from the Brain Research Institute at the University of Zurich and ETH Zurich’s Neuroscience Center is another milestone in research on the repair of brain and spinal cord injuries.
“This new rehabilitative approach triggered an astonishing recovery of motor skills in rats, at least, which may become important for the treatment of stroke patients in the future,” says first author Anna-Sophia Wahl. At present, patients have to deal with often severe motor-function, language and vision problems, and their quality of life is often heavily affected.
Allow nerves to grow first, then train 
On the one hand, the treatment of rats after a stroke involves specific immune therapy, where so-called Nogo proteins are blocked with antibodies. These proteins in the tissue around the nerve fibers inhibit nerve-fiber growth. If they are blocked, nerve fibers begin to sprout in the injured sections of the brain and spinal cord and relay nerve impulses again. On the other hand, the stroke animals, whose front legs were paralyzed, underwent physical training – namely, gripping food pellets. All the rats received antibody treatment first to boost nerve-fiber growth and – either at the same time or only afterwards – motor training. The results are surprising: The animals that began their training later regained a remarkable 85 percent of their original motor skills. For the rats that were trained straight after the stroke in parallel with the growth-enhancing antibodies, however, it was a different story: At 15 percent, their physical performance in the grip test remained very low.
Meticulous design very promising
The researchers consider timing a crucial factor for the success of the rehabilitation: An early application of growth stimulators – such as antibodies against the protein Nogo-A – triggers increased sprouting and growth of nerve fibers. The subsequent training is essential to sift out and stabilize the key neural circuits for the recovery of the motor functions. For instance, an automatic, computer-based analysis of the anatomical imaging data revealed that new fibers in the spinal cord sprouted in different patterns depending on the course of treatment. By reversibly deactivating the newly grown nerve fibers, the neurobiologists were ultimately able to demonstrate for the first time that a group of these fibers is essential for the observed recovery of motor function: Nerve fibers that grew into the spinal cord from the intact front half of the brain – changing sides – can reconnect the spinal cord circuits of the rats’ paralyzed limbs to the brain, enabling the animals to grip again.
“Our study reveals how important a meticulous therapeutic design is for the most successful rehabilitation possible,” sums up study head Martin Schwab. “The brain has enormous potential for the reorganization and reestablishment of its functions. With the right therapies at the right time, this can be increased in a targeted fashion.”
Literature:
Wahl, A.S., Omlor, W., Rubio, J.C., Chen, J.L., Zheng, H., Schröter, A., Gullo, M., Weinmann, O., Kobayashi, K., Helmchen, F., Ommer, B., Schwab, M.E. Asynchronous therapy restores motor control by rewiring of the rat corticospinal tract after stroke. Science, June 13, 2014.


Filed under stroke motor function motor control rehabilitation nerve fibers neuroscience science
