Neuroscience

Articles and news from the latest research reports.

Posts tagged science

91 notes

Gene-silencing study finds new targets for Parkinson’s disease

Scientists at the National Institutes of Health have used RNA interference (RNAi) technology to reveal dozens of genes which may represent new therapeutic targets for treating Parkinson’s disease. The findings also may be relevant to several diseases caused by damage to mitochondria, the biological power plants found in cells throughout the body.

"We discovered a network of genes that may regulate the disposal of dysfunctional mitochondria, opening the door to new drug targets for Parkinson’s disease and other disorders," said Richard Youle, Ph.D., an investigator at the National Institute of Neurological Disorders and Stroke (NINDS) and a leader of the study. The findings were published online in Nature. Dr. Youle collaborated with researchers from the National Center for Advancing Translational Sciences (NCATS).

Mitochondria are tubular structures with rounded ends that use oxygen to convert many chemical fuels into adenosine triphosphate, the main energy source that powers cells. Multiple neurological disorders, including Parkinson’s disease and movement disorders such as Charcot-Marie-Tooth disease and the ataxias, are linked to genes that help regulate the health of mitochondria.

Some cases of Parkinson’s disease have been linked to mutations in the gene that codes for parkin, a protein that normally roams inside cells and tags damaged mitochondria as waste. The damaged mitochondria are then degraded by cells’ lysosomes, which serve as a biological trash disposal system. Known mutations in parkin prevent tagging, resulting in accumulation of unhealthy mitochondria in the body.

RNAi is a natural process occurring in cells that helps regulate genes. Since its discovery in 1998, scientists have used RNAi as a tool to investigate the function of genes and their involvement in health and disease.

Dr. Youle and his colleagues worked with Scott Martin, Ph.D., a coauthor of the paper and an NCATS researcher who is in charge of NIH’s RNAi facility. The RNAi group used robotics to introduce small interfering RNAs (siRNAs) into human cells to individually turn off nearly 22,000 genes. They then used automated microscopy to examine how silencing each gene affected the ability of parkin to tag mitochondria.

"One of NCATS’ goals is to develop, leverage and improve innovative technologies, such as RNAi screening, which is used in collaborations across NIH to increase our knowledge of gene function in the context of human disease," said Dr. Martin.

For this study, the researchers used RNAi to screen human cells to identify genes that help parkin tag damaged mitochondria. They found that at least four genes, called TOMM7, HSPA1L, BAG4 and SIAH3, may act as helpers. Turning off some genes, such as TOMM7 and HSPA1L, inhibited parkin tagging, whereas switching off other genes, including BAG4 and SIAH3, enhanced tagging. Previous studies showed that many of the genes encode proteins that are found in mitochondria or help regulate a process called ubiquitination, which controls protein levels in cells.
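
To make the screen’s logic concrete, here is a minimal, hypothetical sketch of how hits might be called from such an imaging readout. The table layout, the "tagging_score" measure, the control label, and the cutoff are all assumptions for illustration, not details from the paper.

```python
# Hypothetical hit-calling for an imaging-based RNAi screen. Assumes one
# row per well: the silenced gene and a "tagging_score" summarizing how
# well parkin tagged damaged mitochondria in that well.
import numpy as np
import pandas as pd

def call_hits(wells: pd.DataFrame, z_cutoff: float = 3.0) -> pd.DataFrame:
    """Robust z-score each gene's tagging score against negative controls."""
    controls = wells.loc[wells["gene"] == "NEG_CTRL", "tagging_score"]
    center = controls.median()
    # Median absolute deviation, scaled to approximate a standard deviation.
    mad = 1.4826 * (controls - center).abs().median()

    per_gene = wells.groupby("gene")["tagging_score"].median().to_frame()
    per_gene["z"] = (per_gene["tagging_score"] - center) / mad
    # Silencing a helper of tagging lowers the score (negative z, like the
    # TOMM7-type hits); silencing a suppressor raises it (positive z).
    per_gene["call"] = np.select(
        [per_gene["z"] <= -z_cutoff, per_gene["z"] >= z_cutoff],
        ["tagging_inhibited", "tagging_enhanced"],
        default="no_effect",
    )
    return per_gene.drop(index="NEG_CTRL", errors="ignore")
```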

Next the researchers tested one of the genes in human nerve cells. The researchers used a process called induced pluripotent stem cell technology to create the cells from human skin. Turning off the TOMM7 gene in nerve cells also appeared to inhibit tagging of mitochondria. Further experiments supported the idea that these genes may be new targets for treating neurological disorders.

"These genes work like quality control agents in a variety of cell types, including neurons," said Dr. Youle. "The identification of these helper genes provides the research community with new information that may improve our understanding of Parkinson’s disease and other neurological disorders."

The RNAi screening data from this study are available in NIH’s public database, PubChem, which any researcher may analyze for additional information about the role of dysfunctional mitochondria in neurological disorders.
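
For readers who want to pull such data programmatically, PubChem exposes assay records through its PUG REST interface. The sketch below assumes the screen’s data table is reachable through the standard CSV endpoint; the assay ID shown is a placeholder, not the real identifier for this study.

```python
# Hypothetical: download a PubChem BioAssay data table as a DataFrame.
import io
import pandas as pd
import requests

def fetch_pubchem_assay(aid: int) -> pd.DataFrame:
    url = f"https://pubchem.ncbi.nlm.nih.gov/rest/pug/assay/aid/{aid}/CSV"
    resp = requests.get(url, timeout=60)
    resp.raise_for_status()          # fail loudly on a bad AID or outage
    return pd.read_csv(io.StringIO(resp.text))

# screen = fetch_pubchem_assay(123456)  # placeholder AID, not this study's
```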

"This study shows how the latest high-throughput genetic technologies can rapidly reveal insights into fundamental disease mechanisms," said Story Landis, Ph.D., director of the NINDS. "We hope the results will help scientists around the world find new treatments for these devastating disorders."

Filed under parkinson's disease mitochondria genes RNA interference parkin neuroscience science

280 notes

Scientists find brain region that helps you make up your mind

One of the smallest parts of the brain is getting a second look after new research suggests it plays a crucial role in decision making.

A University of British Columbia study published today in Nature Neuroscience says the lateral habenula, a region of the brain linked to depression and avoidance behaviours, has been largely misunderstood and may be integral in cost-benefit decisions.

“These findings clarify the brain processes involved in the important decisions that we make on a daily basis, from choosing between job offers to deciding which house or car to buy,” says Prof. Stan Floresco of UBC’s Dept. of Psychology and Brain Research Centre (BRC). “It also suggests that the scientific community has misunderstood the true functioning of this mysterious, but important, region of the brain.”

In the study, scientists trained lab rats to choose between a consistent small reward (one food pellet) and a potentially larger reward (four food pellets) that appeared sporadically. Like humans, the rats tended to choose larger rewards when costs—in this case, the amount of time they had to wait before receiving food—were low and preferred smaller rewards when such costs were higher.

Previous studies had suggested that turning off the lateral habenula would cause rats to choose the larger, riskier reward more often, but that was not the case. Instead, the rats selected either option at random, apparently losing the ability to pick the better option.
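
As a toy illustration of that interpretation (my own sketch, not the study’s model), consider an agent that discounts the large reward by its delay and picks the better option, versus a lesioned agent whose value comparison is disabled and whose choice therefore degenerates to chance.

```python
# Toy cost-benefit chooser (an illustration, not the study's model).
import random

def discounted_value(pellets: float, delay_s: float, k: float = 0.2) -> float:
    """Hyperbolic discounting: a reward is worth less the longer the wait."""
    return pellets / (1.0 + k * delay_s)

def choose(delay_s: float, lesioned: bool = False) -> str:
    small = discounted_value(1, 0.0)        # one pellet, immediately
    large = discounted_value(4, delay_s)    # four pellets, after a wait
    if lesioned:
        # No usable value signal: the choice becomes a coin flip,
        # mirroring the random choices seen after habenula inactivation.
        return random.choice(["small", "large"])
    return "large" if large > small else "small"

for delay in (1, 5, 30):
    print(delay, choose(delay), choose(delay, lesioned=True))
```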

The findings have important implications for depression treatment. “Deep brain stimulation – which is thought to inactivate the lateral habenula — has been reported to improve depressive symptoms in humans,” Floresco says. “But our findings suggest these improvements may not be because patients feel happier. They may simply no longer care as much about what is making them feel depressed.”

Background

Floresco, who conducted the study with PhD candidate Colin Stopper, says more investigation is needed to understand the complete brain functions involved in cost-benefit decision processes and related behaviour. A greater understanding of decision-making processes is also crucial, they say, because many psychiatric disorders, such as schizophrenia, stimulant abuse and depression, are associated with impairments in these processes.

The lateral habenula is considered one of the oldest regions of the brain, evolution-wise, the researchers say.

Filed under decision making lateral habenula depression brain neuroscience science

135 notes

Multibeam femtosecond optical transfection for the ultimate brain interface

The robotic brain surgeon featured in the 2013 movie “Ender’s Game” is no fictional brain-fixing machine. The open-source surgical platform, known as Raven II, has already starred in several brain procedures to date. It is not too hard now to imagine machines like this eventually installing brain-controlled interfaces (BCIs). What is missing from this futuristic vision is what happens at the business end, where the bots meet the brain. This unfolding drama, which began with crude electrode array stimulation, now parlays a combination of optical technologies that permits both transfection of neurons with interface machinery and their subsequent control. A huge advance in automating the transfection part, and reducing the time it takes by orders of magnitude, has been reported today in Nature’s Scientific Reports by a Scottish group from the University of St Andrews. Their new technology delivers DNA plasmids containing optical indicators and ion channels to individual neurons using arrays of femtosecond laser beams—and they can do this as fast as they can reach out and touch the neuron profiles on the screen in front of them.

Femtosecond laser pulses, by concentrating optical power into a short interval, combine exacting control with a minimum use of power. By implication, there is also a minimum of damage to surrounding tissue due to errant or otherwise prolonged irradiation. One difficulty with femtosecond lasers has been that an exotic system of free-space beam delivery optics is often called for. This is because the short pulses are significantly transformed by passage through standard fiber optics. As the authors now show, off-the-shelf instruments, like two-photon scanning or uncaging microscopes, can be readily modified to perform fast, automated laser persuasion of cell membranes to allow DNA to slip inside.

In order to deliver various molecular constructs to single cells, protocols including manual injection, modified patch-clamping, lipofection, and electroporation have been developed. Unfortunately, these methods do not scale well if you want to hotwire a bunch of cells in a short time. Transfecting neighboring cells with different reporters or channels, or alternatively the same cell but sequentially with different elements, would be off the table with these methods. Trying to transfect neurons in the brain rather than large egg cells, and using naked DNA rather than vector-based DNA, or RNA, involves additional considerations.

Using their custom-developed touchscreen and image-guided femtobeam, the researchers were able to target up to 100 cells per minute. At a maximum recommended beam power of 77 milliwatts, they could also target a 4x4 array of points (on a 4 µm grid) to deliver 12-200 femtosecond pulses over 60 ms metapulse intervals. Depending on the specifics of the protocol, transfection yields from 50-100 percent could be obtained. These numbers were for dividing cells, in which the nuclear membrane is transiently dispersed and therefore doesn’t present an additional barrier to the DNA. For neurons, the researchers added a nuclear membrane-targeted peptide (Nupherin) that binds with the plasmid DNA and enhances transport. In further experiments with these neurons, they successfully activated the transfected channelrhodopsin protein using blue light, and recorded subsequently evoked spikes via patch clamp.
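
The dosing geometry is easy to picture in code. The sketch below reuses the numbers quoted above (a 4x4 grid on a 4 µm pitch, pulse trains inside a 60 ms window), but the helper functions themselves are invented for illustration.

```python
# Hypothetical dosing helpers built around the reported parameters.
from itertools import product

GRID_PITCH_UM = 4.0   # 4 µm spacing between target points
EXPOSURE_MS = 60.0    # one "metapulse" exposure window

def target_grid(n: int = 4) -> list[tuple[float, float]]:
    """(x, y) offsets in micrometres for an n x n array of dosing sites."""
    return [(i * GRID_PITCH_UM, j * GRID_PITCH_UM)
            for i, j in product(range(n), repeat=2)]

def pulse_schedule(n_pulses: int) -> list[float]:
    """Evenly space a train of n_pulses within one exposure window (ms)."""
    gap = EXPOSURE_MS / n_pulses
    return [round(k * gap, 3) for k in range(n_pulses)]

sites = target_grid()        # 16 sites in a 4x4 array
train = pulse_schedule(12)   # e.g., a 12-pulse train across 60 ms
```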

To really squeeze the technique into greater productivity, the researchers hope to implement spatial light modulators for precise and independent control of multiple beams. For an in vivo or behaving scenario, the researchers point to fairly recent work where fiber-based femtosecond transfection has been made to work in CHO-K1 cells at efficiencies of 74 percent. Using a compact, endoscope-like system with 6000 individual cores, this “nanosurgical instrument” was also used for simultaneous microfluidic delivery of drugs to localized areas under direct imaging.

I asked lead author Maciej Antkowiak whether he thought there would be significant distortion in migrating to fiber-based delivery. He said that at 200 fs, pulse stretching is much less of a concern than for the shorter 12-20 fs pulses. He also mentioned that in the high-repetition regime (76 MHz), femtosecond transfection appears to involve cumulative biochemical changes in the cell membrane.

Astounding reports of so-called glowing memories have also been trickling in this week along with the larger wake from the recent Society for Neuroscience meeting. This kind of selective optical interrogation of complete circuits in the brain will take mere connectomics into full-blown activity maps, and then, to control. As it has become apparent through omni-labelling techniques like Brainbow I and II, total label of the synaptic jungle is hardly better than no label. The ability to pick and choose multiple combinatorial activators or other modifiers, by finger or algorithm, as a prelude to thought itself, will be the quickest path to workable BCIs and our subsequent understanding of the brain.

Filed under Raven II ion channels femtosecond laser optogenetics neurons nupherin neuroscience science

101 notes

Study looks at better prediction for epileptic seizures through adaptive learning approach

A UT Arlington assistant engineering professor has developed a computational model that can more accurately predict when an epileptic seizure will occur next based on the patient’s personalized medical information.

The research conducted by Shouyi Wang, an assistant professor in the Department of Industrial and Manufacturing Systems Engineering, has been published in the paper “Online Seizure Prediction Using an Adaptive Learning Approach” in IEEE Transactions on Knowledge and Data Engineering.

Wang’s model analyzes electroencephalography, or EEG, readings from an individual to predict future seizures. Early warnings could lead a patient to use medicine to combat an oncoming seizure, he said.

“The challenge with seizure prediction has been that every epileptic is different. Some patients suffer several seizures a day. Others will go several years without experiencing a seizure,” Wang said. “But if we use the EEG readings to build a personalized data profile, we’re better able to understand what’s happening to that person.”

Epilepsy is one of the most common neurological disorders, characterized by recurrent seizures. Epilepsy and seizures affect nearly 3 million Americans at an estimated annual cost of $17.6 billion in direct and indirect costs, according to the national Epilepsy Foundation. About 10 percent of the American population will experience a seizure in their lifetime, the agency says.

Wang teamed with Wanpracha Art Chaovalitwongse of the University of Washington and Stephen Wong of the Rutgers Robert Wood Johnson Medical School for the research.

Wang said early indications are that the new computational model could provide 70 percent accuracy or better and give a prediction horizon of about 30 minutes before the actual seizure would occur.

The current model collects data through a cap embedded with EEG wires. Wang’s team is working to develop a less obtrusive EEG cap that will record and transmit readings to a box for easy data download or transmission.

Victoria Chen, professor and chairwoman of the Industrial and Manufacturing Systems Engineering Department, said Wang’s work in the area of bioinformatics offers hope for the many people who suffer from epilepsy.

“This computational model might be used to predict other life-threatening episodes of diseases,” Chen said.

Wang said his model builds upon an adaptive learning framework and is capable of achieving increasingly accurate prediction performance for each individual patient by collecting more and more personalized medical data.
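
A minimal sketch of that adaptive idea, assuming nothing about Wang’s actual features or model: an online classifier is updated after each labeled EEG window, so its predictions become increasingly tuned to one patient. The feature functions here are stand-ins; real pipelines use far richer EEG descriptors.

```python
# Toy online seizure-risk model (an illustration of adaptive learning,
# not the published method).
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(loss="log_loss")   # logistic regression, online updates
classes = np.array([0, 1])             # 0 = baseline, 1 = pre-seizure

def features(eeg_window: np.ndarray) -> np.ndarray:
    """Stand-in per-channel features: mean power and line length."""
    power = (eeg_window ** 2).mean(axis=1)
    line_length = np.abs(np.diff(eeg_window, axis=1)).sum(axis=1)
    return np.concatenate([power, line_length])

def update(eeg_window: np.ndarray, label: int) -> None:
    """Fold one labeled window into the patient's personalized model."""
    clf.partial_fit(features(eeg_window)[None, :], [label], classes=classes)

def risk(eeg_window: np.ndarray) -> float:
    """Estimated probability that a seizure follows within the horizon."""
    return float(clf.predict_proba(features(eeg_window)[None, :])[0, 1])
```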

“As a society, we’ve gotten really good at looking at the big picture,” Wang said. “We can tell you the likelihood of suffering a heart attack if you’re over a certain age, of a certain weight and if you smoke. But we have only started to personalize that data for individuals who are all different.”

Filed under epileptic seizure adaptive learning epilepsy EEG medicine technology neuroscience science

137 notes

Who learns from the carrot, and who from the stick?

To flexibly deal with our ever-changing world, we need to learn from both the negative and positive consequences of our behaviour. In other words, from punishment and reward. Hanneke den Ouden from the Donders Institute in Nijmegen demonstrated that serotonin and dopamine related genes influence how we base our choices on past punishments or rewards. This influence depends on which gene variant you inherited from your parents. These results were published in Neuron on 20 November.

The brain chemicals dopamine and serotonin partly determine our sensitivity to reward and punishment. At least, this was a common assumption. Hanneke den Ouden and Roshan Cools investigated this assumption together with colleagues from the Donders Institute and New York University. Den Ouden explains: ‘We used a simple computer game to test the genetic influence of the genes DAT1 and SERT, as these genes influence dopamine and serotonin. We discovered that the dopamine gene affects how we learn from the long-term consequences of our choices, while the serotonin gene affects our choices in the short term.’

Online game

‘In nearly 700 people we analysed which variant of the SERT and the DAT1 genes they had’, Den Ouden describes. ‘Using an online game, we investigated how well people are able to adjust their choice strategy after receiving a reward or a punishment.’ The players would repeatedly choose one of two symbols. Symbol A usually resulted in a reward whereas symbol B usually resulted in punishment. Halfway through the game, these rules were reversed. The game allowed the researchers to measure how flexible people are in adjusting their choices when the rules change. But it also showed whether people impulsively change their choice when the computer happened to give misleading feedback.
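
The task and one standard way of modeling it fit in a few lines. The sketch below is my own toy version, not the study’s analysis: a small Q-learning agent with separate learning rates for rewards and punishments, playing the two-symbol game with a mid-session reversal. Raising alpha_punish makes the agent switch immediately after a punishment, while lowering alpha_reward makes it slower to abandon a previously rewarded choice, echoing the two gene-linked tendencies described next.

```python
# Toy probabilistic reversal-learning task with a two-learning-rate agent.
import math
import random

def play(n_trials: int = 200, alpha_reward: float = 0.3,
         alpha_punish: float = 0.6, beta: float = 5.0) -> dict[str, float]:
    q = {"A": 0.0, "B": 0.0}
    good = "A"                                # the mostly-rewarded symbol
    for t in range(n_trials):
        if t == n_trials // 2:
            good = "B"                        # halfway rule reversal
        # Softmax choice between the two symbols.
        p_a = 1.0 / (1.0 + math.exp(-beta * (q["A"] - q["B"])))
        choice = "A" if random.random() < p_a else "B"
        # Probabilistic feedback: the good symbol rewards 80% of the time.
        p_win = 0.8 if choice == good else 0.2
        rewarded = random.random() < p_win
        outcome = 1.0 if rewarded else -1.0
        # Separate sensitivities to reward and punishment.
        alpha = alpha_reward if rewarded else alpha_punish
        q[choice] += alpha * (outcome - q[choice])   # delta-rule update
    return q
```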

Different genes, different strategies

Den Ouden: ‘Different players use different strategies, which depend on their genetic material. People’s tendency to change their choice immediately after receiving a punishment depends on which serotonin gene variant they inherited from their parents. The dopamine gene variant, on the other hand, exerts influence on whether people can stop themselves making the choice that was previously rewarded, but no longer is.’

This study shows that dopamine and serotonin are important for different forms of flexibility associated with receiving reward and punishment. Many neuropsychiatric disorders caused by abnormal dopamine and/or serotonin levels are associated with forms of inflexibility, for example addiction, anxiety, or Parkinson’s disease. So this study not only tells us more about the heritability of our choice behaviour; a better understanding of the relationship between brain chemicals and behaviour in healthy people will ultimately help to provide us with better insight into these neuropsychiatric disorders.

(Source: ru.nl)

Filed under serotonin dopamine reward punishment learning neuroscience science

95 notes

Rare disease yields clues about broader brain pathology

Alexander disease is a devastating brain disease that almost nobody has heard of — unless someone in the family is afflicted with it. Alexander disease strikes young or old, and in children destroys white matter in the front of the brain. Many patients, especially those with early onset, have significant intellectual disabilities.

(Image: A mutant gene that causes the deadly Alexander disease creates an overgrowth of the protein GFAP in mouse brain cells called astrocytes (right) compared to normal brain cells (left))

Regardless of the age when it begins, Alexander disease is always fatal. It typically results from mutations in a gene known as GFAP (glial fibrillary acidic protein), leading to the formation of fibrous clumps of protein inside brain cells called astrocytes.

Classically, astrocytes and other glial cells were considered “helpers” that nourish and protect the neurons that do the actual communication. But in recent years, it’s become clear that glial cells are much more than passive bystanders, and may be active culprits in many neurological diseases.

Now, in a report in the Journal of Neuroscience, researchers at UW-Madison show that Alexander disease also affects neurons, and in a way that impacts several measures of learning and memory.

Mice were engineered to contain the same mutation in GFAP that is found in human patients. Their astrocytes spontaneously increased production of GFAP, the same response found after many types of injury or disease in the brain. In Alexander disease, the result is an increase in mutant GFAP that is “toxic to the cell, and unfortunately astrocytes respond by making more GFAP,” says first author Tracy Hagemann, an associate scientist with the university’s Waisman Center.

While GFAP is usually found in astrocytes, it also occurs in neural stem cells, a population of cells that persist in some areas of the brain to continually spawn new neurons throughout adulthood. In the mouse versions of Alexander disease, neural stem cells are present, but they fail to develop into neurons, Hagemann says. “Think of a garden where your green beans never sprouted. Was it too cold for them to sprout, or was there another problem? Something similar is happening with these neural stem cells. They are present, but inert, and we’re not sure why.”

The shortage of new neurons could explain why the mice with excess GFAP failed a test that required them to remember the location of a submerged platform in a tub of water.

The report is “the first to suggest that the problems in Alexander disease extend beyond just the white matter and astrocytes, and may provide a clue to the problems with learning and memory that are such prominent features in the human disease,” says lab leader Albee Messing, a professor of comparative biosciences in the UW School of Veterinary Medicine.

One immediate question that the team will try to answer is whether the same defect in stem cells can be found in autopsy samples stored over many years to allow just this kind of investigation.

Still to be clarified is whether the mutation affects the neural stem cells directly, or whether it acts through other astrocytes that are nearby. “We do know that the astrocytes become activated with this GFAP mutation,” Hagemann says. “That activation — a kind of inflammation — could be making the environment hostile to young neurons. Or the mutation could be changing the neural stem cells themselves in some other way.

"Medicine advances by teasing things apart," says Hagemann. "A single mutation can work in different ways — through different chains of cause and effect leading to different symptoms of a disease. In this case it’s like the old question of nature versus nurture. Was the stem cell born bad — was it genetically doomed? Or were the reactive astrocytes in the neighborhood a toxic influence? Or both? This is an important question for Alexander disease and other brain deteriorating disorders, especially with the current focus on stem cells as a source for new neurons and therapy."

Already, the Waisman group is screening drugs that might slow GFAP production. Eventually, Hagemann says, the work may illuminate the role of astrocyte dysfunction in other neural diseases featuring aggregates of misfolded proteins, including ALS, Parkinson’s, and Alzheimer’s disease.

(Source: news.wisc.edu)

Filed under alexander disease astrocytes gene mutation glial cells GFAP neuroscience science

166 notes

Natural Compound Mitigates Effects of Methamphetamine Abuse

Studies have shown that resveratrol, a natural compound found in colored vegetables, fruits and especially grapes, may minimize the impact of Parkinson’s disease, stroke and Alzheimer’s disease in those who maintain healthy diets or who regularly take resveratrol supplements. Now, researchers at the University of Missouri have found that resveratrol may also block the effects of the highly addictive drug, methamphetamine.

(Image: Wikipedia)

Dennis Miller, associate professor in the Department of Psychological Sciences in the College of Arts & Science and an investigator with the Bond Life Sciences Center, and researchers in the Center for Translational Neuroscience at MU, study therapies for drug addiction and neurodegenerative disorders. Their research targets treatments for methamphetamine abuse and has focused on the role of the neurotransmitter dopamine in drug addiction. Dopamine levels in the brain surge after methamphetamine use; this increase is associated with the motivation to continue using the drug, despite its adverse consequences. However, with repeated methamphetamine use, dopamine neurons may degenerate causing neurological and behavioral impairments, similar to those observed in people with Parkinson’s disease.

“Dopamine is critical to the development of methamphetamine addiction—the transition from using a drug because one likes or enjoys it to using the drug because one craves or compulsively uses it,” Miller said. “Resveratrol has been shown to regulate these dopamine neurons and to be protective in Parkinson’s disease, a disorder where dopamine neurons degenerate; therefore, we sought to determine if resveratrol could affect methamphetamine-induced changes in the brain.”

Using procedures established by Parkinson’s and Alzheimer’s disease research, rats received resveratrol once a day for seven days in about the same concentration as a human would receive from a healthy diet. After a week of resveratrol, researchers measured how much dopamine was released by methamphetamine. Researchers found that resveratrol significantly diminished methamphetamine’s ability to increase dopamine levels in the brain. Furthermore, resveratrol diminished methamphetamine’s ability to increase activity in mice, a behavior that models the hyperactivity observed in people who use the stimulant.

“People are encouraged by physicians and dieticians to include resveratrol-containing products in their diet and protection against methamphetamine’s harmful effects may be an added bonus,” Miller said. “Additionally, there are no consistently effective treatments to help people who are dependent on methamphetamine. Our initial research suggests that resveratrol could be included in a treatment regimen for those addicted to methamphetamine and it has potential to decrease the craving and desire for the drug. Resveratrol is found in good, colorful foods, and has few side effects. We all ought to consume resveratrol for good brain health; our research suggests it may also prevent the changes in the brain that occur with the development of drug addiction.”

(Source: munews.missouri.edu)

Filed under resveratrol methamphetamine drug addiction dopamine neurodegenerative diseases neuroscience science

54 notes

Attractants prevent nerve cell migration

A vision is to implant nerve precursor cells in the diseased brains of patients with Parkinson’s and Huntington’s diseases, whereby these cells are to assume the function of the cells that have died off. However, the implanted nerve cells frequently do not migrate as hoped, rather they hardly move from the site. Scientists at the Institute for Reconstructive Neurobiology at Bonn University have now discovered an important cause of this: Attractants secreted by the precursor cells prevent the maturing nerve cells from migrating into the brain. The results are presented in the journal “Nature Neuroscience.”

One approach for treating patients with Parkinson’s or Huntington’s disease is to replace defective brain cells with fresh cells. To do this, immature neuronal precursor cells are implanted into the diseased brain, where they are meant to mature on-site and take over the function of the defective cells. “However, it has been shown again and again that the nerve cells generated by the transplant barely migrate into the brain but remain largely confined to the implant site,” says Prof. Dr. Oliver Brüstle, Director of the Institute for Reconstructive Neurobiology at Bonn University. Scientists have long believed that this effect reflects unfavorable conditions in the mature brain for the uptake of additional nerve cells.

Immature and more mature nerve cells attract each other like magnets

The researchers from the Institute for Reconstructive Neurobiology of Bonn University have now discovered a wholly unexpected mechanism to which the deficient migratory behavior of the graft-derived neurons can be attributed. The implanted cells mature at different rates, so the graft contains a mixture of the two stages. “Like magnets, the precursor cells which are still largely immature attract the nerve cells which have already matured further, which is why there is a sort of agglomeration,” says lead author Dr. Julia Ladewig, who was recently awarded a research prize of 1.25 million euros by the North Rhine-Westphalian Stem Cell Network, which is supported by the State Ministry of Science and Research.

The cause of the attractive force which has remained hidden to date involves chemical attractants which are secreted by the precursor cells. “In this way, the nerve precursor cells prevent the mature brain cells from penetrating further into the tissue,” says Dr. Philipp Koch, who performed the primary work for the study as an additional lead author, together with Dr. Ladewig.

The scientists had initially observed that the more precursor cells the transplant contained, the worse the nerve cells migrated. In a second step, the researchers from the Institute for Reconstructive Neurobiology at Bonn University were able to identify and inactivate the attractants responsible for the agglomeration of mature and immature neurons. When the scientists deactivated the receptor tyrosine kinase ligands FGF2 and VEGF with inhibitors, mature nerve cells migrated better into the animal brains and dispersed over much larger areas.

Promising universal approach for transplants

“This is a promising new approach to solve an old problem in neurotransplantation,” Prof. Brüstle summarizes. Through the inhibition of attractants, the migration of implanted nerve precursor cells into the brain can be significantly improved. As the researchers have shown in various models with precursor cells from animals and humans, the mechanism is a fundamental principle which also functions across species. “However, more research is still needed to transfer the principle into clinical application,” says Prof. Brüstle.

(Source: www3.uni-bonn.de)

Filed under neurodegenerative diseases nerve cells precursor cells attractants neurotransplantation neuroscience science

307 notes

A critical theory in brain development

Experiments performed in the 1960s showed that rearing young animals with one eye closed dramatically altered brain development such that the parts of the visual cortex that would normally process information from the closed eye instead process information from the open eye. These effects can be induced only within a specific period of time—a ‘critical’ period during which the developing nervous system is particularly sensitive to its environment. 

Subsequent work has shown that the onset of the critical period in the primary visual cortex requires the maturation of circuits containing neurons that synthesize and release an inhibitory neurotransmitter called gamma-aminobutyric acid (GABA). Now, Taro Toyoizumi and colleagues from the RIKEN Brain Science Institute have presented a theory that explains how this inhibition triggers the critical period.

The theory is based on a computer model of the primary visual cortex containing neurons that receive and process information from the eyes. The model incorporates spontaneous and visually evoked neuronal activity as reported in earlier studies. The simulation also incorporates an activity-dependent form of synaptic plasticity—the process by which connections between neurons are strengthened or weakened in response to neuronal activity. 

During early development, spontaneous activity accounts for the majority of activity in the primary visual cortex. With time, however, spontaneous neuronal activity decreases whereas activity evoked by visual experiences increases. The new theory states that the critical period is triggered by the maturation of inhibitory neuronal circuitry, which suppresses the spontaneous activity in the primary visual cortex relative to the activity driven by incoming visual information.
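
A numerical caricature of the theory’s central quantity may help (my own illustration; all constants are invented): spontaneous activity decays as inhibition matures, evoked activity grows with experience, and plasticity is gated on once the evoked component dominates.

```python
# Toy model of the critical-period gate described above.
import numpy as np

days = np.arange(0, 60)
spontaneous = np.exp(-days / 20.0)        # suppressed as inhibition matures
evoked = 1.0 - np.exp(-days / 15.0)       # grows with visual experience

ratio = spontaneous / np.maximum(evoked, 1e-9)
critical_period_open = ratio < 1.0        # gate: evoked dominates

onset_day = int(days[critical_period_open][0])
# A drug like diazepam that strengthens inhibition would scale
# `spontaneous` down, shifting the computed onset earlier, which is the
# direction of the mouse result described next.
```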

The researchers turned to mice to find evidence to support the theory. Using electrodes to record primary visual cortex activity in freely moving mice, they showed, as the theory predicted, that the anti-anxiety drug diazepam, which enhances inhibitory activity, lowered the ratio of spontaneous to visual activity in mutant mice with weak inhibition—those lacking the gene encoding glutamic acid decarboxylase-65, an enzyme for synthesizing GABA. Such mice are known not to enter the critical period even in adulthood, but can be induced to do so by administration of diazepam.

Importantly, the theory explains distinct experience-dependent plasticity that takes place before the onset of the critical period, which has been observed in experiments but not explained by other theories. “In the future,” says Toyoizumi, “it would be useful to be able to control developmental plasticity stages by manipulating spontaneous activity in specific brain areas, which could have therapeutic applications.”

Filed under brain development synaptic plasticity neurotransmitters visual cortex vision neurons neuroscience science

242 notes

Carnegie Mellon Computer Searches Web 24/7 To Analyze Images and Teach Itself Common Sense

A computer program called the Never Ending Image Learner (NEIL) is running 24 hours a day at Carnegie Mellon University, searching the Web for images, doing its best to understand them on its own and, as it builds a growing visual database, gathering common sense on a massive scale.

NEIL leverages recent advances in computer vision that enable computer programs to identify and label objects in images, to characterize scenes and to recognize attributes, such as colors, lighting and materials, all with a minimum of human supervision. In turn, the data it generates will further enhance the ability of computers to understand the visual world.

But NEIL also makes associations between these things to obtain common sense information that people just seem to know without ever saying — that cars often are found on roads, that buildings tend to be vertical and that ducks look sort of like geese. Based on text references, it might seem that the color associated with sheep is black, but people — and NEIL — nevertheless know that sheep typically are white.
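
One simple way such common-sense associations can be scored (a toy sketch; NEIL’s actual machinery is far richer) is pointwise mutual information over label co-occurrence, so pairs that appear together more often than chance would predict rise to the top.

```python
# Toy association mining over per-image label sets via PMI.
import math
from collections import Counter
from itertools import combinations

def association_scores(images: list[set[str]]) -> dict[tuple[str, str], float]:
    n = len(images)
    label_counts: Counter = Counter()
    pair_counts: Counter = Counter()
    for labels in images:
        label_counts.update(labels)
        pair_counts.update(combinations(sorted(labels), 2))
    scores = {}
    for (a, b), c_ab in pair_counts.items():
        p_ab = c_ab / n
        p_a, p_b = label_counts[a] / n, label_counts[b] / n
        scores[(a, b)] = math.log(p_ab / (p_a * p_b))   # PMI
    return scores

demo = [{"car", "road"}, {"car", "road", "building"}, {"duck", "pond"}]
top = sorted(association_scores(demo).items(), key=lambda kv: -kv[1])
print(top)   # pairs that co-occur above chance score highest
```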

"Images are the best way to learn visual properties," said Abhinav Gupta, assistant research professor in Carnegie Mellon’s Robotics Institute. "Images also include a lot of common sense information about the world. People learn this by themselves and, with NEIL, we hope that computers will do so as well."

A computer cluster has been running the NEIL program since late July and already has analyzed three million images, identifying 1,500 types of objects in half a million images and 1,200 types of scenes in hundreds of thousands of images. It has connected the dots to learn 2,500 associations from thousands of instances.

The public can now view NEIL’s findings at the project website, www.neil-kb.com.

The research team, including Xinlei Chen, a Ph.D. student in CMU’s Language Technologies Institute, and Abhinav Shrivastava, a Ph.D. student in robotics, will present its findings on Dec. 4 at the IEEE International Conference on Computer Vision in Sydney, Australia.

One motivation for the NEIL project is to create the world’s largest visual structured knowledge base, where objects, scenes, actions, attributes and contextual relationships are labeled and catalogued.

"What we have learned in the last 5-10 years of computer vision research is that the more data you have, the better computer vision becomes," Gupta said.

Some projects, such as ImageNet and Visipedia, have tried to compile this structured data with human assistance. But the scale of the Internet is so vast — Facebook alone holds more than 200 billion images — that the only hope to analyze it all is to teach computers to do it largely by themselves.

Shrivastava said NEIL can sometimes make erroneous assumptions that compound mistakes, so people need to be part of the process. A Google Image search, for instance, might convince NEIL that “pink” is just the name of a singer, rather than a color.

"People don’t always know how or what to teach computers," he observed. "But humans are good at telling computers when they are wrong."

People also tell NEIL what categories of objects, scenes, etc., to search and analyze. But sometimes, what NEIL finds can surprise even the researchers. It can be anticipated, for instance, that a search for “apple” might return images of fruit as well as laptop computers. But Gupta and his landlubbing team had no idea that a search for F-18 would identify not only images of a fighter jet, but also of F18-class catamarans.

As its search proceeds, NEIL develops subcategories of objects — tricycles can be for kids, for adults and can be motorized, or cars come in a variety of brands and models. And it begins to notice associations — that zebras tend to be found in savannahs, for instance, and that stock trading floors are typically crowded.

NEIL is computationally intensive, the research team noted. The program runs on two clusters of computers that include 200 processing cores.

This research is supported by the Office of Naval Research and Google Inc.

Filed under computer vision machine learning object recognition AI NEIL technology neuroscience science
