Neuroscience

Articles and news from the latest research reports.

How our brain networks: Research reveals white matter ‘scaffold’ of human brain

For the first time, neuroscientists have systematically identified the white matter “scaffold” of the human brain, the critical communications network that supports brain function.

Their work, published Feb. 11 in the open-access journal Frontiers in Human Neuroscience, has major implications for understanding brain injury and disease. By detailing the connections that have the greatest influence over all other connections, the researchers offer not only a landmark first map of core white matter pathways, but also show which connections may be most vulnerable to damage.

"We coined the term white matter ‘scaffold’ because this network defines the information architecture which supports brain function," said senior author John Darrell Van Horn of the USC Institute for Neuroimaging and Informatics and the Laboratory of Neuro Imaging at USC.

"While all connections in the brain have their importance, there are particular links which are the major players," Van Horn said.

Using MRI data from a large sample of 110 individuals, lead author Andrei Irimia, also of the USC Institute for Neuroimaging and Informatics, and Van Horn systematically simulated the effects of damaging each white matter pathway.

They found that the most important areas of white and gray matter don’t always overlap. Gray matter is the outermost portion of the brain containing the neurons where information is processed and stored. Past research has identified the areas of gray matter that are disproportionately affected by injury.

But the current study shows that the most vulnerable white matter pathways – the core “scaffolding” – are not necessarily just the connections among the most vulnerable areas of gray matter, helping explain why seemingly small brain injuries may have such devastating effects.

"Sometimes people experience a head injury which seems severe but from which they are able to recover. On the other hand, some people have a seemingly small injury which has very serious clinical effects," says Van Horn, associate professor of neurology at the Keck School of Medicine of USC. "This research helps us to better address clinical challenges such as traumatic brain injury and to determine what makes certain white matter pathways particularly vulnerable and important."

The researchers compare their brain imaging analysis to models used for understanding social networks. To get a sense of how the brain works, Irimia and Van Horn did not focus only on the most prominent gray matter nodes – which are akin to the individuals within a social network. Nor did they merely look at how connected those nodes are.

Rather, they also examined the strength of these white matter connections – that is, which connections were particularly sensitive, causing the greatest repercussions across the network when removed. The connections that created the greatest changes form the network “scaffold.”
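This kind of “virtual lesion” analysis can be sketched in a few lines of code. The example below is purely illustrative, not the authors’ code: the connectome is a random stand-in graph, and global efficiency (from the networkx library) stands in for whatever communication measure the study actually used.

```python
# Illustrative sketch: rank connections by how much their removal
# degrades network-wide communication, using global efficiency.
import networkx as nx

# Stand-in connectome: nodes are gray matter regions, edges are
# white matter pathways (real weights would come from tractography).
G = nx.erdos_renyi_graph(n=40, p=0.15, seed=1)

baseline = nx.global_efficiency(G)

# Damage each pathway in turn and measure the repercussion.
impact = {}
for u, v in list(G.edges()):
    G.remove_edge(u, v)
    impact[(u, v)] = baseline - nx.global_efficiency(G)
    G.add_edge(u, v)  # restore before testing the next pathway

# Edges whose loss causes the largest drop form the candidate "scaffold".
scaffold = sorted(impact, key=impact.get, reverse=True)[:10]
print(scaffold)
```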

"Just as when you remove the internet connection to your computer you won’t get your email anymore, there are white matter pathways which result in large scale communication failures in the brain when damaged," Van Horn said.

When white matter pathways are damaged, brain areas served by those connections may wither or have their functions taken over by other brain regions, the researchers explain. Irimia and Van Horn’s research on core white matter connections is part of a worldwide scientific effort to map the 100 billion neurons and 1,000 trillion connections in the living human brain, led by the Human Connectome Project and the Laboratory of Neuro Imaging at USC.

Irimia notes that “these new findings on the brain’s network scaffold help inform clinicians about the neurological impacts of brain diseases such as multiple sclerosis, Alzheimer’s disease, as well as major brain injury. Sports organizations, the military and the US government have considerable interest in understanding brain disorders, and our work contributes to that of other scientists in this exciting era for brain research.”

Filed under white matter TBI brain injury gray matter neuroimaging connectomics neuroscience science

Brain Implants Hold Promise Restoring Combat Memory Loss

The Pentagon is exploring the development of implantable probes that may one day help reverse some memory loss caused by brain injury.

The goal of the project, still in early stages, is to treat some of the more than 280,000 troops who have suffered brain injuries since 2000, including in combat in Iraq and Afghanistan.

The Defense Advanced Research Projects Agency is focused on wounded veterans, though some research may benefit others such as seniors with dementia or athletes with brain injuries, said Geoff Ling, a physician and deputy director of Darpa’s Defense Sciences office. It’s still far from certain that such work will result in an anti-memory-loss device. Still, word of the project is creating excitement after more than a decade of failed attempts to develop drugs to treat brain injury and memory loss.

“The way human memory works is one of the great unsolved mysteries,” said Andres Lozano, chairman of neurosurgery at the University of Toronto. “This has tremendous value from a basic science aspect. It may have huge implications for patients with disorders affecting memory, including those with dementia and Alzheimer’s disease.”

At least 1.7 million people in the U.S. are diagnosed with memory loss each year, costing the nation’s economy more than $76 billion annually, according to the most recent federal health data. The Department of Veterans Affairs estimates it will spend $4.2 billion to care for former troops with brain injuries between fiscal 2013 and 2022.
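As a rough sanity check on those figures (a back-of-envelope illustration only, using the rounded totals quoted above), the per-patient numbers work out roughly as follows:

```python
# Back-of-envelope arithmetic from the figures quoted above.
diagnosed_per_year = 1.7e6   # at least 1.7 million memory-loss diagnoses
annual_cost = 76e9           # more than $76 billion annually
print(f"~${annual_cost / diagnosed_per_year:,.0f} per diagnosis per year")

va_total = 4.2e9             # VA projection for fiscal 2013-2022
print(f"~${va_total / 10:,.0f} per year for VA brain-injury care")
```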

Filed under implants memory memory loss brain damage neuroscience science

Huntington disease prevention trial shows creatine safe, suggests slowing of progression

The first clinical trial of a drug intended to delay the onset of symptoms of Huntington disease (HD) reveals that high-dose treatment with the nutritional supplement creatine was safe and well tolerated by most study participants. In addition, neuroimaging showed a treatment-associated slowing of regional brain atrophy, evidence that creatine might slow the progression of presymptomatic HD. The Massachusetts General Hospital (MGH) study also utilized a novel design that allowed participants – all of whom were at genetic risk for the neurodegenerative disorder – to enroll without having to learn whether or not they carried the mutation that causes HD.

"More than 90 percent of those in the United States who know they are at risk for HD because of their family history have abstained from genetic testing, often because they fear discrimination or don’t want to face the stress and anxiety of knowing they are destined to develop such a devastating disease," says H. Diana Rosas, MD, of the MassGeneral Institute for Neurodegenerative Disease (MGH-MIND), lead and corresponding author of the paper that will appear in the March 11 issue of Neurology and has been released online. “Many of these individuals would still like to help find treatments, and this trial design allows them to participate while respecting their autonomy, their right not to know their personal genetic information.”

Among the ways that the mutated form of the huntingtin protein damages brain cells is by interfering with cellular energy production, leading to a depletion of ATP, the molecule that powers most biological processes. Known to help restore ATP and maintain cellular energy, creatine is being investigated to treat a number of neurological conditions – including Parkinson disease, amyotrophic lateral sclerosis and spinal cord injury. Studies in mouse models of HD showed that creatine raises brain ATP levels and protects against neurodegeneration. Previous clinical trials of creatine in symptomatic HD patients have been limited in scale, involved daily doses of 10 grams or less, and did not provide evidence of potential efficacy. Based on the results of a pilot study at MGH that evaluated doses as high as 40 grams, participants in the current study received doses of up to 30 grams daily.

The phase II PRECREST trial enrolled 64 adult participants – 19 who knew they carried the mutated form of the HD gene and 45 with a 50 percent risk of having inherited the HD mutation. Genetic testing, results of which were made available only to the study statistician and not to study staff or participants, confirmed the genetic status of those who had previously been tested and revealed an additional 26 presymptomatic carriers of the mutated gene, for a total of 47 participants with presymptomatic HD and 17 controls.

For the first 6 months of the trial, participants were randomized into two groups, regardless of gene status. One group received twice-daily oral doses of creatine, up to a maximum of 30 grams per day; the other received placebo. After that first phase, all participants received creatine for an additional 12 months. Participants were assessed at regular study visits for adverse effects, and dosage levels were adjusted, if necessary, to reduce unpleasant side effects. Additional tests – cognitive assessments, measurement of blood markers and MRI brain scans – were conducted at the trial’s outset, at 6 months and at the end of the study period.

During the first phase of the trial, more than three-quarters of those randomized to creatine tolerated a daily dose of 15 grams or more, and more than two-thirds tolerated the full 30-gram dose. Throughout the entire trial, a total of 15 participants – including several who knew they carried the HD mutation – discontinued taking creatine because of gastrointestinal discomfort, the taste of the drug, inconvenience, or the stress of being constantly reminded of their HD risk. Other than occasional diarrhea and nausea, few adverse events were associated with creatine.

In participants who carried the HD mutation, the MRI scans taken at the outset of the trial had revealed significant atrophy in regions of the cerebral cortex and basal ganglia known to be affected by the disease. Follow-up MRI scans at six months showed a slower rate of atrophy in participants taking creatine compared to those on placebo. At the end of the second phase, the rate of brain atrophy had also slowed in presymptomatic participants who started taking creatine after 6 months on placebo.
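To illustrate the shape of such a comparison – a sketch with simulated numbers, not the trial’s data or analysis code – regional atrophy rates in the two arms could be compared like this:

```python
# Simulated illustration of a two-arm comparison of atrophy rates
# (percent regional volume change over 6 months); not trial data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
placebo = rng.normal(loc=-1.2, scale=0.4, size=23)   # faster atrophy
creatine = rng.normal(loc=-0.8, scale=0.4, size=24)  # slower atrophy

t, p = stats.ttest_ind(creatine, placebo)
print(f"mean placebo {placebo.mean():.2f}%, creatine {creatine.mean():.2f}%")
print(f"t = {t:.2f}, p = {p:.4f}")
```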

In addition to suggesting that creatine could slow the progression of HD, these results also imply that neuroimaging may provide a useful biomarker of disease modification in studies of other potential treatments. While participants with the mutation had performed less well than controls on the cognitive tests at the study outset, creatine treatment had no significant effect on those measures, possibly because the tests were not sensitive enough to detect subtle changes that might occur during such a brief time period, the authors note.

"The results of this trial suggest that the prevention or delay of HD symptoms is feasible, that at-risk individuals can participate in clinical trials – even if they do not want to learn their genetic status – and that useful biomarkers can be developed to help assess therapeutic benefits," says senior author Steven Hersch, MD, PhD, of MGH-MIND. "In addition, we believe our study design sets an important precedent for other genetic diseases and will help inform discussions of how clinical research can coexist with deep concerns about genetic privacy and patient autonomy."

Filed under huntington disease creatine brain atrophy neurodegenerative diseases huntingtin neuroscience science

Optogenetic toolkit goes multicolor

Optogenetics is a technique that allows scientists to control neurons’ electrical activity with light by engineering them to express light-sensitive proteins. Within the past decade, it has become a very powerful tool for discovering the functions of different types of cells in the brain.

Most of these light-sensitive proteins, known as opsins, respond to light in the blue-green range. Now, a team led by MIT has discovered an opsin that is sensitive to red light, which allows researchers to independently control the activity of two populations of neurons at once, enabling much more complex studies of brain function.

“If you want to see how two different sets of cells interact, or how two populations of the same cell compete against each other, you need to be able to activate those populations independently,” says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT and a senior author of the new study.
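The logic of independent two-color control can be made concrete with a toy calculation. The Gaussian action spectra below are invented stand-ins, not measured opsin data; the point is only that well-separated spectra leave almost no crosstalk at each stimulation wavelength.

```python
# Toy action spectra for a blue-sensitive and a red-shifted opsin.
import numpy as np

wavelengths = np.arange(400, 701)  # nm

def action_spectrum(peak_nm, width_nm=40.0):
    """Toy Gaussian sensitivity curve for an opsin (illustrative only)."""
    return np.exp(-0.5 * ((wavelengths - peak_nm) / width_nm) ** 2)

blue_opsin = action_spectrum(470)   # hypothetical ChR2-like opsin
red_opsin = action_spectrum(630)    # hypothetical red-shifted opsin

for stim_nm, name in [(470, "blue"), (630, "red")]:
    i = stim_nm - 400
    print(f"{name} light at {stim_nm} nm: "
          f"blue-opsin drive {blue_opsin[i]:.3f}, "
          f"red-opsin drive {red_opsin[i]:.3f}")
# Near-zero cross-terms mean each wavelength activates mainly one population.
```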

The new opsin is one of about 60 light-sensitive proteins found in a screen of 120 species of algae. The study, which appears in the Feb. 9 online edition of Nature Methods, also yielded the fastest opsin, enabling researchers to study neuron activity patterns with millisecond timescale precision.

Boyden and Gane Ka-Shu Wong, a professor of medicine and biological sciences at the University of Alberta, are the paper’s senior authors, and the lead author is MIT postdoc Nathan Klapoetke. Researchers from the Howard Hughes Medical Institute’s Janelia Farm Research Campus, the University of Pennsylvania, the University of Cologne, and the Beijing Genomics Institute also contributed to the study.

Filed under optogenetics opsins brain cells neuroscience science

Finding could explain age-related decline in motor function

Scientists from the School of Medicine at The University of Texas Health Science Center at San Antonio have found a clue as to why muscles weaken with age. In a study published today in The Journal of Neuroscience, they report the first evidence that “set points” in the nervous system are not inalterably determined during development but instead can be reset with age. They observed a change in set point that resulted in significantly diminished motor function in aging fruit flies.

“The body has a set point for temperature (98.6 degrees), a set point for salt level in the blood, and other homeostatic (steady-state) set points that are important for maintaining stable functions throughout life,” said study senior author Ben Eaton, Ph.D., assistant professor of physiology at the Health Science Center. “Evidence also points to the existence of set points in the nervous system, but it has never been observed that they change, until now.”

Dr. Eaton and lead author Rebekah Mahoney, a graduate student, recorded changes in the neuromuscular junction synapses of aging fruit flies. These synapses are the junctions where motor neurons transmit electrical signals to muscles, enabling motor functions such as walking and smiling. “We observed a change in the synapse, indicating that the homeostatic mechanism had adjusted to maintain a new set point in the older animal,” Mahoney said.

The change was nearly 200 percent, and the researchers predicted that it would leave muscles more vulnerable to exhaustion.

Aside from impairing movement in aging animals, a new functional set point in neuromuscular junctions could put the synapse at risk for developing neurodegeneration — the hallmark of disorders such as Alzheimer’s and Parkinson’s diseases, Mahoney said.

“Observing a change in the set point in synapses alters our paradigms about how we think age affects the function of the nervous system,” she said.

It appears that a similar change could lead to effects on learning and memory in old age. An understanding of this phenomenon would be invaluable and could lead to development of novel therapies for those issues, as well.

(Source: uthscsa.edu)

Filed under fruit flies neurodegeneration motor function aging neuroscience science

Image caption: New details about how motor neurons die in ALS have been uncovered by a new cell-culture system that combines spinal cord or brain cells from ALS patients with human motor neurons. The culture system shows that patient astrocytes (shown here with a blue-stained nucleus) release a toxin that kills motor neurons via a recently discovered process described as a “controlled cellular explosion.” Image: Diane Re.

Toxin from Brain Cells Triggers Neuron Loss in Human ALS Model

In most cases of amyotrophic lateral sclerosis (ALS), or Lou Gehrig’s disease, a toxin released by cells that normally nurture neurons in the brain and spinal cord can trigger loss of the nerve cells affected in the disease, Columbia researchers reported today in the online edition of the journal Neuron.

The toxin is produced by star-shaped cells called astrocytes and kills nearby motor neurons. In ALS, the death of motor neurons causes a loss of control over muscles required for movement, breathing, and swallowing. Paralysis and death usually occur within 3 years of the appearance of first symptoms.

The report follows the researchers’ previous study, which found similar results in mice with a rare, genetic form of the disease, as well as in a separate study from another group that used astrocytes derived from patient neural progenitor cells. The current study shows that the toxins are also present in astrocytes taken directly from ALS patients.

“I think this is probably the best evidence we can get that what we see in mouse models of the disease is also happening in human patients,” said the study’s senior author, Serge Przedborski, MD, PhD, the Page and William Black Professor of Neurology (in Pathology and Cell Biology), Vice Chair for Research in the department of Neurology, and co-director of Columbia’s Motor Neuron Center.

The findings also are significant because they apply to the most common form of ALS, which affects about 90 percent of patients. Scientists do not know why ALS develops in these patients; the other 10 percent of patients carry a mutation in one of 27 genes known to cause the disease.

“Now that we know that the toxin is common to most patients, it gives us an impetus to track down this factor and learn how it kills the motor neurons,” Dr. Przedborski said. “Its identification has the potential to reveal new ways to slow down or stop the destruction of the motor neurons.”

In the study, Dr. Przedborski and study co-authors Diane Re, PhD, and Virginia Le Verche, PhD, associate research scientists, removed astrocytes from the brain and spinal cords of six ALS patients shortly after death and placed the cells in petri dishes next to healthy motor neurons. Because motor neurons cannot be removed from human subjects, they had been generated from human embryonic stem cells in the Project A.L.S./Jenifer Estess Laboratory for Stem Cell Research, also at CUMC.

Within two weeks, many of the motor neurons had shrunk and their cell membranes had disintegrated; about half of the motor neurons in the dish had died. Astrocytes removed from people who died from causes other than ALS had no effect on the motor neurons. Nor did other types of cells taken from ALS patients.

To confirm that the motor neurons were killed by a toxin released into the surrounding environment, the researchers immersed healthy motor neurons in culture media taken from the astrocytes. The media alone, even without astrocytes, killed the motor neurons.

How the Toxin Triggers Motor Neuron Death

The researchers have not yet identified the toxin released by the astrocytes. But they did discover the nature of the neuronal death process it triggers: the toxin sets off a biochemical cascade in the motor neurons that essentially causes them to undergo a controlled cellular explosion.

Drs. Przedborski, Re, and Le Verche found that they could prevent astrocyte-triggered motor neuron death by inhibiting one of the key components of this molecular cascade.

These findings may lead to a way to prevent motor neuron death in patients and potentially prolong life. But the therapeutic potential of such inhibition is far from clear. “For example, we don’t know if this would leave patients with living but dysfunctional neurons,” Dr. Przedborski said. The researchers are now testing the idea of inhibition in animal models of ALS.

New Human Cell Model of ALS Will Speed Identification of Potential Therapies

The development of new therapies for ALS has been disappointing, with more than 30 clinical trials ending with no new treatments since the 1995 FDA approval of riluzole.

The lack of progress may be partly because animal models used to study ALS do not completely recreate the human disease. The new all-human cell model of ALS created for the current study may improve scientists’ ability to identify useful drug targets, particularly for the most common form of the disease.

“Although there are many neurodegenerative disorders, only for a handful do we have access to a simplified model that is relevant to the disease and can therefore potentially be used for high-throughput drug screening. So this model is quite special,” Dr. Przedborski said. “Here we have a spontaneous disease phenotype triggered by the relevant tissue that causes human illness. That’s one important thing. The other important thing is that this model is derived entirely from human elements. This is probably the closest, most natural model of human ALS that we can get in a dish.”

Filed under ALS neurons neurodegenerative disorders astrocytes motor neurons neuroscience science

Gender influences symptoms of genetic disorder

A genetic disorder that affects about 1 in every 2,500 births can cause a bewildering array of clinical problems, including brain tumors, impaired vision, learning disabilities, behavioral problems, heart defects and bone deformities. The symptoms and their severity vary among patients affected by this condition, known as neurofibromatosis type 1 (NF1).

Image caption: A mutation in the gene that causes a human condition, neurofibromatosis type 1 (NF1), leads to shorter nerve cell branches (right) in the back of the eyes of female mice. The shorter branches, not seen in male mice with the mutation, make the cells more vulnerable. This may explain why girls with NF1 are more at risk of vision loss from brain tumors. (Credit: David H. Gutmann)

Now, researchers at Washington University School of Medicine in St. Louis have identified a patient’s gender as a clear and simple guidepost to help health-care providers anticipate some of the effects of NF1. The scientists report that girls with NF1 are at greater risk of vision loss from brain tumors. They also identified gender-linked differences in male mice that may help explain why boys with NF1 are more vulnerable to learning disabilities.

“This information will help us adjust our strategies for predicting the potential outcomes in patients with NF1 and recommending appropriate treatments,” said David H. Gutmann, MD, PhD, the Donald O. Schnuck Family Professor of Neurology, who treats NF1 patients at St. Louis Children’s Hospital.

The findings appear online in the Annals of Neurology.

Kelly Diggs-Andrews, PhD, a postdoctoral research associate in Gutmann’s laboratory, reviewed NF1 patient data collected at the Washington University Neurofibromatosis (NF) Center. In her initial assessment, Diggs-Andrews found that the number of boys and girls was almost equal in a group of nearly 100 NF1 patients who had developed brain tumors known as optic gliomas. But vision loss occurred three times more often in girls with these tumors.
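A hypothetical worked example of the kind of statistic behind such a claim (the counts below are invented to mirror the reported pattern – roughly equal numbers of boys and girls, vision loss about three times more frequent in girls – and are not the actual patient data):

```python
# Fisher's exact test on hypothetical vision-loss counts by sex.
from scipy.stats import fisher_exact

girls_loss, girls_ok = 15, 35   # hypothetical: 15 of 50 girls lost vision
boys_loss, boys_ok = 5, 45      # hypothetical: 5 of 50 boys lost vision

odds, p = fisher_exact([[girls_loss, girls_ok], [boys_loss, boys_ok]])
print(f"odds ratio {odds:.2f}, p = {p:.4f}")
```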

With help from David Wozniak, PhD, research professor of psychiatry, the scientists looked for an explanation in Nf1 mice (which, like NF1 patients, have a mutation in their Nf1 gene). They found that more nerve cells died in the eyes of female mice, and they linked the increased cell death to low levels of cyclic AMP, a chemical messenger that plays important roles in nerve function and health in the brain. In addition, Wozniak discovered that only female Nf1 mice had reduced vision, paralleling what was observed in children with NF1.

Two previous studies have shown that boys with NF1 are at higher risk of learning disorders than girls, including spatial learning and memory problems. To look for the causes of this gender-related difference, the scientists first confirmed that Nf1 mice had learning problems by testing the ability of the mice to find a hidden platform after training. After multiple trials, female Nf1 mice quickly found the hidden platform. In striking contrast, the male Nf1 mice did not, revealing that they had deficits in spatial learning and memory.

When the researchers examined the brain regions involved in learning and memory in the Nf1 mice, they identified biochemical abnormalities in the males but not in the females.

“We’re currently working to determine whether differences in the sex hormones are responsible for these abnormalities in vision and memory,” Gutmann said. “We’re talking about a disorder in young kids and in mice, where we normally would not expect sex hormones to play a major role, but we can’t rule them out yet.”

If hormones are responsible for these gender-linked distinctions in NF1, treatments that block hormonal function may be an option for use in patients with NF1, Gutmann added. 

“Moreover, these studies identify sex as one important factor that helps to predict clinical outcomes, such as vision loss and problems in cognitive function, in children with NF1,” Gutmann said. “Further understanding of the interplay between sex and NF1 may change the way we manage individuals with this common brain tumor predisposition syndrome.”

(Source: news.wustl.edu)

Filed under neurofibromatosis neurofibromatosis type 1 genetic disorders gender neuroscience science

Image caption: An image of the left and right sided habenular nuclei of larval zebrafish showing left/right structural asymmetries in the processes of neurons (pink) and their connections (blue). (Credit: Ana Faro/Tom Hawkins/Steve Wilson/UCL)

Brain asymmetry improves processing of sensory information

Fish that have symmetric brains show defects in processing information about sights and smells, according to the results of a new study into how asymmetry in the brain affects processing of sensory information.

It’s widely believed that the left and right sides of the brain have slightly different roles in cognition and in regulating behaviour. However, scientists don’t know whether these asymmetries actually matter for the efficient functioning of the brain.

Now, a team from UCL and KU Leuven in Belgium has shown that, in zebrafish at least, loss of brain asymmetry can have significant consequences for sensory processing, raising the possibility that defects in the development of brain functions on either the left or right side of the brain could cause cognitive dysfunction. The study is published today in Current Biology.

Professor Steve Wilson, senior author of the study from the UCL Department of Cell & Developmental Biology, said: “We don’t know whether asymmetries actually matter for the efficient functioning of the brain. For instance, if your brain was symmetric, would it work any less well than it normally does?

“This is potentially an important issue as brain-imaging studies in various neurological conditions have shown alterations in normally asymmetric patterns of neuronal activity.”

In their study the team used two-photon high resolution microscopy to image the activity of individual neurons in a part of the brain called the habenulae in larval zebrafish. This region of the brain shows asymmetries in many different vertebrates and is involved in mediating addiction, fear and reward pathways and probably influences numerous behaviour patterns.

In zebrafish habenulae most neurons responding to a light stimulus are on the left whereas most responding to odour are on the right. Using this knowledge to their advantage, scientists bred fish in which habenular asymmetry was reversed and fish with double-right and double-left sided habenulae. They then asked how the habenular neurons responded to visual or olfactory stimuli in these different conditions. 

They found that if the direction of brain asymmetry was reversed, the functional properties of the habenular neurons were also reversed, whereas double-left and double-right sided brains almost completely lacked habenular responsiveness to odour or light respectively.
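The pattern of results can be summarized in a toy table. The response fractions below are fabricated for illustration only; they follow the qualitative pattern described above, not the study’s measurements.

```python
# Toy summary: fraction of habenular neurons on each side responding
# to light vs. odour, per genotype (fabricated values).
import pandas as pd

data = pd.DataFrame({
    "genotype": ["wild-type", "wild-type", "reversed", "reversed",
                 "double-left", "double-left", "double-right", "double-right"],
    "side": ["left", "right"] * 4,
    "light_frac": [0.70, 0.10, 0.10, 0.70, 0.70, 0.70, 0.05, 0.05],
    "odour_frac": [0.10, 0.65, 0.65, 0.10, 0.05, 0.05, 0.65, 0.65],
})
# Wild-type: light responses lateralized left, odour right. Reversed:
# the mapping flips. Double-left: odour responses largely absent;
# double-right: light responses largely absent.
print(data.pivot(index="genotype", columns="side"))
```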

Dr Elena Dreosti, first author of the study, also from UCL Department of Cell & Developmental Biology, said: “These results show that loss of brain asymmetry can have significant consequences upon sensory processing and circuit function”.

The research raises the possibility that defects in the establishment of brain lateralization could indeed be causative of cognitive or other symptoms of brain dysfunction.

Filed under zebrafish neural activity brain lateralization brain asymmetry neuroscience science

Study provides surprising new clue to the roots of hunger

While the function of eating is to nourish the body, this is not what actually compels us to seek out food. Instead, it is hunger, with its stomach-growling sensations and gnawing pangs that propels us to the refrigerator – or the deli or the vending machine. Although hunger is essential for survival, abnormal hunger can lead to obesity and eating disorders, widespread problems now reaching near-epidemic proportions around the world.

Over the past 20 years, Beth Israel Deaconess Medical Center (BIDMC) neuroendocrinologist Bradford Lowell, MD, PhD, has been untangling the complicated jumble of neurocircuits in the brain that underlie hunger, working to create a wiring diagram to explain the origins of this intense motivational state. Key among his findings has been the discovery that agouti-related peptide (AgRP) expressing neurons – a group of nerve cells in the brain’s hypothalamus – are activated by caloric deficiency, and when either naturally or artificially stimulated in animal models, will cause mice to eat voraciously after conducting a relentless search for food.

Now, in a new study published online this week in the journal Nature, Lowell’s lab has made the surprising discovery that the hunger-inducing neurons that activate these AgRP neurons are located in the paraventricular nucleus — a brain region long thought to cause satiety, or feelings of fullness. This unexpected finding not only provides a critical addition to the overall wiring diagram, but adds an important extension to our understanding of what drives appetite.

"Our goal is to understand how the brain controls hunger," explains Lowell, an investigator in BIDMC’s Division of Endocrinology, Diabetes and Metabolism and Professor of Medicine at Harvard Medical School. "Abnormal hunger can lead to obesity and eating disorders, but in order to understand what might be wrong – and how to treat it – you first need to know how it works. Otherwise, it’s like trying to fix a car without knowing how the engine operates."

Hunger is notoriously complicated and questions abound: Why do the fed and fasted states of your body increase or decrease hunger? And how do the brain’s reward pathways come into play – why, as we seek out food, especially after an otherwise complete meal, do we prefer ice cream to lettuce?

"Psychologists have explained how cues from the environment and from the body interact, demonstrating that food and stimuli linked with food [such as a McDonald’s sign] are rewarding and therefore promote hunger," explains Lowell. "It’s clear that fasting increases the gain on how rewarding we find food to be, while a full stomach decreases this reward. But while this model has been extremely important in understanding the general features of the ‘hunger system,’ it’s told us nothing about what’s inside the ‘black box’ – the brain’s neural circuits that actually control hunger."

To deal with this particularly complex brain region – a dense and daunting tangle of circuits resembling a wildly colorful Jackson Pollock painting – the Lowell team is taking a step-by-step approach to find out how the messages indicating whether the body is in a state of feeding or fasting enter this system. Their search has been aided by a number of extremely powerful technologies, including rabies circuit mapping and channelrhodopsin-assisted circuit mapping, which enable their highly specific, neuron-by-neuron analysis of the region.

"By making use of these new technologies, we are able to follow the synapses, follow the axons, and see how it all works," says Lowell. "While this sounds like a relatively straightforward concept, it’s actually been a huge challenge for the neuroscience field."

In this new paper, first authors Michael Krashes, PhD, and Bhavik Shah, PhD, postdoctoral fellows in the Lowell lab, employed rabies circuit mapping, a technology in which a modified version of the rabies virus is engineered to “infect” just one type of neuron – in this case, the AgRP neurons that drive hunger. The virus moves upstream one synapse and identifies all neurons that are providing input to AgRP starter neurons. Then, using a host of different neuron-specific cre-recombinase expressing mice (a group of genetically engineered animals originally developed in the Lowell lab) the investigators were able to map inputs to just these nerve cells, and then manipulate these upstream neurons so that they could be targeted for activation by an external stimulus.

"We wanted to know, of all the millions of neurons in a mouse brain, which provided input to the AgRP neurons," explains Lowell. "And the shocking result was that there were only two sites in the brain that were involved – the dorsal medial hypothalamus and the paraventricular nucleus, with the input from the paraventricular neurons shown to be extremely strong."

With this new information, the investigators now had a model to pursue. “We hypothesized that neurons in the paraventricular nucleus were communicating with and turning on the AgRP neurons. We developed mice that expressed cre-recombinase in many subsets of the paraventricular neurons and then, mapping the neurons one-by-one, we determined which was talking to which,” says Lowell. Their results revealed that subsets of neurons expressing thyrotropin-releasing hormone (TRH) and pituitary adenylate cyclase-activating polypeptide (PACAP) were in on the neuronal chatter.

Finally, through a chemogenetic technique known as DREADDs – Designer Receptors Exclusively Activated by Designer Drugs – the authors used chemicals to specifically and selectively stimulate or inhibit these upstream neurons in the animal models. The fed mice, which had already consumed their daily meal and otherwise had no interest in food, proceeded to search out and voraciously eat after DREADD stimulation. Conversely, the fasting mice – which should have been hungry after a period of no food – ate very little when these upstream neurons were turned off.
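Schematically, the behavioural readout looks something like the sketch below; the intake values are invented for illustration, and only the qualitative pattern follows the description above.

```python
# Invented food-intake values illustrating the DREADD result pattern.
intake_g = {
    ("fed", "control"): 0.1,       # sated mice barely eat
    ("fed", "activated"): 1.2,     # activation drives voracious eating
    ("fasted", "control"): 1.5,    # hungry mice eat a lot
    ("fasted", "inhibited"): 0.3,  # inhibition blunts hunger
}
for (state, manipulation), grams in intake_g.items():
    print(f"{state:>6} + {manipulation:<9}: {grams:.1f} g eaten")
```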

"This has led us to the discovery of a novel, previously unknown means of activating AgRP neurons and producing hunger," explains Lowell. "Surprisingly, these hunger-inducing neurons were found in a region of the brain which has long been thought to have the opposite effect – causing satiety. This unexpected discovery, made possible only through the use of the new wiring diagram-elucidating technologies, highlights the importance of following the labeled neuronal lines of information flow. We are getting closer and closer to completing our wiring diagram, and the nearer we come to understanding how it all works, the better our chances of being able to treat obesity and eating disorders, the consequences of abnormal hunger."

(Source: eurekalert.org)

Filed under hunger AgRP neurons eating disorders hypothalamus neuroscience science

143 notes

Computer models help decode cells that sense light without seeing 
Researchers have found that the melanopsin pigment in the eye is potentially more sensitive to light than its more famous counterpart, rhodopsin, the pigment that allows for night vision.
For more than two years, the staff of the Laboratory for Computational Photochemistry and Photobiology (LCPP) at Ohio’s Bowling Green State University (BGSU) has been investigating melanopsin, a retinal pigment capable of sensing light changes in the environment, informing the nervous system and synchronizing it with the day/night rhythm. Most of the study’s complex computations were carried out on powerful supercomputer clusters at the Ohio Supercomputer Center (OSC).
The research recently appeared in the Proceedings of the National Academy of Sciences USA, in an article edited by Arieh Warshel, Ph.D., of the University of Southern California. Warshel and two other chemists received the 2013 Nobel Prize in Chemistry for developing multiscale models for complex chemical systems, the same techniques that were used in conducting the BGSU study, “Comparison of the isomerization mechanisms of human melanopsin and invertebrate and vertebrate rhodopsins.”
“The retina of vertebrate eyes, including those of humans, is the most powerful light detector that we know,” explains Massimo Olivucci, Ph.D., a research professor of Chemistry and director of LCPP in the Center for Photochemical Sciences at BGSU. “In the human eye, light coming through the lens is projected onto the retina, where it forms an image on a mosaic of photoreceptor cells that transmits information from the surrounding environment to the brain’s visual cortex. In extremely poor illumination conditions, such as those of a star-studded night or ocean depths, the retina is able to perceive intensities corresponding to only a few photons, which are indivisible units of light. Such extreme sensitivity is due to specialized photoreceptor cells containing a light-sensitive pigment called rhodopsin.”
For a long time, it was assumed that the human retina contained only photoreceptor cells specialized in dim-light and daylight vision, according to Olivucci. However, recent studies revealed the existence of a small number of intrinsically photosensitive nerve cells that regulate non-visual light responses. These cells contain a rhodopsin-like protein named melanopsin, which plays a role in the regulation of unconscious visual reflexes and in the synchronization of the body’s responses to the dawn/dusk cycle – known as circadian rhythms, or the “body clock” – through a process known as photoentrainment.
The fact that the density of melanopsin in the vertebrate retina is 10,000 times lower than that of rhodopsin, and that, compared with the visual photoreceptors, the melanopsin-containing cells capture a million-fold fewer photons – yet still drive reliable responses – suggests that melanopsin may be more sensitive than rhodopsin. Understanding the mechanism that makes this extreme light sensitivity possible appears to be a prerequisite to the development of new technologies.
Both rhodopsin and melanopsin are proteins containing a derivative of vitamin A, which serves as an “antenna” for photon detection. When a photon is detected, the protein is driven into an activated state through a photochemical transformation, which ultimately results in a signal being sent to the brain. Thus, at the molecular level, visual sensitivity is the result of a trade-off between two factors: light activation and thermal noise. It is currently thought that light-activation efficiency (i.e., the number of activation events relative to the total number of detected photons) may be related to the speed of the underlying chemical transformation. The thermal noise, by contrast, depends on the number of activation events triggered by ambient body heat in the absence of photon detection.
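That trade-off can be captured in a toy figure of merit – a minimal sketch under stated assumptions, not the paper’s actual model – where sensitivity rises with light-activation efficiency and falls with the thermal dark-event rate. Every number below is a made-up placeholder.

```python
# Cartoon of the sensitivity trade-off: useful signal scales with how
# often detected photons trigger activation, while false signal scales
# with thermally triggered (dark) events. Placeholder values only.

def sensitivity_score(activation_efficiency, dark_rate_per_s):
    """Toy figure of merit: activations per detected photon divided by
    spontaneous (thermal) activations per second."""
    return activation_efficiency / dark_rate_per_s

# A fast photochemical step (high efficiency) paired with a slow thermal
# step (low dark rate) maximizes the score -- the combination the BGSU
# models attribute to melanopsin.
print(sensitivity_score(0.65, 1e-11))  # slow thermal step: high score
print(sensitivity_score(0.65, 1e-9))   # faster thermal step: lower score
```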
“Understanding the mechanism that determines this seemingly amazing light sensitivity of melanopsin may open up new pathways in studying the evolution of light receptors in vertebrates and, in turn, the molecular basis of diseases such as seasonal affective disorder,” Olivucci said. “Moreover, it provides a model for developing sub-nanoscale sensors approaching single-photon sensitivity.”
For this reason, the LCPP group – working together with Francesca Fanelli, Ph.D., of Italy’s Università di Modena e Reggio Emilia – has used the methodology developed by Warshel and his colleagues to construct computer models of human melanopsin, bovine rhodopsin and squid rhodopsin. The models were constructed by BGSU research assistant Samer Gozem, Ph.D., visiting graduate student Silvia Rinaldi, who has since completed her doctorate, and visiting research assistant Federico Melaccio, Ph.D. – both visiting from Italy’s Università di Siena. The models were used to study the activation of the pigments and show that melanopsin’s light activation is the fastest and its thermal activation the slowest of the three – the combination expected for maximum light sensitivity.
The computer models of human melanopsin and of bovine and squid rhodopsins provide further support for a theory reported by the LCPP group in the September 2012 issue of the journal Science, which explained the correlation between thermal noise and perceived color – a concept first proposed by the British neuroscientist Horace Barlow in 1957. Barlow suggested the existence of a link between the color of light perceived by a sensor and its thermal noise, and established that the minimum possible thermal noise is achieved when the absorbed light has a wavelength around 470 nanometers, which corresponds to blue light.
“This wavelength and corresponding bluish color matches the wavelength that has been observed and simulated in the LCPP lab,” said Olivucci. “In fact, our calculations also indicate that a shift from blue to even shorter wavelengths (i.e. indigo and violet) will lead to an inversion of the trend and an increase of thermal noise towards the higher levels seen for a red color. Therefore, melanopsin may have been selected by biological evolution to stand exactly at the border between two opposite trends to maximize light sensitivity.”
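Barlow’s idea can be illustrated numerically. In the naive sketch below – an assumption-laden toy, not the LCPP calculation – the thermal-activation barrier is taken to be the full energy of a photon at the pigment’s absorption peak, E = hc/λ, and the dark-event rate follows a Boltzmann factor at body temperature. This single-barrier picture captures only the red side of the curve; as the quote above notes, the full calculations find the trend inverting below roughly 470 nm.

```python
# Naive Barlow-style estimate (illustrative assumptions, not the LCPP
# model): barrier = peak-absorption photon energy, dark rate ~ exp(-E/kT).
import math

h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K
T = 310.0       # body temperature, K

def relative_dark_rate(wavelength_nm):
    E = h * c / (wavelength_nm * 1e-9)  # photon energy at absorption peak
    return math.exp(-E / (k * T))

# Compare a red-shifted (~560 nm) pigment with a blue (~470 nm) one:
ratio = relative_dark_rate(560) / relative_dark_rate(470)
print(f"~{ratio:.0e}x more thermal noise for the red-shifted pigment")
```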

Filed under circadian rhythms retina photoreceptors vision AI technology neuroscience science
