Neuroscience

Articles and news from the latest research reports.

Targeting an aspect of Down syndrome
University of Michigan researchers have determined how a gene that is known to be defective in Down syndrome is regulated and how its dysregulation may lead to neurological defects, providing insights into potential therapeutic approaches to an aspect of the syndrome.
Normally, nerve cells called neurons undergo an intense period of extending and branching their protrusions around the time of birth. During this period, the neurons produce high levels of the protein encoded by the gene Down syndrome cell-adhesion molecule, or Dscam. After this phase, both the growth and the protein levels taper off.
However, in the brains of patients with Down syndrome, epilepsy and several other neurological disorders, the amount of Dscam remains high. The impact of the elevated Dscam amount on how neurons develop is unknown.
Bing Ye, a faculty member at U-M’s Life Sciences Institute, found that in the fruit fly Drosophila, the amount of Dscam proteins in a neuron determines the size to which a neuron extends its protrusions before it forms connections with other nerve cells. An overproduction of Dscam proteins leads to abnormally large neuronal protrusions.
Ye also identified two molecular pathways that converge to regulate the abundance of Dscam. One, dual leucine zipper kinase (DLK), which is involved in nerve regeneration, promotes the synthesis of Dscam proteins. Another, fragile X mental retardation protein (FMRP), which causes fragile X syndrome when defective, represses Dscam protein synthesis. Because humans share these genes with Drosophila, the DLK-FMRP-Dscam relationship presents a possible target for therapeutic intervention, Ye said.
Many genes are involved in neurological disorders like Down syndrome, and how molecular defects cause the disease is complex.
"But because of the important roles of Dscam in the development of neurons, its related defect is very likely to be an aspect of Down syndrome and it may be an aspect of the syndrome that can be treated," said Ye, an assistant professor in the Department of Cell and Developmental Biology at the U-M Medical School.
Ye’s next step is to test the effects of overexpression of Dscam in mice to see how it changes the development of the nervous system and the behavior of the animal.
Down syndrome occurs in about one in 830 newborns; an estimated 250,000 people in the U.S. have the condition, according to the National Library of Medicine’s Genetics Home Reference.

Filed under fruit flies nerve cells nerve regeneration down syndrome dscam proteins fragile X syndrome neuroscience science

These scientists are ‘itching’ to help you stop scratching



Itch and scratch, itch and scratch. It’s not the most serious physical problem in our lives, but it is common and very annoying. Now, researchers at the Hebrew University of Jerusalem and in Boston have come up with new findings that can stop the itching by silencing the neurons that transmit itch-generating stimuli.
The research was a collaborative effort by a group led by Dr. Alex Binshtok at the Hebrew University’s Department of Medical Neurobiology at the Institute of Medical Research Israel-Canada, and the Edmond & Lily Safra Center for Brain Sciences; along with Dr. Clifford Woolf’s group in the Boston Children’s Hospital and Harvard Medical School.
The study demonstrated the presence of functionally distinct sets of neurons that detect and transmit itch-generating stimuli. The researchers further demonstrated that they could selectively target and silence those itch-generating neurons while the neurons are active. These results provide a basis for novel therapeutic approaches to the selective treatment of previously untreated itch not induced by histamine (non-histaminergic itch), such as dry skin itch and allergic dermatitis.
(Histaminergic itch is brought on when histamine triggers an inflammatory immune response to foreign agents, such as occurs, for example, in hay fever.)
The findings of the Israeli-US researchers were published in the journal Nature Neuroscience. In addition to the senior researchers, major student contributors to the project were Sagi Gudes and Felix Blasl from the Hebrew University, and David Roberson and Jared Sprague from Harvard Medical School.
Itch is a complex, unpleasant, cutaneous sensation that in some respects resembles pain, yet differs in its intrinsic sensory quality and the urge to scratch. Although some types of itch, like urticaria (hives), can be effectively treated with anti-histaminergic agents, the itch accompanying most chronic itch-inducing diseases, including atopic dermatitis (eczema), allergic itch and dry skin itch, is not predominantly induced by histamine. An understanding of the molecular and cellular mechanisms underlying the sensation of itch is therefore essential for developing effective and selective treatments for itch, which in some cases can become a devastating condition, the researchers say.
The researchers’ findings suggest that primary itch-generating neurons that carry messages toward the central nervous system encode functionally distinct histaminergic and non-histaminergic itch pathways that could be selectively blocked. This is the first time this has been demonstrated, and it means that it is possible to block itch signals in the neurons that mediate non-histaminergic itch.
These findings have great clinical importance, since they could be translated into novel, selective and effective therapies for the previously largely untreated itch of dry skin and allergic dermatitis.

Filed under itch sensory neurons histamine neuroscience science

Study Expands Concerns About Anesthesia’s Impact on the Brain
As pediatric specialists become increasingly aware that surgical anesthesia may have lasting effects on the developing brains of young children, new research suggests the threat may also apply to adult brains.
Researchers from Cincinnati Children’s Hospital Medical Center report June 5 in the Annals of Neurology that testing in laboratory mice shows anesthesia’s neurotoxic effects depend on the age of brain neurons – not the age of the animal undergoing anesthesia, as once thought.
Although more research is needed to confirm the study’s relevance to humans, the study suggests possible health implications for millions of children and adults who undergo surgical anesthesia annually, according to Andreas Loepke, MD, PhD, a physician and researcher in the Department of Anesthesiology.
“We demonstrate that anesthesia-induced cell death in neurons is not limited to the immature brain, as previously believed,” said Loepke. “Instead, vulnerability seems to target neurons of a certain age and maturational stage. This finding brings us a step closer to understanding the phenomenon’s underlying mechanism.”
New neurons are generated abundantly in most regions of the very young brain, explaining why previous research has focused on that developmental stage. In a mature brain, neuron formation slows considerably, but it extends into later life in the dentate gyrus and the olfactory bulb.
The dentate gyrus, which helps control learning and memory, is the region Loepke and his research colleagues paid particular attention to in their study. Also collaborating were researchers from the University of Cincinnati College of Medicine and the Children’s Hospital of Fudan University, Shanghai, China.
Researchers exposed newborn, juvenile and young adult mice to a widely used anesthetic called isoflurane in doses approximating those used in surgical practice. Newborn mice exhibited widespread neuronal loss in forebrain structures – confirming previous research – with no significant impact on the dentate gyrus. However, the effect in juvenile mice was reversed, with minimal neuronal impact in the forebrain regions and significant cell death in the dentate gyrus.
The team then performed extensive studies to discover that age and maturational stage of the affected neurons were the defining characteristics for vulnerability to anesthesia-induced neuronal cell death. The researchers observed similar results in young adult mice as well.
Research over the past 10 years has made it increasingly clear that commonly used anesthetics increase brain cell death in developing animals, raising concerns among the Food and Drug Administration, clinicians, neuroscientists and the public. In addition, several follow-up studies of children and adults who have undergone surgical anesthesia show a link to learning and memory impairment.
Cautioning against immediate application of the current study’s findings to children and adults undergoing anesthesia, Loepke said his research team is trying to learn enough about anesthesia’s impact on brain chemistry to develop protective therapeutic strategies, in case they are needed. To this end, their next step is to identify specific molecular processes triggered by anesthesia that lead to brain cell death.
“Surgery is often vital to save lives or maintain quality of life and usually cannot be performed without general anesthesia,” Loepke said. “Physicians should carefully discuss with patients, parents and caretakers the risks and benefits of procedures requiring anesthetics, as well as the known risks of not treating certain conditions.”
Loepke is also collaborating with researchers from the Pediatric Neuroimaging Research Consortium at Cincinnati Children’s Hospital Medical Center to examine anesthesia’s impact on children’s brains using non-invasive magnetic resonance imaging (MRI) technology.

Filed under anesthesia neurons cell death apoptosis dentate gyrus neurology neuroscience science

Neurochemical Traffic Signals May Open New Avenues for the Treatment of Schizophrenia
Researchers at Boston University School of Medicine (BUSM) have uncovered important clues about a biochemical pathway in the brain that may one day expand treatment options for schizophrenia. The study, published online in the journal Molecular Pharmacology, was led by faculty within the department of pharmacology and experimental therapeutics at BUSM.
Patients with schizophrenia suffer from a life-long condition that can produce delusions, disordered thinking, and breaks with reality. A number of treatments are available for schizophrenia, but many patients do not respond to these therapies or experience side effects that limit their use.
This research focused on key components of the brain known as NMDA receptors. These receptors are located on nerve cells in the brain and serve as biochemical gates that allow calcium ions (electrically charged particles) to enter the cell when a neurotransmitter, such as glutamate, binds to the receptor. Proper activation of these receptors is critical for sensory perception, memory and learning, including the transfer of short-term memory into long-term storage. Patients with schizophrenia have poorly functioning or “hypoactive” NMDA receptors, suggesting the possibility of treatment with drugs that positively affect these receptors. Currently, the only way to enhance NMDA receptor function is through agents called agonists that bind directly to the receptor on the outer surface of the cell, opening the gates to calcium ions outside the cell.
In this study, the researchers discovered a novel “non-canonical” pathway in which NMDA receptors residing inside the cell are stimulated by a neuroactive steroid to migrate to the cell surface (a process known as trafficking), thus increasing the number of receptors available for glutamate activation. The researchers treated neural cells from the cerebral cortex with the novel steroid pregnenolone sulfate (PregS) and found that the number of working NMDA receptors on the cell surface increased by 60 to 100 percent within 10 minutes. The exact mechanism by which this occurs is not completely clear, but it appears that PregS increases calcium ions within the cell, which in turn produces a green light signal for more frequent trafficking of NMDA receptors to the cell surface.
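As a back-of-the-envelope illustration of how a faster insertion rate alone can raise surface receptor counts into the reported 60 to 100 percent range, consider a toy two-pool trafficking model. The rate constants here are invented for illustration; this is not the paper’s model:

```python
# Toy two-pool model: receptors shuttle between an internal pool and
# the cell surface. At steady state, d(surface)/dt = 0 gives
#   k_insert * internal = k_remove * surface,
# and with internal + surface = 1 (total normalized to 1), the
# surface fraction is k_insert / (k_insert + k_remove).
def surface_fraction(k_insert, k_remove):
    return k_insert / (k_insert + k_remove)

# Illustrative baseline rates (assumptions, not measured values).
baseline = surface_fraction(k_insert=0.05, k_remove=0.10)

# Suppose the calcium signal triples the insertion rate.
stimulated = surface_fraction(k_insert=0.15, k_remove=0.10)

increase = (stimulated - baseline) / baseline * 100
print(f"surface receptors up {increase:.0f}%")  # surface receptors up 80%
```

Tripling the insertion rate in this toy model yields an 80 percent increase in surface receptors, within the 60 to 100 percent band the researchers reported for PregS.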
Although still in the early stages, further research in this area may be instrumental in the development of treatments not only for schizophrenia, but also for other conditions associated with malfunctioning NMDA receptors, such as age-related decreases in memory and learning ability.

Filed under schizophrenia NMDA receptors nerve cells calcium ions glutamate trafficking neuroscience science

Pioneering Study Demonstrates Benefit of Imaging Technique in Identifying Mental Illness
MRI may be an effective way to diagnose mental illnesses such as bipolar disorder, according to experts from the Icahn School of Medicine at Mount Sinai. In a landmark study using advanced techniques, the researchers were able to correctly distinguish bipolar patients from healthy individuals based on their brain scans alone. The data are published in the journal Psychological Medicine.
Currently, most mental illnesses are diagnosed based on symptoms alone, creating an urgent need for new approaches to diagnosis. In bipolar disorder, there may be a significant delay in diagnosis due to the complex clinical presentation of the illness. In this study, Sophia Frangou, MD, Professor of Psychiatry and Chief of the Psychosis Research Program at the Icahn School of Medicine at Mount Sinai, teamed up with Andy Simmons, MD, of King’s College London and Janaina Mourao-Miranda, MD, of University College London, to explore whether brain imaging could help correctly identify patients with bipolar disorder.
“Bipolar disorder affects patients’ ability to regulate their emotions successfully, which puts them at a great disadvantage in their lives,” said Dr. Frangou. “The situation is made worse by unacceptably long delays, sometimes of up to 10 years, in making the correct diagnosis. Bipolar disorder may easily be misdiagnosed as other disorders, such as depression or schizophrenia. This is why bipolar disorder ranks among the top ten disorders causing significant disability worldwide.”
Dr. Frangou and her team used MRI to scan the brains of people with bipolar disorder and of healthy individuals. Using advanced computational models, they correctly separated people with bipolar disorder from healthy individuals with 73 percent accuracy based on their brain imaging scans alone. They replicated the finding in a separate group of patients and healthy individuals, achieving 72 percent accuracy.
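The general shape of this kind of analysis can be sketched with synthetic data: imaging-derived feature vectors for two groups, a simple classifier trained on part of the data, and accuracy measured on held-out scans. The data, features, and nearest-centroid method below are illustrative assumptions, not the authors’ actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for imaging features: 40 "controls" and 40
# "patients", 100 features each, with a modest group difference.
controls = rng.normal(0.0, 1.0, size=(40, 100))
patients = rng.normal(0.5, 1.0, size=(40, 100))

def nearest_centroid_accuracy(a, b, n_train=30):
    """Train on the first n_train scans of each group, test on the rest."""
    centroid_a = a[:n_train].mean(axis=0)
    centroid_b = b[:n_train].mean(axis=0)
    correct, total = 0, 0
    for group, own, other in ((a, centroid_a, centroid_b),
                              (b, centroid_b, centroid_a)):
        for scan in group[n_train:]:
            # Assign each held-out scan to the nearer group centroid.
            if np.linalg.norm(scan - own) < np.linalg.norm(scan - other):
                correct += 1
            total += 1
    return correct / total

acc = nearest_centroid_accuracy(controls, patients)
print(f"held-out accuracy: {acc:.2f}")
```

The published work would have used far richer features and more sophisticated models; the point of the sketch is only that accuracy must be reported on scans the model has never seen, which is why the replication in a separate group matters.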
Dr. Simmons added, “The level of accuracy we achieved is comparable to that of many other tests used in medicine. Additionally, brain scanning is very acceptable to patients as most people consider it a routine diagnostic test.”
“This approach does not undermine the importance of rigorous clinical assessment and the importance of building relationships with patients but provides biological justification for the type of diagnosis made,” said Dr. Frangou. “However, diagnostic imaging for psychiatry is still under investigation and not ready for widespread use. Nonetheless, our results together with those from other labs are a harbinger of a major shift in the way we approach diagnosis in psychiatry.”

Filed under bipolar depression bipolar disorder neuroimaging MRI mental health psychology neuroscience science

New technique for deep brain stimulation surgery proves accurate and safe
Surgery has been used for Parkinson’s disease and familial tremors, and also shows promise for other disorders
The surgeon who more than two decades ago pioneered deep brain stimulation surgery in the United States to treat people with Parkinson’s disease and other movement disorders has now developed a new way to perform the surgery — which allows for more accurate placement of the brain electrodes and likely is safer for patients.
The success and safety of the new surgical technique could have broad implications for deep brain stimulation, or DBS, surgery into the future, as it may increasingly be used to help with a wide range of medical issues beyond Parkinson’s disease and familial tremors.
The new surgery also offers another distinct advantage: patients are asleep during the surgery, rather than being awake under local anesthesia to help surgeons determine placement of the electrodes as happens with the traditional DBS surgery.
A study detailing the new surgical technique is being published in the June 2013 edition of the Journal of Neurosurgery, and has been published online at the journal’s website.
"I think this will be how DBS surgery will be done in most cases going forward," said Kim Burchiel, M.D., F.A.C.S., chair of neurological surgery at Oregon Health & Science University and the lead author of the Journal of Neurosurgery article. “This surgery allows for extremely accurate placement of the electrodes and it’s safer. Plus patients don’t need to be awake during this surgery — which will mean many more patients who can be helped by this surgery will now be willing to consider it.”
DBS surgery was first developed in France in 1987. Burchiel was the first surgeon in North America to perform the surgery, as part of a Food and Drug Administration-approved clinical trial in 1991.
The FDA approved the surgery for “essential tremor” in 1997 and for tremors associated with Parkinson’s disease in 2002. The surgery has been performed tens of thousands of times over the last decade or so in the United States, most often for familial tremor and Parkinson’s disease. Burchiel and his team at OHSU have performed the surgery more than 750 times.
The surgery involves implanting very thin wire electrodes in the brain, connected to a pacemaker-like device implanted in the chest. The system then stimulates the brain, often significantly reducing tremors.
For most of the last two decades, DBS patients were required to be awake during surgery so that surgeons could confirm, by monitoring symptoms and gathering feedback from the conscious patient, that the electrodes were placed in the right spots in the brain.
But the traditional form of the surgery had drawbacks. Many patients who might have benefited weren’t willing to undergo the sometimes four- to six-hour surgery while awake. There is also a small risk of hemorrhage in the brain as the surgeon places or moves the electrodes into position.
The new technique uses recent advances in brain imaging to place the electrodes more safely and more accurately than in traditional DBS surgery. The surgical team uses CT scanning during the surgery itself, along with an MRI of the patient’s brain taken beforehand, to place the electrodes precisely while reducing the risk of hemorrhage or other complications from inserting them.
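The article doesn’t say how placement accuracy is quantified; a common measure in the DBS literature is the Euclidean distance between the target planned on the preoperative MRI and the electrode tip located on the intraoperative CT. A minimal sketch with hypothetical stereotactic-frame coordinates:

```python
import math

def placement_error_mm(planned, measured):
    """Euclidean distance (mm) between a planned stereotactic target
    and the electrode tip position measured on intraoperative CT."""
    return math.dist(planned, measured)

# Hypothetical coordinates (mm) in a common stereotactic frame:
planned_target = (12.0, -3.0, -4.0)   # planned on preoperative MRI
measured_tip   = (12.5, -2.6, -4.3)   # located on intraoperative CT

error = placement_error_mm(planned_target, measured_tip)
print(f"placement error: {error:.2f} mm")  # → placement error: 0.71 mm
```

Sub-millimeter errors like this one would be consistent with the "best ever reported" accuracy claim, though the study's actual error metric and values are not given here.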
The Journal of Neurosurgery article reported on 60 patients who had the surgery at OHSU over an 18-month period beginning in early 2011.
"What our results say is that it’s safe, that we had no hemorrhaging or complications at all — and the accuracy of the electrode placement is the best ever reported," Burchiel said.
Burchiel and his team have done another 140 or so surgeries with the new procedure since enrollment in the study ended. OHSU pioneered the new DBS procedure, and other surgical teams across the U.S. are learning the technique at OHSU and bringing it back to their own centers.
The positive results with the new DBS technique could have ramifications as medical researchers nationwide continue to explore possible new uses for DBS surgery. DBS surgery has shown promising results in clinical trials with some Alzheimer’s patients, with some forms of depression and even with obesity.
If the early promising results for these conditions are confirmed, the number of people who might be candidates for DBS surgery could expand greatly, Burchiel said.
The length of the new surgery for the 60 patients in the study was slightly longer than traditional DBS surgery. But as Burchiel and his team have refined the technique, the new DBS surgeries have become much shorter, often taking half the time of the traditional approach. Given that, and that the electrodes are placed more accurately and the surgery is cheaper to perform, the new DBS surgery likely will be the technique most surgeons use in coming years, Burchiel said.
DBS surgery often significantly reduces tremors in patients with familial tremor, and tremors and other symptoms in patients with Parkinson’s disease. A parallel study is ongoing at OHSU to assess how patients’ symptoms have improved since their DBS surgery using this new method.
(Image: Dr Frank Gaillard)

Filed under deep brain stimulation parkinson's disease neuroimaging medicine neuroscience science

114 notes

Helicopter takes to the skies with the power of thought

A remote controlled helicopter has been flown through a series of hoops around a college gymnasium in Minnesota.

It sounds like your everyday student project; however, there is one caveat: the helicopter was controlled using just the power of thought.

The experiments were performed by researchers hoping to develop future robots that can help restore autonomy to people who are paralysed or have neurodegenerative disorders.

Their study has been published today, 4 June 2013, in IOP Publishing’s Journal of Neural Engineering and is accompanied by a video of the helicopter control in action. 

There were five subjects (three female, two male) who took part in the study and each one was able to successfully control the four-blade helicopter, also known as a quadcopter, quickly and accurately for a sustained amount of time.

Lead author of the study Professor Bin He, from the University of Minnesota College of Science and Engineering, said: “Our study shows that for the first time, humans are able to control the flight of flying robots using just their thoughts, sensed from noninvasive brain waves.”

The noninvasive technique used was electroencephalography (EEG), which recorded the electrical activity of the subjects’ brain through a cap fitted with 64 electrodes.

Facing away from the quadcopter, the subjects were asked to imagine using their right hand, left hand, or both hands together; these commands instructed the quadcopter to turn right, turn left, or lift and fall, respectively. The quadcopter flew with a pre-set forward velocity and was steered through the sky by the subjects’ thoughts.

The subjects were positioned in front of a screen which relayed images of the quadcopter’s flight through an on-board camera, allowing them to see which direction it was travelling in. Brain signals were recorded by the cap and sent to the quadcopter over WiFi.
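The control scheme above amounts to decoding a motor-imagery class from each EEG window and mapping it to a flight command. The sketch below shows only that final mapping step; the label names and command set are illustrative, and the study’s actual decoder derives continuous control signals from EEG sensorimotor rhythms rather than discrete labels:

```python
# Map a decoded motor-imagery label to a quadcopter command.
# Labels and commands are hypothetical stand-ins for the study's decoder output.
COMMANDS = {
    "imagine_right_hand": "turn_right",
    "imagine_left_hand":  "turn_left",
    "imagine_both_hands": "lift",
}

def decode_to_command(label: str) -> str:
    """Return the flight command for a decoded label;
    hold course if the label is unrecognized."""
    return COMMANDS.get(label, "hold_course")

# Example: a stream of decoded labels from consecutive EEG windows
stream = ["imagine_left_hand", "imagine_both_hands", "unknown"]
print([decode_to_command(s) for s in stream])
# → ['turn_left', 'lift', 'hold_course']
```

Defaulting to "hold_course" on an unrecognized label mirrors a common BCI design choice: when the decoder is uncertain, do nothing rather than issue a spurious command.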

“In previous work we showed that humans could control a virtual helicopter using just their thoughts. I initially intended to use a small helicopter for this real-life study; however, the quadcopter is more stable, smooth and has fewer safety concerns,” continued Professor He.

After several different training sessions, the subjects were required to fly the quadcopter through two foam rings suspended from the gymnasium ceiling and were scored on three aspects: the number of times they sent the quadcopter through the rings; the number of times the quadcopter collided with the rings; and the number of times they went outside the experiment boundary.
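The three scoring aspects above can be tallied per session. The composite "hit rate" below is an illustrative summary, not the statistic the study itself reports:

```python
def score_flight(ring_passes, ring_collisions, boundary_exits):
    """Summarize one flight session by the three measures described above.
    The hit-rate summary is illustrative, not the study's metric."""
    attempts = ring_passes + ring_collisions
    hit_rate = ring_passes / attempts if attempts else 0.0
    return {"hit_rate": hit_rate,
            "ring_collisions": ring_collisions,
            "boundary_exits": boundary_exits}

print(score_flight(ring_passes=8, ring_collisions=2, boundary_exits=1))
# → {'hit_rate': 0.8, 'ring_collisions': 2, 'boundary_exits': 1}
```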

A number of statistical tests were used to calculate how each subject performed.

A group of subjects also directed the quadcopter with a keyboard in a control experiment, allowing for a comparison between a standardised method and brain control.

This process is just one example of a brain–computer interface where a direct pathway between the brain and an external device is created to help assist, augment or repair human cognitive or sensory-motor functions; researchers are currently looking at ways to restore hearing, sight and movement using this approach.

“Our next goal is to control robotic arms using noninvasive brain wave signals, with the eventual goal of developing brain–computer interfaces that aid patients with disabilities or neurodegenerative disorders,” continued Professor He.

Filed under neurodegenerative diseases quadcopter brainwaves EEG BCI robotics neuroscience science

473 notes

A 20-minute bout of yoga stimulates brain function immediately after
Researchers report that a single, 20-minute session of Hatha yoga significantly improved participants’ speed and accuracy on tests of working memory and inhibitory control, two measures of brain function associated with the ability to maintain focus and take in, retain and use new information. Participants performed significantly better immediately after the yoga practice than after moderate to vigorous aerobic exercise for the same amount of time.
The 30 study subjects were young female undergraduate students. The new findings appear in the Journal of Physical Activity and Health.
“Yoga is an ancient Indian science and way of life that includes not only physical movements and postures but also regulated breathing and meditation,” said Neha Gothe, who led the study while a graduate student at the University of Illinois at Urbana-Champaign. Gothe now is a professor of kinesiology, health and sport studies at Wayne State University in Detroit. “The practice involves an active attentional or mindfulness component but its potential benefits have not been thoroughly explored.”
“Yoga is becoming an increasingly popular form of exercise in the U.S. and it is imperative to systematically examine its health benefits, especially the mental health benefits that this unique mind-body form of activity may offer,” said Illinois kinesiology and community health professor Edward McAuley, who directs the Exercise Psychology Laboratory where the study was conducted.
The yoga intervention involved a 20-minute progression of seated, standing and supine yoga postures that included isometric contraction and relaxation of different muscle groups and regulated breathing. The session concluded with a meditative posture and deep breathing.
Participants also completed an aerobic exercise session where they walked or jogged on a treadmill for 20 minutes. Each subject worked out at a suitable speed and incline of the treadmill, with the goal of maintaining 60 to 70 percent of her maximum heart rate throughout the exercise session.
“This range was chosen to replicate previous findings that have shown improved cognitive performance in response to this intensity,” the researchers reported.
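The article gives the target intensity (60 to 70 percent of maximum heart rate) but not how maximum heart rate was estimated. A minimal sketch, assuming the common 220 − age estimate for maximum heart rate, which the study does not specify:

```python
def target_hr_zone(age: int, low: float = 0.60, high: float = 0.70):
    """Return the (low, high) heart-rate zone in beats per minute.
    Assumes the common 220 - age estimate of maximum heart rate,
    which the article does not confirm."""
    hr_max = 220 - age
    return round(hr_max * low), round(hr_max * high)

print(target_hr_zone(20))  # for a 20-year-old: (120, 140)
```

For the young undergraduates in this study, the treadmill sessions would thus have targeted roughly 120 to 140 beats per minute under this assumption.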
Gothe and her colleagues were surprised to see that participants showed more improvement in their reaction times and accuracy on cognitive tasks after yoga practice than after the aerobic exercise session, which showed no significant improvements on the working memory and inhibitory control scores.
“It appears that following yoga practice, the participants were better able to focus their mental resources, process information quickly, more accurately and also learn, hold and update pieces of information more effectively than after performing an aerobic exercise bout,” Gothe said. “The breathing and meditative exercises aim at calming the mind and body and keeping distracting thoughts away while you focus on your body, posture or breath. Maybe these processes translate beyond yoga practice when you try to perform mental tasks or day-to-day activities.”
Many factors could explain the results, Gothe said. “Enhanced self-awareness that comes with meditational exercises is just one of the possible mechanisms. Besides, meditation and breathing exercises are known to reduce anxiety and stress, which in turn can improve scores on some cognitive tests,” she said.
“We only examined the effects of a 20-minute bout of yoga and aerobic exercise in this study among female undergraduates,” McAuley said. “However, this study is extremely timely and the results will enable yoga researchers to power and design their interventions in the future. We see similar promising findings among older adults as well. Yoga research is in its nascent stages and with its increasing popularity across the globe, researchers need to adopt rigorous systematic approaches to examine not only its cognitive but also physical health benefits across the lifespan.”

Filed under yoga hatha yoga working memory cognition cognitive performance meditation psychology neuroscience science

158 notes

Scientists map the wiring of the biological clock
The World Health Organization lists shift work as a potential carcinogen, says Erik Herzog, PhD, Professor of Biology in Arts & Sciences at Washington University in St. Louis. And that’s just one example among many of the troubles we cause ourselves when we override the biological clocks in our brains and pay attention instead to the mechanical clocks on our wrists.
In the June 5 issue of Neuron, Herzog and his colleagues report the discovery of a crucial part of the biological clock: the wiring that sets its accuracy to within a few minutes out of the 1,440 minutes per day. This wiring uses the neurotransmitter GABA to connect the individual cells of the biological clock in a fast network that changes strength with time of day.
Daily rhythms of sleep and metabolism are driven by a biological clock in the suprachiasmatic nucleus (SCN), a structure in the brain made up of 20,000 neurons, all of which can keep daily (circadian) time individually.
If the SCN is to be a robust, but sensitive, timing system, the neurons must synchronize precisely with one another and adjust their rhythms to those of the environment.
Herzog’s lab has discovered a push-pull system in the SCN that does both. In 2005 they reported that the neurons in the clock network communicate by means of a neuropeptide (VIP) that pushes them to synchronize with one another. And, as they now report in Neuron, these neurons also communicate with GABA that pulls on them weakly, so they are not too tightly coupled.
Together these two networks (VIP and GABA) ensure the clock runs as a coordinated, precise timepiece, but one that can still adjust its timing to synchronize with the environment.
“We think the neurotransmitter network is there to introduce enough jitter into the system to allow the neurons to resynchronize when environmental cues change, as they do with the seasons,” Herzog says. But, he says, since this biological ‘reset button’ evolved long before mechanical clocks, artificial lights, and high-speed travel, it doesn’t introduce enough jitter to allow us to adjust quickly to the extreme time shifts of modern life, such as flying “backward” (east) through several time zones.
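The press release doesn’t describe the underlying model, but coupled-oscillator (Kuramoto-type) models are a standard way to reason about SCN synchronization. In the toy sketch below, the mean-field coupling strength K stands in for the synchronizing VIP-like signal and the phase-noise term for GABA-like jitter; all parameter values are illustrative:

```python
import math
import random

def simulate(n=50, steps=2000, dt=0.01, K=1.0, noise=0.0, seed=1):
    """Kuramoto-style sketch of SCN coupling. One time unit = one day,
    so each oscillator's intrinsic period is ~24 h with a small spread.
    Returns the final order parameter r (1 = perfect synchrony)."""
    rng = random.Random(seed)
    # Intrinsic frequencies: ~2*pi per day, with a ~2% cell-to-cell spread
    omega = [2 * math.pi * (1 + 0.02 * rng.gauss(0, 1)) for _ in range(n)]
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    for _ in range(steps):
        # Mean-field phase (psi) and coherence (r) of the population
        mx = sum(math.cos(t) for t in theta) / n
        my = sum(math.sin(t) for t in theta) / n
        psi, r = math.atan2(my, mx), math.hypot(mx, my)
        # Each cell is pulled toward the mean phase (VIP-like coupling, K)
        # and kicked by noise (GABA-like jitter)
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 + noise * rng.gauss(0, math.sqrt(dt))
                 for t, w in zip(theta, omega)]
    mx = sum(math.cos(t) for t in theta) / n
    my = sum(math.sin(t) for t in theta) / n
    return math.hypot(mx, my)

print(f"coupled (K=2):   r = {simulate(K=2.0):.2f}")  # approaches 1
print(f"uncoupled (K=0): r = {simulate(K=0.0):.2f}")  # stays low
```

With K well above the critical coupling, the order parameter approaches 1 (tight synchrony); with K = 0, the slightly different intrinsic periods keep the population dispersed. Raising the noise term loosens the synchrony, which is the "jitter" role the article ascribes to the GABA network.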
Understanding the push-pull system in the SCN has enormous implications for public health, bearing, as it does, on daylight saving times, shift work, school starting times, medical intern schedules, truck driver hours, and many other issues where the clock in the brain is pitted against the clock in the hand.
Synchronizing the cellular clocks
The “clock” inside each SCN neuron depends on the cyclic expression of a family of genes such as the Period (PER) genes. The expression of these genes and the neuron’s firing rate typically peak at mid-day and fall at night. The gene activity is like the cogs in a clock, and the electrical activity like the hands on the clock.

Each neuron in the SCN keeps time, but because they’re different cells, they have slightly different rhythms. Some run a little bit fast and others a bit slow. If the SCN as a whole is to function as a clock, its neurons need to synchronize with one another.
The goal of the recent work in the Herzog lab has been to figure out how the clock cells are connected to each other. “It wasn’t clear, for example, if each neuron communicated with just a few of its neighbors or with all of them,” Herzog says.

Mark Freeman, a graduate student in the lab, developed a method for recording the firing rate of about 100 neurons simultaneously on a multi-electrode array. “You float the SCN neurons down gently,” Herzog says, “and the neurons will attach to the electrodes, creating a clock in a dish that will tick away for weeks or months.”

Using these electrode arrays, his lab demonstrated that the neurons in the SCN are synchronized by the exchange of the neuropeptide VIP (vasoactive intestinal polypeptide), which alters the expression of PER to speed up or slow down neurons until they are all in synch.

These synchronized networks are very precise, says Herzog. If you let them free-run in constant darkness they will lose or gain only a few minutes out of the 1,440 minutes in a day. So they’re accurate to within 1 or 2 percent.

But they’re ever so slightly off the 24-hour cycle tied to one turn of the planet on its axis. Over time they would drift far enough off that cycle to be of little use to us, unless they also had some means of synchronizing to local time.
Resetting the cellular clocks
In the article published in Neuron, Herzog and his colleagues report on a second network in the biological clock.

In this network the connections are made by the neurotransmitter GABA (γ-amino-butyric acid). “We proved we had found a GABAergic network by applying drugs that block GABA receptors on the cells,” Herzog says. “All of the connections we had mapped between neurons dropped out.”

Remarkably, when the network drops out, the clock becomes more precise. So the GABAergic network destabilizes the clock; it jiggles it a little.

Herzog points out that the GABAergic network is sparse, weak and fast (much faster than the VIP network, which relies on the slower action of a neuropeptide), as you might expect a jitter-generator to be.

“We think the GABAergic network is there to let our clocks adjust to environmental cues, such as gradual, seasonal changes in sunrise and sunset,” says Herzog. 

It’s a bit like whacking an old television set that has lost vertical synch to get it to resynch with the broadcast signal.

But there isn’t enough jitter in the clock to allow it to make abrupt adjustments, such as the one-hour forward jump when Daylight Saving Time starts. That “spring forward” has been statistically shown to increase the likelihood of heart attacks and car accidents, Herzog says.

Some sleep aids, such as benzodiazepines, that activate the GABA receptors may make the circadian clock a little more jittery, helping people adjust to big time jumps, such as flying across time zones. “But we don’t yet know whether they can improve jetlag; if they do, we want to know if it is because they help you sleep on the long flight or because they help the biological clock adjust to the new time zone,” Herzog cautions.

In any case, it is clear that if people repeatedly force the clock to reset, they throw off more than sleep. The biological clock regulates metabolism and cell division as well as sleep/wake cycles. So shift work, for example, is associated both with metabolic disorders, such as diabetes, and with the unregulated cell division that characterizes cancer.
Fighting our biological clocks does a lot more than make us crabby coffee drinkers.

Scientists map the wiring of the biological clock

The World Health Organization lists shift work as a potential carcinogen, says Erik Herzog, PhD, Professor of Biology in Arts & Sciences at Washington University in St. Louis. And that’s just one example among many of the troubles we cause ourselves when we override the biological clocks in our brains and pay attention instead to the mechanical clocks on our wrists.

In the June 5 issue of Neuron, Herzog and his colleagues report the discovery of a crucial part of the biological clock: the wiring that sets its accuracy to within a few minutes out of the 1440 minutes per day. This wiring uses the neurotransmitter, GABA, to connect the individual cells of the biological clock in a fast network that changes strength with time of day.

Daily rhythms of sleep and metabolism are driven by a biological clock in the suprachiasmatic nucleus (SCN), a structure in the brain made up of 20,000 neurons, all of which can keep daily (circadian) time individually.

If the SCN is to be a robust, but sensitive, timing system, the neurons must synchronize precisely with one another and adjust their rhythms to those of the environment.

Herzog’s lab has discovered a push-pull system in the SCN that does both. In 2005 they reported that the neurons in the clock network communicate by means of a neuropeptide (VIP) that pushes them to synchronize with one another. And, as they now report in Neuron, these neurons also communicate with GABA that pulls on them weakly, so they are not too tightly coupled.

Together these two networks (VIP and GABA) ensure the clock runs as coordinated, precise timepiece but one that can still adjust its timing to synchronize with the environment.

“We think the neurotransmitter network is there to introduce enough jitter into the system to allow the neurons to resynchronize when environmental cues change, as they do with the seasons,” Herzog says. But, he says, since this biological ‘reset button’ evolved long before mechanical clocks, artificial lights, and high-speed travel, it doesn’t introduce enough jitter to allow us to adjust quickly to the extreme time shifts of modern life, such as flying “backward” (east) through several time zones.

Understanding the push-pull system in the SCN has enormous implications for public health, bearing, as it does, on daylight saving times, shift work, school starting times, medical intern schedules, truck driver hours, and many other issues where the clock in the brain is pitted against the clock in the hand.

Synchronizing the cellular clocks
The “clock” inside each SCN neuron depends on the cyclic expression of a family of genes such as the Period (PER) genes. The expression of these genes and the neuron’s firing rate typically peak at mid-day and fall at night. The gene activity is like the cogs in a clock, and the electrical activity like the hands on the clock.

Each neuron in the SCN keeps time, but because they’re different cells, they have slightly different rhythms. Some run a little bit fast and others a bit slow. If the SCN as a whole is to function as a clock, its neurons need to synchronize with one another.

The goal of the recent work in the Herzog lab has been to figure out how the clock cells are connected to each other. “It wasn’t clear, for example, if each neuron communicated with just a few of its neighbors or with all of them,” Herzog says.

Mark Freeman, a graduate student in the lab, developed a method for recording the firing rate of about 100 neurons simultanously on a multi-electrode array. “You float the SCN neurons down gently,” Herzog says, “and the neurons will attach to the electrodes, creating a clock in a dish that will tick away for weeks or months.”

Using these electrode arrays, his lab demonstrated that the neurons in the SCN are synchronized by the exchange of the neuropeptide VIP (vasoactive intestinal polypeptide), which alters the expression of PER to speed up or slow down neurons until they are all in synch.

These synchronized networks are very precise, says Herzog. If you let them free-run in constant darkness they will lose or gain only a few minutes out of the 1,440 minutes in a day. So they’re accurate to within 1 or 2 percent.

But they’re ever so slightly off the 24-hour cycle tied to one turn of the planet on its axis. Over time they would drift far enough off that cycle to be of little use to us, unless they also had some means of synchronizing to local time.

Resetting the cellular clocks
In the article published in Neuron, Herzog and his colleagues report on a second network in the biological clock.

In this network the connections are made by the neurotransmitter GABA (γ-aminobutyric acid). “We proved we had found a GABAergic network by applying drugs that block GABA receptors on the cells,” Herzog says. “All of the connections we had mapped between neurons dropped out.”

Remarkably, when the network drops out, the clock becomes more precise. So the GABAergic network destabilizes the clock; it jiggles it a little.

Herzog points out that the GABAergic network is sparse, weak and fast (much faster than the VIP network, which relies on the slower action of a neuropeptide), as you might expect a jitter-generator to be.
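The effect of a weak, fast noise source on clock precision can be sketched with a noisy phase oscillator. This is an illustrative toy model, not the lab’s analysis; the noise amplitude standing in for GABAergic input is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def cycle_periods(noise_sd, n_days=200, dt=0.1):
    """Advance a 24 h phase oscillator and return successive cycle lengths.

    noise_sd is the std of random phase kicks, a crude stand-in for
    input from a sparse, fast GABAergic network.
    """
    omega = 2 * np.pi / 24.0          # rad/h for a 24 h period
    theta, t = 0.0, 0.0
    crossings = []
    next_cycle = 2 * np.pi
    for _ in range(int(n_days * 24 / dt)):
        theta += omega * dt + rng.normal(0.0, noise_sd) * np.sqrt(dt)
        t += dt
        while theta >= next_cycle:    # completed another "day"
            crossings.append(t)
            next_cycle += 2 * np.pi
    return np.diff(crossings)         # lengths of successive cycles

quiet = cycle_periods(noise_sd=0.0)   # GABA blocked: no jitter
jittered = cycle_periods(noise_sd=0.05)
```

With the jitter turned off, the cycle-to-cycle spread of periods collapses, mirroring the finding that blocking the GABAergic network makes the clock more precise; with jitter on, the phase wanders, which is exactly what would let external cues nudge it to a new time.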

“We think the GABAergic network is there to let our clocks adjust to environmental cues, such as gradual, seasonal changes in sunrise and sunset,” says Herzog. 

It’s a bit like whacking an old television set that has lost vertical synch to get it to resynch with the broadcast signal.

But there isn’t enough jitter in the clock to allow it to make abrupt adjustments, such as the one-hour forward jump when Daylight Saving Time starts. That “spring forward” has been statistically shown to increase the likelihood of heart attacks and car accidents, Herzog says.

Some sleep aids, such as benzodiazepines, that activate the GABA receptors may make the circadian clock a little more jittery, helping people adjust to big time jumps, such as flying across time zones. “But we don’t yet know whether they can improve jetlag; if they do, we want to know if it is because they help you sleep on the long flight or because they help the biological clock adjust to the new time zone,” Herzog cautions.

In any case, it is clear that if people repeatedly force the clock to reset, they throw off more than sleep. The biological clock regulates metabolism and cell division as well as sleep/wake cycles. So shift work, for example, is associated both with metabolic disorders, such as diabetes, and with the unregulated cell division that characterizes cancer.

Fighting our biological clocks does a lot more than make us crabby coffee drinkers.

Filed under biological clock circadian rhythms neurotransmitters suprachiasmatic nucleus neuroscience science

117 notes

Distinguishing REM sleep from other conscious states

Despite decades of research, little is known about the function of REM sleep, or the dreams that often accompany it. Rapid eye movements occur in most mammals, with a few exceptions like echidnas and dolphins. In humans, they become common by the seventh month of pregnancy, and persist throughout life even in the congenitally blind. Researchers have developed techniques to perform a full electrical sleep analysis on subjects while they are simultaneously scanned inside an MRI machine. A new study in PNAS now reports that REM sleep can be distinguished from other states of consciousness by virtue of rhythmic correlations, and anticorrelations, between different areas of the brain.


Polysomnography is a comprehensive biophysical analysis used to gauge sleep state. Most of the recorded variables, like EEG, eye movements and heart rate, are electrical in nature. In addition, many other kinds of measurements are often included like body temperature, breathing rate, or blood oxygenation. Although these variables together paint a fairly reliable picture of depth of sleep, they have little to say about what might be going on in the brain during different states of consciousness.

To address this problem, the researchers in the PNAS study used blood-oxygen level dependent (BOLD) MRI to assess functional connectivity between different regions of the brain. Their main finding was that the BOLD signal time series during REM sleep showed strong correlation between the thalamus and the visual cortex, and strong anticorrelation between the thalamus and a region of the brain known as the posterior cingulate gyrus. Furthermore, these relations showed clear rhythmic behavior with a relatively constant period of several seconds. This temporal scale corresponds roughly to many other phasic phenomena that are seen during REM sleep.
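Functional connectivity of this kind comes down to correlating BOLD time series between regions. Here is a minimal sketch with synthetic signals; the region names, the ~5-second rhythm, and the noise level are illustrative assumptions, not the study’s data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for BOLD time series: a slow rhythm with a
# period of a few seconds, sampled every 0.1 s for one minute.
t = np.arange(0, 60, 0.1)
rhythm = np.sin(2 * np.pi * t / 5.0)    # ~5 s period

noise = lambda: 0.3 * rng.normal(size=t.size)
thalamus = rhythm + noise()
visual_cortex = rhythm + noise()        # in phase -> correlated
posterior_cingulate = -rhythm + noise() # antiphase -> anticorrelated

# Pearson correlation between region pairs
r_vis = np.corrcoef(thalamus, visual_cortex)[0, 1]
r_pcc = np.corrcoef(thalamus, posterior_cingulate)[0, 1]
```

An in-phase pair yields a correlation well above zero and an antiphase pair well below it, which is the sense in which the thalamic signal tracked visual cortex and opposed the posterior cingulate during REM.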

Some of the common electrically recorded features of sleep have earned names for themselves by virtue of their uniqueness. The so-called sleep spindles and K-complexes have been associated with the cessation of EMG activity, and the onset of the disconnection of the brain from the musculature. At the level of specific neural systems, it has long been accepted that the major monoaminergic transmitter systems of the brain take a break during REM, while the cholinergic systems become tonically active. Monoamines are those amino-acid derived transmitters that have a single amine group, like noradrenaline, serotonin or histamine.

The researchers sought to partition the brain into various sensorimotor regions, and other association areas they call the default mode network (DMN). The posterior cingulate area, together with the prefrontal cortex and inferior parietal areas are said to make up this DMN. Opposite the posterior cingulate area, on the external surface of the cortex in the inferior parietal lobe, is the angular gyrus. Lying at the top of the primary fold in the brain, this area may be said to be at the convex cusp of connectivity. In other words, axons projecting from this area have more immediate short range connectivity options available to them than perhaps anywhere else in the brain. Stroke this area out, and our most fine-grained functions—mathematical, verbal and ideological—are immediately lobotomized.

As BOLD signals change relatively slowly, and can only be measured relatively slowly, they are ultimately of limited value. Uncovering the mysteries of REM sleep, and why we dream, will require much more attention to anecdote and detail. For example, it is known that binocular eye movements during REM sleep can be far from conjugate in both the vertical and horizontal planes. Those creatures that show reduced levels of REM sleep have also been shown to have a smaller corpus callosum, or frequently none at all. Something about the bilateral-binocular nature of the brain seems to feature strongly in REM sleep.

At the level of dreams, it is hard to escape the idea that they have some evolved purpose, though this is not yet within the realm of fact. Many among us have dreamt of waves or waterfalls only to awake with a crushing need to visit the bathroom. Other times we teeter at the edge of a cliff, obviously standing in for the edge of the bed, or struggle to raise a limb to defend ourselves against an imaginary foe, while in reality the limb has become hypoxic under our girth. Further removed from this base physiology, our dreams may reassemble our fears and struggles, and simultaneously exaggerate and trivialize emotional events with quizzically open-ended probes.

The synchrony and interconnection of the thalamus, accessed only at low resolution in the present study, remains of central importance in the study of conscious state. Closer inspection of sensorimotor and association areas within the thalamus itself may continue to shed more light on these issues.

(Source: medicalxpress.com)

Filed under REM sleep polysomnography consciousness BOLD MRI neuroscience science
