Posts tagged neuroscience

Botox may help stroke patients
Injecting botox into the arm muscles of stroke survivors with severe spasticity changes electrical activity in the brain and may assist with longer-term recovery, according to new research.
Researchers at NeuRA (Neuroscience Research Australia) monitored nerve activity in the arms and brains of stroke survivors before and after botulinum toxin (botox) injections in rigid and stiff muscles in the arm.
They found that botox not only improved spasticity in the arm muscles, but also altered brain activity in the cortex – the brain region responsible for movement, memory, learning and thinking.
“Botulinum toxin is used to treat a range of muscular and neurological conditions and our data shows that this treatment results in electrical and functional changes within the brain itself”, says Dr William Huynh, lead author of the study and a research neurologist at NeuRA.
“This effect of botox on the brain may arise because the toxin travels to the central nervous system directly, or because muscles treated with botox are sending different signals back to the brain”.
“Either way, we found that botox treatment in affected muscles not only improves muscle disorders in stroke patients, but also normalises electrical activity in the brain, particularly in the half of the brain not damaged by stroke”.
“Restoring normal activity in the unaffected side of the brain is particularly important because we suspect that abnormal information sent from affected muscles to the brain may be disrupting patients’ long-term recovery”, Dr Huynh concluded.
This paper is published in the journal Muscle & Nerve.
The pain puzzle: Uncovering how morphine increases pain in some people
For individuals with agonizing pain, it is a cruel blow when the gold-standard medication actually causes more pain. Adults and children whose pain gets worse when treated with morphine may be closer to a solution, based on research published in the January 6 on-line edition of Nature Neuroscience.
"Our research identifies a molecular pathway by which morphine can increase pain, and suggests potential new ways to make morphine effective for more patients," says senior author Dr. Yves De Koninck, Professor at Université Laval in Quebec City. The team included researchers from The Hospital for Sick Children (SickKids) in Toronto, the Institut universitaire en santé mentale de Québec, the US and Italy.
New pathway in pain management
The research not only identifies a target pathway to suppress morphine-induced pain but teases apart the pain hypersensitivity caused by morphine from tolerance to morphine, two phenomena previously considered to be caused by the same mechanisms.
"When morphine doesn’t reduce pain adequately the tendency is to increase the dosage. If a higher dosage produces pain relief, this is the classic picture of morphine tolerance, which is very well known. But sometimes increasing the morphine can, paradoxically, make the pain worse," explains co-author Dr. Michael Salter. Dr. Salter is Senior Scientist and Head of Neurosciences & Mental Health at SickKids, Professor of Physiology at University of Toronto, and Canada Research Chair in Neuroplasticity and Pain.
"Pain experts have thought tolerance and hypersensitivity (or hyperalgesia) are simply different reflections of the same response," says Dr. De Koninck, "but we discovered that cellular and signalling processes for morphine tolerance are very different from those of morphine-induced pain."
Dr. Salter adds, “We identified specialized cells – known as microglia – in the spinal cord as the culprit behind morphine-induced pain hypersensitivity. When morphine acts on certain receptors in microglia, it triggers the cascade of events that ultimately increase, rather than decrease, activity of the pain-transmitting nerve cells.”
The researchers also identified the molecule responsible for this side effect of morphine. “It’s a protein called KCC2, which regulates the transport of chloride ions and the proper control of sensory signals to the brain,” explains Dr. De Koninck. “Morphine inhibits the activity of this protein, causing abnormal pain perception. By restoring normal KCC2 activity we could potentially prevent pain hypersensitivity.” Dr. De Koninck and researchers at Université Laval are testing new molecules capable of preserving KCC2 functions and thus preventing hyperalgesia.
The KCC2 pathway appears to apply to short-term as well as to long-term morphine administration, says Dr. De Koninck. “Thus, we have the foundation for new strategies to improve the treatment of post-operative as well as chronic pain.”
Dr. Salter adds, “Our discovery could have a major impact on individuals with various types of intractable pain, such as that associated with cancer or nerve damage, who have stopped morphine or other opiate medications because of pain hypersensitivity.”
Cost of pain
Pain has been labelled the silent health crisis, afflicting tens of millions of people worldwide. Pain has a profound negative effect on the quality of human life. Pain affects nearly all aspects of human existence, with untreated or under-treated pain being the most common cause of disability. The Canadian Pain Society estimates that chronic pain affects at least one in five Canadians and costs Canada $55-60 billion per year, including health care expenses and lost productivity.
"People with incapacitating pain may be left with no alternatives when our most powerful medications intensify their suffering," says Dr. De Koninck, who is also Director of Cellular and Molecular Neuroscience at Institut universitaire en santé mentale de Québec.
Dr. Salter adds, “Pain interferes with many aspects of an individual’s life. Too often, patients with chronic pain feel abandoned and stigmatized. Among the many burdens on individuals and their families, chronic pain is linked to increased risk of suicide. The burden of chronic pain affects children and teens as well as adults.” These risks affect individuals with many types of pain, ranging from migraine and carpal tunnel syndrome to cancer, AIDS, diabetes, traumatic injuries, Parkinson’s disease and dozens of other conditions.

Totally blind mice get sight back
Totally blind mice have had their sight restored by injections of light-sensing cells into the eye, UK researchers report. The team in Oxford said their studies closely resemble the treatments that would be needed in people with degenerative eye disease. Similar results have already been achieved with night-blind mice.
Experts said the field was advancing rapidly, but there were still questions about the quality of vision restored. Patients with retinitis pigmentosa gradually lose light-sensing cells from the retina and can become blind. The research team, at the University of Oxford, used mice with a complete lack of light-sensing photoreceptor cells in their retinas. The mice were unable to tell the difference between light and dark.
Reconstruction
They injected “precursor” cells which will develop into the building blocks of a retina once inside the eye. Two weeks after the injections a retina had formed, according to the findings presented in the Proceedings of the National Academy of Sciences journal. Prof Robert MacLaren said: “We have recreated the whole structure, basically it’s the first proof that you can take a completely blind mouse, put the cells in and reconstruct the entire light-sensitive layer.”
Previous studies have achieved similar results with mice that had a partially degenerated retina. Prof MacLaren said this was like “restoring a whole computer screen rather than repairing individual pixels”. The mice were tested to see whether they avoided brightly lit areas and whether their pupils constricted in response to light, and their brains were scanned to check whether visual information was being processed.
Vision
Prof Pete Coffey, from the Institute of Ophthalmology at University College London, said the findings were important as they looked at the “most clinically relevant and severe case” of blindness. “This is probably what you would need to do to restore sight in a patient that has lost their vision,” he said.
However, he said this and similar studies needed to show how good the recovered vision was as brain scans and tests of light sensitivity were not enough. He said: “Can they tell the difference between a nasty animal and something to eat?”
Prof Robin Ali published research in the journal Nature showing that transplanting cells could restore vision in night-blind mice and then showed the same technique worked in a range of mice with degenerated retinas. He said: “These papers demonstrate that it is possible to transplant photoreceptor cells into a range of mice even with a severe level of degeneration. “I think it’s great that another group is showing the utility of photoreceptor transplantation.”
Researchers are already trialling human embryonic stem cells, at Moorfields Eye Hospital, in patients with Stargardt’s disease. Early results suggest the technique is safe but reliable results will take several years.
Retinal chips or bionic eyes are also being trialled in patients with retinitis pigmentosa.
Decoding Dreams
“[I was] somewhere, in a place like a studio to make a TV program or something,” a groggy study participant recounted (in Japanese). “A male person ran with short steps from the left side to the right side. Then, he tumbled.” The participant had recently been awoken by Masako Tamaki, a postdoc in the lab of neuroscientist Yukiyasu Kamitani of the ATR Computational Neuroscience Laboratories in Kyoto, Japan. He was lying in a functional magnetic resonance imaging (fMRI) scanner, doing his best to recall what he had been dreaming about. “He stumbled over something, and stood up while laughing, and said something,” the participant continued. “He said something to persons on the left side.”
At first blush, the story doesn’t seem particularly informative. But the study subject saw a man, not a woman. And he was inside some sort of workplace. That fragmented information is enough for Kamitani and his team, who recorded dream appearances of 20 key objects, such as “male” or “room,” and used a machine-learning algorithm to correlate those concepts with the fMRI images to find patterns that could be used to predict what people were dreaming about without having to wake them. Such information could help inform the study of why people dream, an elusive question in neurobiology, Kamitani says. “Knowing what is represented during sleep would help to understand the function of dreaming.”
Analyzing more than 200 dream reports—some 30–45 hours of interviews with each of three participants—Kamitani and his colleagues built a “dream-trained decoder” based on fMRI imagery of the V1, V2, and V3 areas of the visual cortex. “We find some rule, or mapping, or pattern between what the person is seeing and what activity is happening in the brain,” Kamitani explains. And it worked, according to Kamitani, who presented the results at the Society for Neuroscience meeting in New Orleans in October 2012, predicting whether or not the 20 objects occurred in dreams with 75–80 percent accuracy.
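The decoding approach can be illustrated with a toy sketch. This is not the team's actual pipeline (their classifier, features, and data are far richer); it is a hypothetical, self-contained simulation showing the core idea: learn the activity pattern associated with each concept label from reported dreams, then predict which labels a new scan contains.

```python
import random

random.seed(0)

# Toy stand-in for the study's setup: two of the 20 reported concept
# labels, and a 50-"voxel" activity vector per scan (both invented here).
LABELS = ["male", "room"]
N_VOXELS = 50

# Hypothetical fixed activity pattern evoked by each concept.
concept_pattern = {
    lab: [random.gauss(0, 1) for _ in range(N_VOXELS)] for lab in LABELS
}

def simulate_scan(present):
    """Noisy fMRI snapshot: noise plus the patterns of concepts in the dream."""
    scan = [random.gauss(0, 0.3) for _ in range(N_VOXELS)]
    for lab in present:
        for i, v in enumerate(concept_pattern[lab]):
            scan[i] += v
    return scan

def random_dream():
    return {lab for lab in LABELS if random.random() < 0.5}

# "Dream reports": scans paired with the concepts the sleeper reported.
train = []
for _ in range(200):
    present = random_dream()
    train.append((simulate_scan(present), present))

def mean(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

# One nearest-centroid classifier per label: the average scan when the
# concept was reported vs. the average scan when it was not.
models = {}
for lab in LABELS:
    pos = [s for s, p in train if lab in p]
    neg = [s for s, p in train if lab not in p]
    models[lab] = (mean(pos), mean(neg))

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def decode(scan):
    """Predict which concepts a scan contains, without a verbal report."""
    return {lab for lab, (pos_c, neg_c) in models.items()
            if dist2(scan, pos_c) < dist2(scan, neg_c)}

# Score per-label accuracy on fresh simulated dreams.
correct = total = 0
for _ in range(100):
    present = random_dream()
    predicted = decode(simulate_scan(present))
    for lab in LABELS:
        total += 1
        correct += (lab in predicted) == (lab in present)
print(f"per-label accuracy: {correct / total:.2f}")
```

The simulation's signal-to-noise ratio is deliberately generous, so the toy decoder scores far better than the 75–80 percent reported for real dreams; the point is only the structure of the problem, one binary predictor per concept trained against reported content.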
But while Kamitani’s dream-decoding study is interesting, says neurobiologist David Kahn of Harvard Medical School, the algorithms used are quite primitive, only providing a handful of clues about the dream’s content. “We still have a long way to go before we can actually re-create the story that is the dream,” he says. “This is almost science fiction, because we’re way, way far from it … [but] this is an added tool.”
“Decoding is very primitive,” Kamitani agrees, “but I think there are a lot of potentials.” One way to get a more complete picture of the dream is to increase the complexity of the decoder, he notes. In this first study, for example, the researchers focused on nouns representing visual objects, but going forward, Kamitani says he hopes to include other concepts, like verbs. “By analyzing that aspect we may be able to add some action aspects in the dream.”
Furthermore, researchers might not have to fully interpret the dream themselves to benefit from the new decoder. Instead, the clues gleaned from the fMRI images could simply be used to jog participants’ memories. “We know that dreams—even the most vivid dreams we remember, [like] nightmares or lucid dreams—are really fragile memories,” says Antonio Zadra, an experimental psychologist at the University of Montreal. “Unless you wrote it down or told it to someone in the morning, usually even before lunch, that memory will start fading. And by night, you might just have the essence.”
Unfortunately, that failing memory was the only resource for researchers studying dreams. Now, with a little bit of supplemental information, they may be able to help participants recall dreams more precisely. “The subjective reports are never complete,” Kamitani says. “By giving the subject what we reconstructed, they may remember something more.”
At an even more basic level, the decoder could help scientists understand what’s happening in the brain during dreaming. “To create this whole virtual world out of nothing—with no visual input or auditory input—is quite fascinating and undoubtedly very complex,” Zadra says. “This research will certainly help us better understand what brain areas are doing what, to even allow for this to happen.”
In Kamitani’s study, for example, the researchers found that areas of higher-level visual processing, which respond to more abstract features, were more useful for interpreting dream content than lower-level processing areas. This makes sense, given that those lower areas of the visual cortex are more closely connected to the direct input from the retina. But, Kamitani notes, this could simply have to do with the way the study was designed. “We didn’t train the decoder with low-level visual features,” such as shape or contrast, he says. “We just used the semantic category information.”
Indeed, given the richness of the dreaming experience, such visual qualities may well be encoded during sleep. “Your brain creates a whole virtual world for you when you are dreaming, complete with characters, settings, interactions, dialogues,” says Zadra. “But you’re actually in your bed asleep; there is no visual input. So your brain is literally creating this virtual world from A to Z.”
Scientists explore the illusion of memory
A memory might seem like a permanent, precious essence carved deep into the circuits of the brain. But it is not. Instead, scientists are discovering that a memory changes every time you think about it.
"Every time you recall a memory, it becomes sensitive to disruption. Often that is used to incorporate new information into it." That’s the blunt assessment from one of the world’s leading experts on memory, Dr. Eric Kandel from Columbia University.
And that means our memories are not abstract snapshots stored forever in a bulging file in our mind, but rather, they’re a collection of brain cells — neurons that undergo chemical changes every time they’re engaged.
So when we think about something from the past, the memory is called up like a computer file, reviewed and revised in subtle ways, and then sent back to the brain’s archives, now modified slightly, updated, and changed.
As scientists increasingly understand the biological process of memory, they are also learning how to interrupt it, and that means they might one day be able to ease the pain of past trauma, or alter destructive habits and addictions, as though shaking an Etch A Sketch, erasing the scribbles on the mind, and starting fresh.
In his McGill University lab, researcher Karim Nader routinely erases the memory of his laboratory rats. But first he has to give them a memory and he does that by putting them in an isolation cubicle, playing a tone, and then delivering a small electrical shock to their feet.
Why good resolutions about taking up a physical activity can be hard to keep
The collective appraisal conducted by Inserm in 2008 highlighted the many preventive health benefits of regular physical activity. Such activity is limited, however, by our lifestyle in today’s industrial society. While varying degrees of physical inactivity may be partly explained by social causes, they are also rooted in biology.
“The inability to experience pleasure during physical activity, which is often quoted as one explanation why people partially or completely drop out of physical exercise programmes, is a clear sign that the biology of the nervous system is involved”, explains Francis Chaouloff.
But how exactly? The neurobiological mechanisms underlying physical inactivity had yet to be identified.
Francis Chaouloff (Giovanni Marsicano’s team at the NeuroCentre Magendie; Inserm joint research unit, Université Bordeaux Ségalen) and his team have now begun to decipher these mechanisms. Their work clearly identifies the endogenous cannabinoid (or endocannabinoid) system as playing a decisive role, in particular one of its brain receptors. This is by no means the first time that data has pointed to interactions between the endocannabinoid system, which is the target of delta9-tetrahydrocannabinol (the active ingredient of cannabis), and physical exercise. It was discovered ten years ago that physical exercise activated the endocannabinoid system in trained sportsmen, but its exact role remained a mystery for many years. Three years ago, the same research team in Bordeaux observed that when given the opportunity to use a running wheel, mutant mice lacking the CB1 cannabinoid receptor, which is the principal receptor of the endocannabinoid system in the brain, ran for a shorter time and over shorter distances than healthy mice.

The research published in Biological Psychiatry this month seeks to understand how, where and why the lack of CB1 receptor reduces voluntary exercise performance (by 20 to 30%) in mice allowed access to a running wheel three hours per day.
The researchers used various lines of mutant mice for the CB1 receptor, together with pharmacological tools. They began by demonstrating that the CB1 receptor controlling running performance is located at the GABAergic nerve endings. They went on to show that the receptor is located in the ventral tegmental area of the brain, which is an area involved in motivational processes relating to reward, whether the reward is natural (food, sex) or associated with the consumption of psychoactive substances.
The production of new neurons in the normal adult cortex in response to the antidepressant fluoxetine is reported in a study published online this week in Neuropsychopharmacology.
The research team, which is based at the Institute for Comprehensive Medical Science, Fujita Health University, Aichi, has previously demonstrated that neural progenitor cells exist at the surface of the adult cortex, and, moreover, that ischemia enhances the generation of new inhibitory neurons from these neural progenitor cells. These cells were accordingly named “Layer 1 Inhibitory Neuron Progenitor cells” (L1-INP). However, until now it was not known whether L1-INP-related neurogenesis could be induced in the normal adult cortex.
Tsuyoshi Miyakawa, Koji Ohira, and their colleagues employed fluoxetine, a selective serotonin reuptake inhibitor, and one of the most widely used antidepressants, to stimulate the production of new neurons from L1-INP cells. A large percentage of these newly generated neurons were inhibitory GABAergic interneurons, and their generation coincided with a reduction in apoptotic cell death following ischemia. This finding highlights the potential neuroprotective response induced by this antidepressant drug. It also lends further support to the postulation that induction of adult neurogenesis in cortex is a relevant prevention/treatment option for neurodegenerative diseases and psychiatric disorders.
(Source: eurekalert.org)
USF and VA researchers find long-term consequences for those suffering traumatic brain injury
Researchers from the University of South Florida and colleagues at the James A. Haley Veterans’ Hospital, studying the long-term consequences of traumatic brain injury (TBI) using rat models, have found that, over time, TBI results in progressive brain deterioration characterized by elevated inflammation and suppressed cell regeneration. However, therapeutic intervention, even in the chronic stage of TBI, may still help prevent cell death.
Their study is published in the current issue of the journal PLOS ONE.
“In the U.S., an estimated 1.7 million people suffer from traumatic brain injury,” said Dr. Cesar V. Borlongan, professor and vice chair of the department of Neurosurgery and Brain Repair at the University of South Florida (USF). “In addition, TBI is responsible for 52,000 early deaths, accounts for 30 percent of all injury-related deaths, and costs approximately $52 billion yearly to treat.”
While TBI is generally considered an acute injury, secondary cell death caused by neuroinflammation and an impaired repair mechanism accompany the injury over time, said the authors. Long-term neurological deficits from TBI related to inflammation may cause more severe secondary injuries and predispose long-term survivors to age-related neurodegenerative diseases, such as Alzheimer’s disease, Parkinson’s disease and post-traumatic dementia.
Since the U.S. military has been involved in conflicts in Iraq and Afghanistan, the incidence of traumatic brain injury suffered by troops has increased dramatically, primarily from improvised explosive devices (IEDs), according to Martin Steele, Lieutenant General, U.S. Marine Corps (retired), USF associate vice president for veterans research, and executive director of Military Partnerships. In response, the U.S. Veterans Administration has increasingly focused on TBI research and treatment.
“Progressive injury to hippocampal, cortical and thalamic regions contributes to long-term cognitive damage post-TBI,” said study co-author Dr. Paul R. Sanberg, USF senior vice president for research and innovation. “Both military and civilian patients have shown functional and cognitive deficits resulting from TBI.”
Because TBI involves both acute and chronic stages, the researchers noted that animal model research on the chronic stages of TBI could provide insight into identifying therapeutic targets for treatment in the post-acute stage.
“Using animal models of TBI, our study investigated the prolonged pathological outcomes of TBI in different parts of the brain, such as the dorsal striatum, thalamus, corpus callosum white matter, hippocampus and cerebral peduncle,” explained Borlongan, the study’s lead author. “We found that a massive neuroinflammation after TBI causes a second wave of cell death that impairs cell proliferation and impedes the brain’s regenerative capabilities.”
Upon examining the rat brains eight weeks post-trauma, the researchers found “a significant up-regulation of activated microglia cells, not only in the area of direct trauma, but also in adjacent as well as distant areas.” The location of inflammation correlated with the cell loss and impaired cell proliferation researchers observed.
Microglia cells act as the first and main form of immune defense in the central nervous system and make up 20 percent of the total glial cell population within the brain. They are distributed across large regions throughout the brain and spinal cord.
“Our study found that cell proliferation was significantly affected by a cascade of neuroinflammatory events in chronic TBI and we identified the susceptibility of newly formed cells within neurogenic niches and suppression of neurological repair,” wrote the authors.
The researchers concluded that, while the progressive deterioration of the TBI-affected brain over time suppressed efforts of repair, intervention, even in the chronic stage of TBI, could help prevent further deterioration.
When initial computed tomography (CT) scans show bleeding within the brain after mild head injury, decisions about repeated CT scans should be based on the patient’s neurological condition, according to a report in the January issue of Neurosurgery, official journal of the Congress of Neurological Surgeons. The journal is published by Lippincott Williams & Wilkins, a part of Wolters Kluwer Health.
The study questions the need for routinely obtaining repeated CT scans in patients with mild head trauma. “The available evidence indicates that it is unnecessary to schedule a repeat CT scan after mild head injury when patients are unchanged or improving neurologically,” according to the study by Dr. Saleh Almenawer and colleagues of McMaster University, Hamilton, Ont., Canada.
Are Repeated Scans Necessary after Mild Head Trauma?
In a review of their hospital’s trauma database, the researchers identified 445 adult patients with mild head injury who had evidence of intracranial hemorrhage (ICH)—bleeding within the brain—on an initial CT scan. In many trauma centers, it’s standard practice to schedule a second CT scan within 24 hours after ICH is detected, to make sure that the bleeding has not progressed.
To evaluate the need for routine repeated scans, Dr. Almenawer and colleagues looked at how many patients needed surgery or other additional treatments, and whether the change in treatment was triggered by changes in the patients’ neurological condition or based on the routine CT scan alone. (For patients whose neurological condition worsened, CT was performed immediately.)
Overall, 5.6 percent of the patients required a change in treatment after the second CT scan. Most of these patients underwent surgery (craniectomy) to relieve pressure on the brain. Nearly all patients who underwent further treatment developed neurological changes leading to immediate CT scanning.
Just two patients had a change in treatment based solely on routine repeated CT scans. Both of these patients received a drug (mannitol) to reduce intracranial pressure, rather than surgery.
Decisions on CT Scans Can Be Based on Neurological Status
Dr. Almenawer and colleagues extended the same method to patients reported in 15 previous studies of CT scanning after mild head injury. Including the 445 new patients, the analysis included a total of 2,693 patients. Overall, 2.7 percent of patients had a change in management based on neurological changes. In contrast, just 0.6 percent had treatment changes based on CT scans only.
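As a quick arithmetic check on the pooled figures, the percentages can be converted back into approximate patient counts. This is an illustration only; the rounded counts below are inferred from the reported rates, not taken from the paper.

```python
# Pooled review figures: 15 prior studies plus the 445 new patients.
pooled_patients = 2693

neuro_rate = 0.027  # management changed after neurological deterioration
scan_rate = 0.006   # management changed by the routine repeat CT alone

neuro_count = round(pooled_patients * neuro_rate)
scan_count = round(pooled_patients * scan_rate)

print(neuro_count, scan_count)              # roughly 73 vs 16 patients
print(f"ratio: {neuro_count / scan_count:.1f}x")
```

In other words, neurological examination flagged roughly four to five times as many treatment changes as routine imaging alone, which is the contrast the authors' conclusion rests on.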
Bleeding within the brain is a potentially life-threatening condition, prompting routine repeated CT scans after even mild head injury. The researchers write, “Although CT scanners are very useful tools, in an era of diminishing resources and a need to justify medical costs, this practice needs to be evaluated.” Each scan also exposes the patient to radiation, contributing to increased cancer risk.
The new study questions the need for routine repeated CT scans, as long as the patient’s neurological condition is improving or stable. “In the absence of supporting data, we question the value of routine follow-up imaging given the associated accumulative increase in cost and risks,” Dr. Almenawer and coauthors conclude.
Neurological examination is the “simple yet important” predictive factor leading to changes in treatment and guiding the need for repeat CT scanning after mild head injury, the researchers add. They emphasize that their findings don’t necessarily apply to patients with more severe head injury.
(Source: newswise.com)

Professor Discovers New Information in the Understanding of Autism and Genetics
Research out of the George Washington University (GW), published in the journal Proceedings of the National Academy of Sciences (PNAS), reveals another piece of the puzzle in a genetic developmental disorder that causes behavioral diseases such as autism. Anthony-Samuel LaMantia, Ph.D., professor of pharmacology and physiology at the GW School of Medicine and Health Sciences (SMHS) and director of the GW Institute for Neuroscience, along with post-doctoral fellow Daniel Meechan, Ph.D., and Thomas Maynard, Ph.D., associate research professor of pharmacology and physiology at GW SMHS, authored the study titled “Cxcr4 regulation of interneuron migration is disrupted in 22q11.2 deletion syndrome.”
For the past nine years, LaMantia and his colleagues have been investigating how behavioral disorders such as autism, attention deficit hyperactivity disorder (ADHD), and schizophrenia arise during early brain development. His work published in PNAS focuses specifically on the effects of diminished 22q11.2 gene dosage on cortical circuit development.
This research shows for the first time that genetic lesions known to be associated with autism and other behavioral diseases disrupt cellular and molecular mechanisms that ensure normal development of a key type of cortical neuron: the interneuron. LaMantia and his colleagues had found previously that one type of cortical neuron, the projection neuron, is not generated in appropriate numbers during development in a mouse model of 22q11.2 deletion syndrome. In the current study published in PNAS, LaMantia found that interneurons, while made in the right numbers at their birthplace outside of the cortex, are not able to move properly into the cortex, where they are needed to control cortical circuit activity. The research shows that the main reason they don’t move properly is diminished expression and activity of a key regulatory pathway for migration, the Cxcr4 cytokine receptor.
“This gives us two pieces of the puzzle for this genetic developmental disorder,” said LaMantia. “These two pieces tell us that in very early development, those with 22q11.2 deletion syndrome do not make enough cells in one case, and do not put the other cells in the right place. This occurs not because of some degenerative change, but because the mechanisms that make these cells and put them in the right place during the first step of development have gone awry due to mutation.”
The next step in LaMantia’s research is to probe further into the molecular mechanisms that disrupt the proliferation of projection neurons and migration of interneurons. “If we understand that better and understand its consequences, we can go about fixing it,” said LaMantia. “We want to understand why cortical circuits don’t get built properly due to the genetic deletion of chromosome 22.”
LaMantia recently received the latest installment of a 10-year R01 grant from the National Institutes of Health and the Eunice Kennedy Shriver National Institute of Child Health & Human Development for his project, titled “Regulation of 22q11 Genes in Embryonic and Adult Forebrain.” This will allow him to further his research.