Neuroscience

Articles and news from the latest research reports.

148 notes

Paying closer attention to attention

Ellen’s (not her real name) adoptive parents weren’t surprised when the school counselor suggested that she might have attention deficit hyperactivity disorder (ADHD).

Several professionals had made this suggestion over the years. Given that homework led to one explosion after another, and that at school Ellen, who is eleven, spent her days jiggling up and down in her seat, unable to concentrate for more than ten minutes, it seemed a reasonable assumption. Yet her parents always felt that ADHD didn’t quite capture the extent of Ellen’s issues. Fortunately, the school counselor was familiar with fetal alcohol spectrum disorder (FASD). When she learned that Ellen’s birth mother had consumed alcohol during pregnancy, she raised the possibility that Ellen’s problems could be attributable to FASD and referred her for further assessment.

It’s a familiar story, and most of us reading about Ellen would assume that she did indeed suffer from ADHD.

But now researchers from McGill have suggested that there may be an overreporting of attention problems in children with FASD, simply because parents and teachers are using a misplaced basis for comparison. These children are being tested and compared against children of the same physical, or chronological, age, rather than against children of the same mental age, which is often considerably lower.

“Because the link between fetal alcohol syndrome and ADHD is so commonly described in the literature, both parents and teachers are more likely to expect these children to have attention problems,” says Prof. Jacob Burack, a professor in McGill’s Department of Educational and Counselling Psychology and the senior author on a recent study on the subject. “But what teachers often don’t recognize is that although the child they are dealing with is eleven years old in chronological terms, they are actually functioning at the developmental age of an eight-year-old. That’s a pretty big difference. And when you use mental age as the basis of comparison, many of the attention problems that have been described in children with FASD no longer seem of primary importance.”

The researchers recruited children with FASD whose average chronological age was just under twelve years old. But their average mental age, determined by standard tests, was actually closer to nine-and-a-half years old. (The children were recruited through the Asante Centre for Fetal Alcohol Syndrome in British Columbia, and though the number of children studied may appear small, this is a fairly typical size for studies on FASD, given the difficulties of the diagnostic process.)

These children were then compared with typically developing children whose average chronological age was about eight-and-a-half years old and whose average mental age was similar to that of the group diagnosed with FASD.

Using tests that measure specific aspects of attention, the researchers compared the performance of children with FASD with that of children of the same mental age. What they found was that while children like Ellen had difficulties with certain attention skills, notably shifting attention from one object to another, there were other areas, such as focus, where they had no significant difficulties at all. In hockey terms, these children would typically have no difficulty focusing on the puck in the arena, but would have trouble following the puck as it is passed from one player to another.
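To make the matching logic concrete, here is a minimal sketch in Python. Every score and measure below is invented for illustration; the study's real data and instruments differ.

```python
# Hypothetical sketch of mental-age matching: compare a clinical group
# with controls matched on mental age, not chronological age.
# All scores are invented for illustration.
from statistics import mean

fasd = [  # chronologically ~12, mentally ~9.5
    {"mental_age": 9.4, "shift_score": 62, "focus_score": 88},
    {"mental_age": 9.6, "shift_score": 58, "focus_score": 91},
    {"mental_age": 9.5, "shift_score": 60, "focus_score": 86},
]
controls = [  # chronologically ~8.5, mentally ~9.5 (mental-age matched)
    {"mental_age": 9.5, "shift_score": 81, "focus_score": 89},
    {"mental_age": 9.4, "shift_score": 79, "focus_score": 90},
    {"mental_age": 9.6, "shift_score": 83, "focus_score": 87},
]

def group_gap(key):
    """Mean control score minus mean FASD score on one attention measure."""
    return mean(c[key] for c in controls) - mean(f[key] for f in fasd)

# Against mental-age-matched controls, a deficit remains in attention
# *shifting* but not in sustained *focus*.
print(round(group_gap("shift_score"), 1))   # 21.0
print(round(group_gap("focus_score"), 1))   # 0.3
```

The point of the matching is visible in the two gaps: once mental age is equated, only one of the two attention measures still separates the groups.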

This suggests to Dr. Kimberly Lane, the PhD student who conducted the research, that there is a need to develop a more nuanced understanding of attention skills. “We use words like attention loosely, but it’s really an umbrella term that covers various aspects of attending to different people or events or environments,” says Dr. Lane. “By using more complex assessment techniques of various aspects of attention it will be possible to get a better picture of the attention difficulties faced by children with FASD,” she adds.

“But no matter what the tests say, it’s important for teachers and parents to understand that the difficulties these children have with attention may be less important than their more general problems, and we need to work with them as they are.”

(Source: mcgill.ca)

Filed under attention disorders FASD selective attention attention neuroscience science

81 notes

Controlling Brain Waves to Improve Vision

Have you ever accidentally missed a red light or a stop sign? Or have you heard someone mention a visible event that you passed by but totally missed seeing?

“When we have different things competing for our attention, we can only be aware of so much of what we see,” said Kyle Mathewson, Beckman Institute Postdoctoral Fellow. “For example, when you’re driving, you might really be concentrating on obeying traffic signals.”

But say there’s an unexpected event: an emergency vehicle, a pedestrian, or an animal running into the road—will you actually see the unexpected, or will you be so focused on your initial task that you don’t notice?

“In the car, we may see something so brief or so faint, while we’re paying attention to something else, that the event won’t come into our awareness,” says Mathewson. “If you present this scenario hundreds of times to someone, sometimes they will see the unexpected event, and sometimes they won’t because their brain is in a different preparation state.”

By using a novel technique to test brain waves, Mathewson and colleagues are discovering how the brain processes external stimuli that do and don’t reach our awareness. A paper about their results, “Dynamics of Alpha Control: Preparatory Suppression of Posterior Alpha Oscillations by Frontal Modulators Revealed with Combined EEG and Event-related Optical Signal,” published this month in the Journal of Cognitive Neuroscience, reveals how alpha waves, typically thought of as your brain’s electrical activity while it’s at rest, can actually influence what we see or don’t see.

The researchers used both electroencephalography (EEG) and the event-related optical signal (EROS), developed in the Cognitive Neuroimaging Laboratory of Gabriele Gratton and Monica Fabiani, professors of psychology, members of the Beckman Institute’s Cognitive Neuroscience Group, and authors of the study.

While EEG records the electrical activity along the scalp, EROS uses infrared light passed through optical fibers to measure changes in optical properties in the active areas of the cerebral cortex. Because the hard skull lies between the EEG sensors and the brain, it can be difficult to determine exactly where signals are produced. EROS, which examines how light is scattered, can noninvasively pinpoint activity within the brain.

“EROS is based on near-infrared light,” explained Fabiani and Gratton via email. “It exploits the fact that when neurons are active, they swell a little, becoming slightly more transparent to light: this allows us to determine when a particular part of the cortex is processing information, as well as where the activity occurs.”

This allowed the researchers not only to measure activity in the brain, but also to map where the alpha oscillations were originating. Their discovery: the alpha waves are produced in the cuneus, located in the part of the brain that processes visual information.

The alpha waves can inhibit what is processed visually, making it hard for you to see something unexpected.

By focusing your attention and concentrating more fully on what you are experiencing, however, the executive function of the brain can come into play and provide “top-down” control—putting a brake on the alpha waves, thus allowing you to see things that you might have missed in a more relaxed state.
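The alpha-suppression idea can be illustrated with a toy analysis. The sketch below is not the study's pipeline; it simulates a single EEG channel in which a 10 Hz alpha rhythm is damped halfway through, as if attention had been engaged, and estimates alpha-band power with a standard band-pass filter.

```python
# Toy illustration of alpha suppression (NOT the study's pipeline):
# estimate alpha-band (8-12 Hz) power in a simulated EEG trace where a
# 10 Hz rhythm is strong while "at rest" and damped while "attending".
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                            # sampling rate (Hz), typical for EEG
t = np.arange(0, 4, 1 / fs)         # 4 s of signal
rng = np.random.default_rng(0)

# Strong alpha for the first 2 s, suppressed (x0.2) afterwards, plus noise.
alpha = np.sin(2 * np.pi * 10 * t) * np.where(t < 2, 1.0, 0.2)
eeg = alpha + 0.3 * rng.standard_normal(t.size)

# Band-pass 8-12 Hz, then take the analytic-signal envelope as "power".
b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
envelope = np.abs(hilbert(filtfilt(b, a, eeg)))

resting = envelope[t < 2].mean()
attending = envelope[t >= 2].mean()
print(resting > 2 * attending)      # alpha is clearly suppressed
```

Even with broadband noise on top, the envelope of the filtered trace cleanly separates the "resting" and "attending" halves.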
“We found that the same brain regions known to control our attention are involved in suppressing the alpha waves and improving our ability to detect hard-to-see targets,” said Diane Beck, a member of the Beckman Institute’s Cognitive Neuroscience Group and one of the study’s authors.

“Knowing where the waves originate means we can target that area specifically with electrical stimulation,” said Mathewson. “Or we can give people moment-to-moment feedback, which could be used to alert drivers that they are not paying attention and should increase their focus on the road ahead, or in other situations alert students in a classroom that they need to focus more, or athletes, or pilots and equipment operators.”

The study examined 16 subjects and mapped the electrical and optical data onto individual MRI brain images.

Filed under brain activity brainwaves neural activity EROS EEG visual cortex alpha oscillations neuroscience science

46 notes

Researchers Pinpoint Protein Crucial For Development Of Biological Rhythms In Mice

Johns Hopkins researchers report that they have identified a protein essential to the formation of the tiny brain region in mice that coordinates sleep-wake cycles and other so-called circadian rhythms.

(Image caption: An illustration of the activity patterns of normal mice (left), and of mice whose “master clock,” or SCN, has been disrupted (right). Credit: Cell Reports, Bedont et al.)

By disabling the gene for that key protein in test animals, the scientists were able to home in on the mechanism by which that brain region, known as the suprachiasmatic nucleus or SCN, becomes the body’s master clock while the embryo is developing.

The results of their experiments, reported in Cell Reports, are an important step toward understanding how to better manage the disruptive effects experienced by shift workers, as well as toward treating people with sleep disorders, the researchers say.

“Shift workers tend to have higher rates of diabetes, obesity, depression and cancer. Many researchers think that’s somehow connected to their irregular circadian rhythms, and thus to the SCN,” says Seth Blackshaw, Ph.D., an associate professor in the Department of Neuroscience and the Institute for Cell Engineering at the Johns Hopkins University School of Medicine. “Our new research will help us and other researchers isolate the specific impacts of the SCN on mammalian health.”

Blackshaw explains that every cell in the body has its own “clock” that regulates aspects such as its rate of energy use. The SCN is the master clock that synchronizes these individual timekeepers so that, for example, people feel sleepy at night and alert during the day, are hungry at mealtimes, and are prepared for the energy influx that hits fat cells after eating. “A unique property of the SCN is that if its cells are grown in a dish, they quickly synchronize their clocks with one another,” Blackshaw says.

But while evidence like this gave researchers an idea of the SCN’s importance, they hadn’t completely teased its role apart from that of the body’s other clocks, or from other parts of the brain.

The Johns Hopkins team looked for ways to knock down SCN function by targeting and disabling certain genes that disrupt only the formation of the SCN clock. They analyzed which genes were active in different areas of developing mouse brains to identify those that were “turned on” only in the SCN. One of the “hits” was Lhx1, a member of a family of genes whose protein products affect development by controlling the activity of other genes. When the researchers turned off Lhx1 in the SCN of mouse embryos, the grown mice lacked distinctive biochemical signatures seen in the SCN of normal mice.

The genetically modified mice behaved differently, too. Some fell into a pattern of two to three separate cycles of sleep and activity per day, in contrast to the single daily cycle found in normal mice, while others’ rhythms were completely disorganized, Blackshaw says. Though an SCN is present in mutant mice, it communicates poorly with clocks elsewhere in the body.
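The kind of rhythm disruption described here is typically quantified with a periodogram of the activity record. A hedged sketch, not the paper's actual analysis, using simulated hourly activity counts:

```python
# Hedged sketch (not the paper's analysis): detect whether an activity
# record cycles once per day or several times, via a simple periodogram
# of simulated hourly activity counts.
import numpy as np

hours = np.arange(0, 24 * 10)         # 10 days of hourly bins

def dominant_period(activity):
    """Strongest non-DC period, in hours, of an activity trace."""
    spectrum = np.abs(np.fft.rfft(activity - activity.mean()))
    freqs = np.fft.rfftfreq(activity.size, d=1.0)   # cycles per hour
    return 1.0 / freqs[1:][spectrum[1:].argmax()]

normal = 1 + np.cos(2 * np.pi * hours / 24)   # one activity bout per day
mutant = 1 + np.cos(2 * np.pi * hours / 8)    # three bouts per day

print(round(dominant_period(normal)))  # 24
print(round(dominant_period(mutant)))  # 8
```

A mouse with two to three sleep/activity cycles per day would show its strongest spectral peak well below 24 hours, exactly as the simulated "mutant" trace does.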

Blackshaw says he expects that the mutant mice will prove a useful tool in finding whether disrupted signaling from the SCN actually leads to the health problems that shift workers experience, and if so, how this might happen. Although mouse models do not correlate fully to human disease, their biochemical and genetic makeup is closely aligned.

Blackshaw’s team also plans to continue studying the biochemical chain of events surrounding the Lhx1 protein to determine which proteins turn the Lhx1 gene on and which genes it, in turn, directly switches on or off. Those genes could be at the root of inherited sleep disorders, Blackshaw says, and the proteins they make could prove useful as starting points for the development of new drugs to treat insomnia and even jet lag.

Filed under circadian rhythms suprachiasmatic nucleus neuropeptides lhx1 neuroscience science

48 notes

Fruitfly Study Identifies Brain Circuit that Drives Daily Cycles of Rest, Activity

Amita Sehgal, PhD, a professor of Neuroscience at the Perelman School of Medicine, University of Pennsylvania, describes in Cell a circuit in the brain of fruit flies that controls their daily, rhythmic behavior of rest and activity. The new study also found that the fly version of the human brain protein known as corticotrophin-releasing factor (CRF) is a major coordinating molecule in this circuit. Fly CRF, called DH44, is required for rest/activity cycles and is produced in cells that receive input from the clock cells in the fly brain. In mammals, CRF is secreted rhythmically; it drives the expression of glucocorticoids such as cortisol and is associated with stress and anxiety.

Animal models like flies are helping to fill gaps in current knowledge about how the brain works, notes Sehgal. Indeed, she says, the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) initiative, a project of the National Institutes of Health, includes the study of simple animal models, which are expected to provide more detailed insight into brain function.

Though much is known about the cellular and molecular components of the clock, the connections that link clock cells to overt behaviors, such as rest/activity behavior, have not been identified. “This study is essentially a map-of-the-circuitry experiment,” says Sehgal, who is also an investigator with the Howard Hughes Medical Institute (HHMI). Like humans, flies are active during the day — walking, flying, feeding and mating — and spend most of the night asleep.

“We conducted a screen for circadian-relevant neurons in the fly brain and found that cells of the pars intercerebralis — the fly version of the mammalian hypothalamus — comprise an important component of the circadian output pathway for rest/activity rhythms in flies,” Sehgal says. The mammalian hypothalamus is a neuroendocrine structure that regulates sleep, circadian rhythms, feeding, and metabolism.

The Penn team randomly targeted cells, activating neuronal firing with a transgene designed for this purpose, to see which cells are important in rest/activity behavior. They found that cells in the pars intercerebralis (PI) are essential for rhythmic behavior, and that PI cells are connected to the clock cells through a circuit of at least two synapses.

Molecular profiling of PI cells identified the fly version of DH44 as a circadian molecule that is specifically expressed by PI neurons and required for normal rest/activity rhythms in flies. And when the scientists selectively activated or removed just six PI cells positive for DH44, the fly’s activity cycles became irregular. In other words, the flies no longer restricted their sleep to the dark and their activity to the light, but instead showed a more random distribution of these behaviors.

Filed under fruit flies clock cells circadian clock DH44 animal model neuroscience science

119 notes

Genetic legacy from the Ottoman Empire: Single mutation causes rare brain disorder

An international team of researchers has identified a previously unknown neurodegenerative disorder and traced it to a single mutation that arose in one individual born in Turkey during the height of the Ottoman Empire, about 16 generations ago.

(Image caption: An fMRI scan of the brain of a patient with CLP1 mutation reveals severe atrophy of the brainstem (red line) and cerebellum (blue) as well as lack of formation of the corpus callosum (green), which connects both sides of the cerebrum (yellow), which is also atrophied. The lines outline approximately the expected sizes of the brain areas. A study traced the mutation to a single individual born in Turkey during the Ottoman Empire, some 16 generations ago.)

The genetic cause of the rare disorder was discovered during a massive analysis of the individual genomes of thousands of Turkish children suffering from neurological disorders.

“The more we learn about basic mechanisms behind rare forms of neurodegeneration, the more novel insights we can gain into more common diseases such as Alzheimer’s or Lou Gehrig’s Disease,” said Murat Gunel, the Nixdorff-German Professor of Neurosurgery, and professor of genetics and neurobiology at Yale.

Gunel is a senior co-author of one of two papers published in the April 24 issue of the journal Cell that document the devastating effects of a mutation in the CLP1 gene. Gunel and colleagues at Yale Center for Mendelian Genomics along with Joseph Gleeson’s group at University of California-San Diego compared DNA sequencing results of more than 2,000 children from different families with neurodevelopmental disorders. In four apparently unrelated families, they identified the exact same mutation in the CLP1 gene. Working with the Frank Bass group from the Netherlands, the researchers also studied how CLP1 mutations interfered with the transfer of information encoded within genes to cells’ protein-making machinery.

The discovery of the identical mutation in seemingly unrelated families originally from eastern Turkey suggested an ancestral mutation, dating back several generations, noted the researchers.

Affected children suffer from intellectual disability, seizures, and delayed or absent mental and motor development, and their imaging studies show atrophy affecting the cerebral cortex, cerebellum, and the brain stem.

The second Cell paper by researchers from Baylor School of Medicine and Austria also found the identical founder mutation in CLP1 in another 11 children from an additional five families originally from eastern Turkey.

Gunel said that the high prevalence of consanguineous marriages [between closely related people] in Turkey and the Middle East leads to these rare recessive genetic neurodegenerative disorders. Affected children inherit mutations in the same gene from both of their parents, who are closely related to each other, such as first cousins. Without consanguinity between parents, children are very unlikely to inherit two mutations in the same gene.
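The arithmetic behind that statement can be sketched with the standard inbreeding formula from population genetics. The allele frequency below is hypothetical, chosen only for illustration, not a figure from the study.

```python
# Back-of-the-envelope illustration using the standard population-genetics
# inbreeding formula (not figures from the study): a child is homozygous
# for a recessive allele with probability q^2 + F*q*(1-q), where q is the
# allele frequency and F the inbreeding coefficient (F = 1/16 for the
# child of first cousins, F = 0 for unrelated parents).
def homozygote_risk(q, F=0.0):
    return q * q + F * q * (1 - q)

q = 0.001                                # a hypothetical rare allele
unrelated = homozygote_risk(q)           # about 1 in a million
cousins = homozygote_risk(q, F=1 / 16)   # far higher
print(round(cousins / unrelated, 1))     # roughly a 60-fold increase
```

For rare alleles, the F*q term dominates the q^2 term, which is why recessive disorders like this one surface almost exclusively in consanguineous families.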

“By dissecting the genetic basis of these neurodevelopmental disorders, we are gaining fundamental insight into basic physiological mechanisms important for human brain development and function,” Gunel said. “We learn a lot about normal biology by studying what happens when things go wrong.”

(Source: news.yale.edu)

Filed under neurodegeneration genetics CLP1 cerebral cortex cerebellum gene mutations neuroscience science

136 notes

Oops! Researchers find neural signature for mistake correction

Culminating an eight-year search, scientists at the RIKEN-MIT Center for Neural Circuit Genetics captured an elusive brain signal underlying memory transfer and, in doing so, pinpointed the first neural circuit for “oops”—the precise moment when one becomes consciously aware of a self-made mistake and takes corrective action.

The findings, published in Cell, verified a 20-year-old hypothesis on how brain areas communicate. In recent years, researchers have been pursuing a class of ephemeral brain signals called gamma oscillations, millisecond-scale bursts of synchronized wave-like electrical activity that pass through brain tissue like ripples on a pond. In 1993, German scientist Wolf Singer proposed that gamma waves enable binding of memory associations. For example, in a process called working memory, animals store and recall short-term memory associations when exploring the environment.

In 2006, the MIT team under the direction of Nobel Laureate Susumu Tonegawa began a study to understand working memory in mice. They trained animals to navigate a T maze and turn left or right at a junction for an associated food reward. They found that working memory required communication between two brain areas, the hippocampus and entorhinal cortex, but how mice knew the correct direction, and what neural signal marked the memory transfer, remained unclear.

The study’s lead author, Jun Yamamoto, noticed that mice sometimes made mistakes, turning in the wrong direction, pausing, and then turning around to go in the correct direction, trials he termed “oops” in his lab notebook. Intrigued, he recorded neural activity in the circuit and observed a burst of gamma waves just before the “oops” moment. He also saw gamma waves when mice chose the correct direction, but not when they failed to choose the correct direction or did not correct their mistakes.
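Detecting such a transient gamma burst in a recording is, in spirit, a band-power thresholding problem. The simplified sketch below runs on simulated data; the filter settings, thresholds, and timings are invented and are not the study's method.

```python
# Simplified sketch (not the study's method): flag a transient gamma-band
# (30-80 Hz) burst in a simulated local field potential by thresholding
# band power in short windows. All parameters here are invented.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000                             # samples per second
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(1)

lfp = 0.2 * rng.standard_normal(t.size)
in_burst = (t > 1.0) & (t < 1.1)      # a 100 ms burst, as before an "oops"
lfp[in_burst] += np.sin(2 * np.pi * 60 * t[in_burst])

b, a = butter(4, [30, 80], btype="bandpass", fs=fs)
power = filtfilt(b, a, lfp) ** 2

# Mean gamma power per 50 ms window; a burst is any window far above baseline.
win = 50
windows = power[: power.size // win * win].reshape(-1, win).mean(axis=1)
threshold = windows.mean() + 3 * windows.std()
burst_onsets = np.flatnonzero(windows > threshold) * win / fs
print(burst_onsets)                   # window onsets near t = 1.0 s
```

The flagged windows cluster around the injected burst, which is the kind of event-locked gamma signature the recordings revealed before the "oops" turns.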
The critical experiment was to block gamma oscillations and see whether this prevented mice from making correct decisions. To do this, the researchers created a transgenic mouse with a light-activated protein called archaerhodopsin (ArchT) in the hippocampus. Using an optic fiber implanted in the brain, light was flashed into the hippocampal-entorhinal circuit, shutting off gamma activity. Accordingly, the mice could no longer accurately choose the right direction, and the number of “oops” events decreased.

The findings provide strong evidence of a role for gamma oscillations in cognition, and raise the prospect of their involvement in other behaviors requiring retrieval and evaluation of working memory. This may open the door to a class of behaviors called metacognition, or “thinking about thinking”: the self-monitoring of one’s actions. Regarding the appearance of gamma oscillations in the “oops” cases, Dr. Tonegawa stated, “Our data suggest that animals consciously monitor whether their behavioral choices are correct and use memory recall to improve their outcomes.”

Oops! Researchers find neural signature for mistake correction

Culminating an 8 year search, scientists at the RIKEN-MIT Center for Neural Circuit Genetics captured an elusive brain signal underlying memory transfer and, in doing so, pinpointed the first neural circuit for “oops”—the precise moment when one becomes consciously aware of a self-made mistake and takes corrective action.

The findings, published in Cell, verified a 20 year old hypothesis on how brain areas communicate. In recent years, researchers have been pursuing a class of ephemeral brain signals called gamma oscillations, millisecond scale bursts of synchronized wave-like electrical activity that pass through brain tissue like ripples on a pond. In 1993, German scientist Wolf Singer proposed that gamma waves enable binding of memory associations. For example, in a process called working memory, animals store and recall short-term memory associations when exploring the environment.

In 2006, the MIT team under the direction of Nobel Laureate Susumu Tonegawa began a study to understand working memory in mice. They trained animals to navigate a T maze and turn left or right at a junction for an associated food reward. They found that working memory required communication between two brain areas, the hippocampus and entorhinal cortex, but how mice knew the correct direction and the neural signal for memory transfer of this event remained unclear.

The study’s lead author Jun Yamamoto noticed that mice sometimes made mistakes, turning in the wrong direction then pausing, and turning around to go in the correct direction, trials he termed “oops” in his lab notebook. Intrigued, he recorded neural activity in the circuit and observed a burst of gamma waves just before the “oops” moment. He also saw gamma waves when mice chose the correct direction, but not when they failed to choose the correct direction or did not correct their mistakes.

The critical experiment was to block gamma oscillations and prevent mice from making correct decisions. To do this, the researchers created a transgenic mouse with a light-activated protein called archaerhodopsin (ArchT) in the hippocampus. Using an optic fiber implanted in the brain, light was flashed into the hippocampal-entorhinal circuit, shutting off gamma activity. In accord, the mice could no longer accurately choose the right direction and the number of “oops” events decreased.

The findings provide strong evidence of a role for gamma oscillations in cognition and raise the prospect of their involvement in other behaviors requiring the retrieval and evaluation of working memory. This may open the door to studying a class of behaviors called metacognition, or “thinking about thinking”: the self-monitoring of one’s own actions. Regarding the appearance of gamma oscillations in the “oops” cases, Dr. Tonegawa stated, “Our data suggest that animals consciously monitor whether their behavioral choices are correct and use memory recall to improve their outcomes.”

Filed under gamma oscillations working memory hippocampus entorhinal cortex memory archaerhodopsin neuroscience science

83 notes


(Image caption: Channelrhodopsins before (upper left) and after (lower right) molecular engineering, shown superimposed over an image of a mammalian neuron. In the upper left opsin, the red color shows negative charges spanning the opsin that facilitated the flow of positive (stimulatory) ions through the channel into neurons. In the newly engineered channels (lower right), those negative charges have been changed to positive (blue), allowing the negatively charged inhibitory chloride ions to flow through. Credit: Andre Berndt, Soo Yeun Lee, Charu Ramakrishnan, and Karl Deisseroth.)

Researchers Build New “Off Switch” to Shut Down Neural Activity

Nearly a decade ago, the era of optogenetics was ushered in with the development of channelrhodopsins, light-activated ion channels that can, with the flick of a switch, instantaneously turn on neurons in which they are genetically expressed. What has lagged behind, however, is the ability to use light to inactivate neurons with an equal level of reliability and efficiency. Now, Howard Hughes Medical Institute (HHMI) scientists have used an analysis of channelrhodopsin’s molecular structure to guide a series of genetic mutations to the ion channel that grant the power to silence neurons with an unprecedented level of control.

The new structurally engineered channel at last gives neuroscientists the tools to both activate and inactivate neurons in deep brain structures using dim pulses of externally projected light. HHMI early career scientist Karl Deisseroth and his colleagues at Stanford University published their findings April 25, 2014 in the journal Science. “We’re excited about this increased light sensitivity of inhibition in part because we think it will greatly enhance work in large-brained organisms like rats and primates,” he says.

First discovered in unicellular green algae in 2002, channelrhodopsins function as photoreceptors that guide the microorganisms’ movements in response to light. In a landmark 2005 study, Deisseroth and his colleagues described a method for expressing the light-sensitive proteins in mouse neurons. By shining a pulse of blue light on those neurons, the researchers showed they could reliably induce the ion channel at channelrhodopsin’s core to open up, allowing positively charged ions to rush into the cell and trigger action potentials. Channelrhodopsins have since been used in hundreds of research projects investigating the neurobiology of everything from cell dynamics to cognitive functions.

A few years later came the deployment of halorhodopsins, light-sensitive proteins selective for the negatively charged ion chloride. These proteins, derived from halobacteria, provided researchers with a tool for the light-controlled inactivation of neurons. A major limitation of these proteins, however, is their inefficiency. Unlike channelrhodopsin, halorhodopsin is an ion pump, meaning that only one chloride ion moves across the neuron’s membrane per photon of light. “What that translates into is you get partial inhibition,” Deisseroth says. “You can inhibit neurons, but in the living animal it’s not always complete.”

Searches for a naturally occurring light-sensitive channel with a pore permeable to negatively charged ions have come up empty handed. “We searched,” Deisseroth says. “We did big genomic searches and found many interesting channelrhodopsins and lots of pumps, but we never found an inhibitory channel in nature.”

The team’s fruitless exploration led them to try modifying the molecular structure of channelrhodopsin so that its pore would shuttle negative ions into the cell. “To do that you need to know what the channel pore looks like at the angstrom level,” Deisseroth says. “What we really needed was the high-resolution crystal structure.” In 2012, working with a group in Japan, Deisseroth and his colleagues captured the structure of a chimera of channelrhodopsin called C1C2 using X-ray crystallography.

A molecular analysis of channelrhodopsin’s pore suggested that swapping out certain negatively charged amino acid residues lining the pore with positive residues could reverse the electrostatic potential of the channel, making it more conductive to negatively charged ions such as chloride. To achieve this molecular switcheroo, the researchers performed dozens of single site-directed mutations. Several mutations conferred selectivity for chloride, but the channels failed to conduct current. So, the team screened hundreds of combinations of mutations. “In a systematic process we found first a combination of four mutations, and then a group of five mutations, that seemed to change selectivity,” says Deisseroth. “We put those together into a nine-fold mutated channel and that one, amazingly, was chloride selective.”

Not only does the new channel—dubbed iC1C2 for “inhibitory C1C2”—allow the selective passage of chloride ions, it greatly reduces the likelihood of action potentials by making the neuron more “leaky,” a function not possible in ion pumps like halorhodopsin.

Deisseroth’s team made a final mutation to a cysteine residue in iC1C2 that makes the channel both bi-stable and orders of magnitude more sensitive to light. When activated by blue light, the mutated channels remain open for up to minutes at a time, while exposing the channels to red light makes them close quickly. This level of long-term control is useful in developmental studies where events play out over minutes to hours. The long channel open times also mean that neurons can essentially integrate chloride currents over longer time scales and, therefore, weaker light can be used to inhibit the neurons. Increased light sensitivity translates to less light-induced damage to neural tissue, the ability to reach deep brain structures, and the possibility of controlling brain functions that involve large regions of the brain.
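The bistable behavior described above can be pictured as a simple latch: a blue pulse opens the channel and it stays open with no further illumination, while a red pulse closes it. A toy sketch of that control logic (my own illustration, not a biophysical model from the paper; the wavelength cutoffs are assumptions):

```python
# Toy latch model of a bistable, light-gated channel.
# Wavelength thresholds below are illustrative assumptions.
class BistableChannel:
    def __init__(self):
        self.open = False            # channel starts closed

    def light(self, wavelength_nm):
        if wavelength_nm < 500:      # blue pulse: latch the channel open
            self.open = True
        elif wavelength_nm > 600:    # red pulse: close it quickly
            self.open = False

ch = BistableChannel()
ch.light(470)    # brief blue pulse
print(ch.open)   # remains open with no further light (bistability)
ch.light(630)    # red pulse
print(ch.open)   # closed again
```

The practical payoff of such a latch is exactly what the article describes: because the open state persists, far less total light is needed to hold a neuron silent.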

“This is something we’ve sought for many years and it’s really the culmination of many streams of work in the lab—crystal structure work, mutational work, behavioral work —all of which have come together here,” Deisseroth says.

Filed under optogenetics channelrhodopsin ion channels neural activity x-ray crystallography neuroscience science

202 notes

Higher Education Associated With Better Recovery From Traumatic Brain Injury

Better-educated people appear to be significantly more likely to recover from a moderate to severe traumatic brain injury (TBI), suggesting that a brain’s “cognitive reserve” may play a role in helping people get back to their previous lives, new Johns Hopkins research shows.

The researchers, reporting in the journal Neurology, found that those with the equivalent of at least a college education are seven times more likely than those who didn’t finish high school to be disability-free one year after a TBI serious enough to warrant inpatient time in a hospital and rehabilitation facility.

The findings, while new among TBI investigators, mirror those in Alzheimer’s disease research, in which higher educational attainment — believed to be an indicator of a more active, or more effective, use of the brain’s “muscles” and therefore its cognitive reserve — has been linked to slower progression of dementia.

“After this type of brain injury, some patients experience lifelong disability, while others with very similar damage achieve a full recovery,” says study leader Eric B. Schneider, Ph.D., an epidemiologist at the Johns Hopkins University School of Medicine’s Center for Surgical Trials and Outcomes Research. “Our work suggests that cognitive reserve — the brain’s ability to be resilient in the face of insult or injury — could account for the difference.”

Schneider conducted the research in conjunction with Robert D. Stevens, M.D., a neuro-intensive care physician with Johns Hopkins’ Department of Anesthesiology and Critical Care Medicine.

For the study, the researchers studied 769 patients enrolled in the TBI Model Systems database, an ongoing multi-center cohort of patients funded by the National Institute on Disability and Rehabilitation Research. The patients had been hospitalized with a moderate to severe TBI and subsequently admitted to a rehabilitation facility.

Of the 769 patients, 219 — or 27.8 percent — were free of any detectable disability one year after their injury. Twenty-three patients who didn’t complete high school — 9.7 percent of those at that education level — recovered, while 136 patients with between 12 and 15 years of schooling — 30.8 percent of those at that educational level — did. Nearly 40 percent of patients — 76 of the 194 — who had 16 or more years of education fully recovered.
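A crude odds ratio can be computed directly from the raw counts reported above. This sketch is my own back-of-the-envelope check, not the paper’s analysis: the no-diploma group size (237) is inferred from 23 patients being 9.7 percent of that group, and the study’s sevenfold figure comes from a model adjusted for other factors, so the crude value is only in the same ballpark:

```python
# Crude odds ratio from the reported counts (illustrative only).
# 237 is inferred from 23 being 9.7% of the no-diploma group.
recovered_low, total_low = 23, 237     # didn't finish high school
recovered_high, total_high = 76, 194   # 16+ years of education

odds_low = recovered_low / (total_low - recovered_low)
odds_high = recovered_high / (total_high - recovered_high)
crude_or = odds_high / odds_low

print(round(crude_or, 1))   # unadjusted odds ratio, college vs. no diploma
```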

Schneider says researchers don’t currently understand the biological mechanisms that might account for the link between years of schooling and improved recovery.

“People with increased cognitive reserve capabilities may actually heal in a different way that allows them to return to their pre-injury function and/or they may be able to better adapt and form new pathways in their brains to compensate for the injury,” Schneider says. “Further studies are needed to not only find out, but also to use that knowledge to help people with less cognitive reserve.”

Meanwhile, he says, “What we learned may point to the potential value of continuing to educate yourself and engage in cognitively intensive activities. Just as we try to keep our bodies strong in order to help us recover when we are ill, we need to keep the brain in the best shape it can be.”

Adds Stevens: “Understanding the underpinnings of cognitive reserve in terms of brain biology could generate ideas on how to enhance recovery from brain injury.”

(Source: hopkinsmedicine.org)

Filed under TBI brain injury educational attainment cognitive function cognitive reserve neuroscience science

95 notes


Bionic ear technology used for gene therapy

Researchers at UNSW have for the first time used electrical pulses delivered from a cochlear implant to deliver gene therapy, thereby successfully regrowing auditory nerves.

The research also heralds a possible new way of treating a range of neurological disorders, including Parkinson’s disease, and psychiatric conditions such as depression through this novel way of delivering gene therapy.

The research is published today in the prestigious journal Science Translational Medicine.

“People with cochlear implants do well with understanding speech, but their perception of pitch can be poor, so they often miss out on the joy of music,” says UNSW Professor Gary Housley, who is the senior author of the research paper.

“Ultimately, we hope that after further research, people who depend on cochlear implant devices will be able to enjoy a broader dynamic and tonal range of sound, which is particularly important for our sense of the auditory world around us and for music appreciation,” says Professor Housley, who is also the Director of the Translational Neuroscience Facility at UNSW Medicine.

The research, which has the support of Cochlear Limited through an Australian Research Council Linkage Project grant, has been five years in development.

The work centres on regenerating surviving nerves after age-related or environmental hearing loss, using existing cochlear technology. The cochlear implants are “surprisingly efficient” at localised gene therapy in the animal model, when a few electric pulses are administered during the implant procedure.

“This research breakthrough is important because while we have had very good outcomes with our cochlear implants so far, if we can get the nerves to grow close to the electrodes and improve the connections between them, then we’ll be able to have even better outcomes in the future,” says Jim Patrick, Chief Scientist and Senior Vice-President, Cochlear Limited.

It has long been established that the auditory nerve endings regenerate if neurotrophins – a naturally occurring family of proteins crucial for the development, function and survival of neurons – are delivered to the auditory portion of the inner ear, the cochlea.

But until now, research has stalled because safe, localised delivery of the neurotrophins can’t be achieved using drug delivery, nor by viral-based gene therapy.

Professor Housley and his team at UNSW developed a way of using electrical pulses delivered from the cochlear implant to deliver the DNA to the cells close to the array of implanted electrodes. These cells then produce neurotrophins.

“No-one had tried to use the cochlear implant itself for gene therapy,” says Professor Housley. “With our technique, the cochlear implant can be very effective for this.”

While the neurotrophin production dropped away after a couple of months, Professor Housley says ultimately the changes in the hearing nerve may be maintained by the ongoing neural activity generated by the cochlear implant.

“We think it’s possible that in the future this gene delivery would only add a few minutes to the implant procedure,” says the paper’s first author, Jeremy Pinyon, whose PhD is based on this work. “The surgeon who installs the device would inject the DNA solution into the cochlea and then fire electrical impulses to trigger the DNA transfer once the implant is inserted.”

Integration of this technology into other ‘bionic’ devices such as electrode arrays used in deep brain stimulation (for the treatment of Parkinson’s disease and depression, for example) could also afford opportunities for safe, directed gene therapy of complex neurological disorders.

“Our work has implications far beyond hearing disorders,” says co-author Associate Professor Matthias Klugmann, from the UNSW Translational Neuroscience Facility research team. “Gene therapy has been suggested as a treatment concept even for devastating neurological conditions and our technology provides a novel platform for safe and efficient gene transfer into tissues as delicate as the brain.”

Filed under bionic ear hearing loss gene therapy cochlear implants regeneration neuroscience science

145 notes

(Image caption: A solar flare erupts on the far right side of the sun, in this image captured by NASA’s Solar Dynamics Observatory. The flare peaked at 6:34 p.m. EDT on March 12, 2014. Credit: NASA)

Some Astronauts at Risk for Cognitive Impairment

Johns Hopkins scientists report that rats exposed to high-energy particles, simulating conditions astronauts would face on a long-term deep space mission, show lapses in attention and slower reaction times, even when the radiation exposure is in extremely low dose ranges.

The cognitive impairments — which affected a large subset, but far from all, of the animals — appear to be linked to protein changes in the brain, the scientists say. The findings, if found to hold true in humans, suggest it may be possible to develop a biological marker to predict sensitivity to radiation’s effects on the human brain before deployment to deep space. The study, funded by NASA’s National Space Biomedical Research Institute, is described in the April issue of the journal Radiation Research.

When astronauts are outside of the Earth’s magnetic field, spaceships provide only limited shielding from radiation exposure, explains study leader Robert D. Hienz, Ph.D., an associate professor of behavioral biology at the Johns Hopkins University School of Medicine. If they take space walks or work outside their vehicles, they will be exposed to the full effects of radiation from solar flares and intergalactic cosmic rays, he says, and since neither the moon nor Mars has a planet-wide magnetic field, astronauts will be exposed to relatively high radiation levels, even when they land on these surfaces.

But not everyone will be affected the same way, his experiments suggest. “In our radiated rats, we found that 40 to 45 percent had these attention-related deficits, while the rest were seemingly unaffected,” Hienz says. “If the same proves true in humans and we can identify those more susceptible to radiation’s effects before they are harmfully exposed, we may be able to mitigate the damage.”

If a biomarker can be identified for humans, it could have even broader implications in determining the best course of treatment for patients receiving radiotherapy for brain tumors or identifying which patients may be more at risk from radiation-based medical treatments, the investigators note.

Previous research has tested how well radiation-exposed rats do with basic learning tasks and mazes, but this new Johns Hopkins study focused on tests that closely mimic the self-tests of fitness for duty currently used by astronauts on the International Space Station prior to mission-critical events such as space walks. Similar fitness tests are also used for soldiers, airline pilots and long-haul truckers.

In one such test, an astronaut sees a blank screen on a handheld device and is instructed to tap the screen when an LED counter lights up. The normal reaction time should be less than 300 milliseconds. The rats in the experiment are similarly taught to touch a light-up key with their noses and are then tested to see how quickly they react.

To conduct the new study, rats were first trained for the tests and then taken to Brookhaven National Laboratory on Long Island in Upton, N.Y., where a collider produces the high-energy proton and heavy ion radiation particles that normally occur in space. The rats’ heads were exposed to varying levels of radiation that astronauts would normally receive during long-duration missions, while other rats were given sham exposures.

Once the rats returned to Johns Hopkins, they were tested every day for 250 days. The radiation-sensitive animals (19 of 46) all showed evidence of impairment that began at 50 to 60 days post-exposure and remained through the end of the study.

Lapses in attention occurred in 64 percent of the sensitive animals, elevations in impulsive responding occurred in 45 percent and slower reaction times occurred in 27 percent. The impairments were not dependent on radiation dose. Additionally, some of the rats didn’t recover at all from their deficits, while others showed some recovery over time.

The radiation-sensitive rats that received higher doses of radiation had a higher concentration of transporters for the neurotransmitter dopamine, which plays a role in vigilance and attention, says Catherine M. Davis, Ph.D., a postdoctoral fellow in the Department of Psychiatry and Behavioral Sciences and the study’s first author.

The dopamine transport system appears impaired in radiation-sensitive rats because the neurotransmitter is most likely not removed in the manner it should be for the brain to function properly, she says. Humans with genetic differences related to dopamine transport, she adds, have been shown to do worse on the type of mental fitness tests given to the astronauts and rats alike.

Davis says she wouldn’t want to see radiation-sensitive astronauts kept from future missions to the moon or Mars, but she would want those astronauts to be prepared to take special precautions to protect their brains, such as wearing extra shielding or not performing space walks. “As with other areas of personalized medicine, we would seek to create individual treatment and prevention plans for astronauts we believe would be more susceptible to cognitive deficits from radiation exposure,” she says.

Current astronauts are not as exposed to the damaging effects of radiation, Davis says, because the International Space Station flies in an orbit low enough that the Earth’s magnetic field continues to provide protection.

While the Johns Hopkins team studies the likely effects of radiation on the brain during a deep space mission, other NASA-funded research groups are looking at the potential effects of radiation on other parts of the body and on whether it increases cancer risks.
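Scoring a vigilance-style test like the one described comes down to sorting reaction times into normal responses, slow responses, and outright lapses. A minimal sketch with made-up numbers (the 300 ms "normal" cutoff is from the article; the 500 ms lapse threshold is my own assumption for illustration):

```python
# Classify reaction times from a vigilance-style tap test.
# rts_ms are invented example values; 500 ms lapse cutoff is assumed.
rts_ms = [210, 250, 640, 230, 980, 205, 310]

lapses = [rt for rt in rts_ms if rt > 500]        # very slow / missed responses
slow = [rt for rt in rts_ms if 300 < rt <= 500]   # slower than the normal 300 ms
mean_rt = sum(rts_ms) / len(rts_ms)

print(len(lapses), len(slow), round(mean_rt))
```

Tracking these counts day after day, as the researchers did for 250 days, is what reveals whether an animal (or astronaut) is radiation-sensitive.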

(Image caption: A solar flare erupts on the far right side of the sun, in this image captured by NASA’s Solar Dynamics Observatory. The flare peaked at 6:34 p.m. EDT on March 12, 2014. Credit: NASA)

Some Astronauts at Risk for Cognitive Impairment

Johns Hopkins scientists report that rats exposed to high-energy particles, simulating conditions astronauts would face on a long-term deep space mission, show lapses in attention and slower reaction times, even when the radiation exposure is in extremely low dose ranges.

The cognitive impairments — which affected a large subset, but far from all, of the animals — appear to be linked to protein changes in the brain, the scientists say. The findings, if found to hold true in humans, suggest it may be possible to develop a biological marker to predict sensitivity to radiation’s effects on the human brain before deployment to deep space. The study, funded by NASA’s National Space Biomedical Research Institute, is described in the April issue of the journal Radiation Research.

When astronauts are outside of the Earth’s magnetic field, spaceships provide only limited shielding from radiation exposure, explains study leader Robert D. Hienz, Ph.D., an associate professor of behavioral biology at the Johns Hopkins University School of Medicine. If they take space walks or work outside their vehicles, they will be exposed to the full effects of radiation from solar flares and intergalactic cosmic rays, he says, and since neither the moon nor Mars have a planet-wide magnetic field, astronauts will be exposed to relatively high radiation levels, even when they land on these surfaces.

But not everyone will be affected the same way, his experiments suggest. “In our radiated rats, we found that 40 to 45 percent had these attention-related deficits, while the rest were seemingly unaffected,” Hienz says. “If the same proves true in humans and we can identify those more susceptible to radiation’s effects before they are harmfully exposed, we may be able to mitigate the damage.”

If a biomarker can be identified for humans, it could have even broader implications in determining the best course of treatment for patients receiving radiotherapy for brain tumors or identifying which patients may be more at risk from radiation-based medical treatments, the investigators note.

Previous research has tested how well radiation-exposed rats do with basic learning tasks and mazes, but this new Johns Hopkins study focused on tests that closely mimic the self-tests of fitness for duty currently used by astronauts on the International Space Station prior to mission-critical events such as space walks. Similar fitness tests are also used for soldiers, airline pilots and long-haul truckers.

In one such test, an astronaut sees a blank screen on a handheld device and is instructed to tap the screen when an LED counter lights up. The normal reaction time should be less than 300 milliseconds. The rats in the experiment are similarly taught to touch a light-up key with their noses and are then tested to see how quickly they react.
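The scoring behind such a reaction-time test can be sketched in a few lines. This is a minimal illustration, not the actual flight software: the 300-millisecond "normal" threshold comes from the article, while the 500-millisecond lapse cutoff, the function name, and the sample data are assumptions made for the example.

```python
LAPSE_CUTOFF_MS = 500   # assumed cutoff for counting a response as an attention lapse
NORMAL_RT_MS = 300      # normal reaction time, per the article

def score_session(reaction_times_ms):
    """Summarize one test session: mean reaction time and number of lapses."""
    lapses = [rt for rt in reaction_times_ms if rt > LAPSE_CUTOFF_MS]
    mean_rt = sum(reaction_times_ms) / len(reaction_times_ms)
    return {
        "mean_rt_ms": mean_rt,
        "lapses": len(lapses),
        "within_normal": mean_rt < NORMAL_RT_MS,
    }

# A hypothetical session with two slow responses (620 ms and 550 ms):
session = [250, 280, 310, 620, 270, 240, 550, 290]
print(score_session(session))
```

A session like this would flag two lapses and a mean reaction time above the normal threshold, which is the kind of signal the fitness-for-duty self-tests are designed to surface before a space walk.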

To conduct the new study, rats were first trained for the tests and then taken to Brookhaven National Laboratory on Long Island in Upton, N.Y., where a collider produces the high-energy proton and heavy ion radiation particles that normally occur in space. The rats’ heads were exposed to varying levels of radiation that astronauts would normally receive during long-duration missions, while other rats were given sham exposures.

Once the rats returned to Johns Hopkins, they were tested every day for 250 days. The radiation-sensitive animals (19 of 46) all showed evidence of impairment that began 50 to 60 days post-exposure and persisted through the end of the study.

Lapses in attention occurred in 64 percent of the sensitive animals, elevated impulsive responding in 45 percent and slower reaction times in 27 percent. The impairments were not dependent on radiation dose. Some of the rats never recovered from their deficits, while others showed partial recovery over time.
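The numbers above can be checked against each other. The fractions below use the article's figures (19 of 46 rats sensitive; 64, 45 and 27 percent of those showing each deficit); note that the deficit categories can overlap, which is why the percentages sum to more than 100.

```python
# 19 of 46 irradiated rats were radiation-sensitive.
sensitive, total = 19, 46
print(f"sensitive fraction: {sensitive / total:.0%}")  # prints "sensitive fraction: 41%",
                                                       # consistent with the quoted 40-45%

# Approximate head counts implied by the reported percentages of sensitive rats:
for deficit, pct in [("attention lapses", 0.64),
                     ("impulsive responding", 0.45),
                     ("slower reaction times", 0.27)]:
    print(f"{deficit}: about {round(pct * sensitive)} of {sensitive} rats")
```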

The radiation-sensitive rats that received higher doses of radiation had a higher concentration of transporters for the neurotransmitter dopamine, which plays a role in vigilance and attention, says Catherine M. Davis, Ph.D., a postdoctoral fellow in the Department of Psychiatry and Behavioral Sciences and the study’s first author.

The dopamine transport system appears impaired in radiation-sensitive rats: the neurotransmitter is most likely not being cleared as it should be for the brain to function properly, she says. Humans with genetic differences related to dopamine transport, she adds, have been shown to do worse on the type of mental fitness tests given to the astronauts and rats alike.

Davis says she wouldn’t want to see radiation-sensitive astronauts kept from future missions to the moon or Mars, but she would want those astronauts to be prepared to take special precautions to protect their brains, such as wearing extra shielding or not performing space walks.

“As with other areas of personalized medicine, we would seek to create individual treatment and prevention plans for astronauts we believe would be more susceptible to cognitive deficits from radiation exposure,” she says.

Current astronauts are not as exposed to the damaging effects of radiation, Davis says, because the International Space Station flies in an orbit low enough that the Earth’s magnetic field continues to provide protection.

While the Johns Hopkins team studies the likely effects of radiation on the brain during a deep space mission, other NASA-funded research groups are looking at the potential effects of radiation on other parts of the body and on whether it increases cancer risks.

