Neuroscience

Articles and news from the latest research reports.

Posts tagged learning

Brain Activation in Motor Sequence Learning Is Related to the Level of Native Cortical Excitability
Cortical excitability may be subject to change through training and learning. Motor training can increase cortical excitability in motor cortex, and facilitation of motor cortical excitability has been shown to correlate positively with improvements in performance on simple motor tasks. Thus, cortical excitability may tentatively be considered a marker of learning and use-dependent plasticity. Previous studies focused on changes in cortical excitability brought about by learning; however, the relation between native levels of cortical excitability on the one hand and brain activation and behavioral parameters on the other has remained unknown. In the present study we investigated the role of differential native motor cortical excitability in learning a motor sequencing task, with regard to post-training changes in excitability, behavioral performance, and the involvement of brain regions. The motor task required participants to reproduce and improvise over a pre-learned motor sequence. Across both task conditions, participants with low cortical excitability (CElo) showed significantly higher BOLD activation in task-relevant brain regions than participants with high cortical excitability (CEhi). In contrast, the CElo and CEhi groups did not differ in percentage of correct responses or in improvisation level. Moreover, cortical excitability did not change significantly after learning and training in either group, with the exception of a significant decrease in facilitatory excitability in the CEhi group. The present data suggest that the native, unmanipulated level of cortical excitability is related to brain activation intensity, but not to performance quality. The higher mean BOLD signal intensity during the motor task might reflect a compensatory mechanism in CElo participants.

Filed under motor cortex cortical excitability learning brain activation neuroscience science

New learning and memory neurons uncovered

A University of Queensland study has identified precisely when new neurons become important for learning.

Lead researcher Dr Jana Vukovic from UQ’s Queensland Brain Institute (QBI) said the study highlighted the importance of new neuron development.

“New neurons are continually produced in the brain, passing through a number of developmental stages before becoming fully mature,” Dr Vukovic said.

“Using a genetic technique to delete immature neurons in animal models, we found they had great difficulty learning a new spatial task.

“There are ways to encourage the production of new neurons – including physical exercise – to improve learning.

“The new neurons appear particularly important for the brain to detect subtle but critical differences in the environment that can impact on the individual.”

The study, performed in QBI Director Professor Perry Bartlett’s laboratory, also demonstrates that immature neurons, born in a region of the brain known as the hippocampus, are required for learning but not for the retrieval of past memories.

“On the other hand, if the animals needed to remember a task they had already mastered in the past, before these immature neurons were deleted, their ability to perform the task was the same – so, they’ve remembered the task they learned earlier,” Dr Vukovic said.

This research allows for a better understanding of the processes underlying learning and memory formation.

(Image Caption: Newly generated doublecortin-positive neurons in the dentate gyrus of a degenerating hippocampus in mutant mice lacking the transcription factor TIF-IA. Credit: Rosanna Parlato (AG Schütz, DKFZ-ZMBH Alliance))

Filed under hippocampus hippocampal neurons memory formation memory learning neuroscience science

Researchers shine light on how stress circuits learn

Researchers at the University of Calgary’s Hotchkiss Brain Institute have discovered that stress circuits in the brain undergo profound learning early in life. Using a number of cutting-edge approaches, including optogenetics, Jaideep Bains, PhD, and colleagues have shown that stress circuits are capable of self-tuning following a single stress. These findings demonstrate that the brain uses stress experience during early life to prepare and optimize for subsequent challenges.

The team was able to show the existence of unique time windows following brief stress challenges during which learning is either increased or decreased. By manipulating specific cellular pathways, they uncovered the key players responsible for learning in stress circuits in an animal model. These discoveries culminated in the publication of two back-to-back studies in the April 7 online edition of Nature Neuroscience [1, 2], one of the world’s top neuroscience journals.

“These new findings demonstrate that systems thought to be ‘hardwired’ in the brain are in fact flexible, particularly early in life,” says Bains, a professor in the Department of Physiology and Pharmacology. “Using this information, researchers can now ask questions about the precise cellular and molecular links between early life stress and stress vulnerability or resilience later in life.”

Stress vulnerability, or increased sensitivity to stress, has been implicated in numerous health conditions including cardiovascular disease, obesity, diabetes and depression. Although these studies used animal models, similar mechanisms mediate disease progression in humans.

“Our observations provide an important foundation for designing more effective preventative and therapeutic strategies that mitigate the effects of stress and meet society’s health challenges,” he says.

Filed under brain optogenetics stress stress vulnerability learning cellular pathways animal model neuroscience science

Pesticide combination affects bees’ ability to learn

Two new studies have highlighted a negative impact on bees’ ability to learn following exposure to a combination of pesticides commonly used in agriculture. The researchers found that the pesticides, used in the research at levels shown to occur in the wild, could interfere with the learning circuits in the bee’s brain. They also found that bees exposed to combined pesticides were slower to learn or completely forgot important associations between floral scent and food rewards.

In the study, published today (27 March 2013) in Nature Communications, the University of Dundee’s Dr Christopher Connolly and his team investigated the impact on bees’ brains of two common pesticides: neonicotinoids, which are used on crops, and coumaphos, which is used in honeybee hives to kill the Varroa mite, a parasitic mite that attacks the honey bee.

Intact bee brains were exposed to the pesticides in the lab at levels predicted to occur following exposure in the wild, and brain activity was recorded. The researchers found that both types of pesticide target the same area of the bee brain involved in learning, causing a loss of function. When the two pesticides were used in combination, the effect was greater.

The study is the first to show that these pesticides have a direct impact on pollinator brain physiology. It was prompted by the work of collaborators Dr Geraldine Wright and Dr Sally Williamson at Newcastle University who found that combinations of these same pesticides affected learning and memory in bees. Their studies established that when bees had been exposed to combinations of these pesticides for 4 days, as many as 30% of honeybees failed to learn or performed poorly in memory tests. Again, the experiments mimicked levels that could be seen in the wild, this time by feeding a sugar solution mixed with appropriate levels of pesticides.

Dr Geraldine Wright said: “Pollinators perform sophisticated behaviours while foraging that require them to learn and remember floral traits associated with food. Disruption in this important function has profound implications for honeybee colony survival, because bees that cannot learn will not be able to find food.”

Together, the researchers expressed concern about the use of pesticides that target the same area of the insect brain and the potential risk of toxicity to non-target insects. Moreover, they said that exposure to different combinations of pesticides that act at this site may increase this risk.

Dr Christopher Connolly said: “Much discussion of the risks posed by the neonicotinoid insecticides has raised important questions about their suitability for use in our environment. However, little consideration has been given to the miticidal pesticides introduced directly into honeybee hives to protect the bees from the Varroa mite. We find that both have a negative impact on honeybee brain function.

“Together, these studies highlight potential dangers to pollinators of continued exposure to pesticides that target the insect nervous system, and the importance of identifying combinations of pesticides that could profoundly impact pollinator survival.”

Filed under bees pesticides learning brain activity brain function memory neuroscience science

Innate ability to vocalize: Deaf or not, courting male mice make same sounds

Scientists have long thought mice might be a model for how humans learn to vocalize. But new research led by scientists at Washington State University Vancouver has found that, unlike humans and songbirds, mice do not learn to vocalize.

The results, published in the Journal of Neuroscience, point the way to a more finely focused, genetic tool for teasing out the mysteries of speech and its disorders.

To see if mice learn to vocalize, WSU neurophysiologist Christine Portfors destroyed the ear hair cells in more than a dozen newborn male mice. The cells convert sound waves into electrical signals processed by the brain, making hearing possible.

The deaf mice were then raised with hearing mice in a normal social environment.

Portfors and her fellow researchers, including WSU graduate student Elena Mahrt, used males because they are particularly exuberant vocalizers in the presence of females.

“We can elicit vocalization behavior in males really easily by just putting them with a female,” Portfors said. “They vocalize like crazy.”

And it turned out that it didn’t matter if the mouse was deaf or not. The researchers catalogued essentially the same suite of ultrasonic sounds from both the deaf and hearing mice. “It means that they don’t need to hear to be able to produce their sounds, their vocalizations,” Portfors said. “Basically, they don’t need to hear themselves. They don’t need auditory feedback. They don’t need to learn.”

The finding means mice are out as a model to study vocal learning. However, scientists can now focus on the mouse to learn the genetic mechanism behind communication disorders.

“If you don’t have learning as a variable, you can look at the genetic control of these things,” Portfors said. “You can look at the genetic control of the output of the signal. It’s not messed up by an animal that’s been in a particular learning situation.”

(Image: Fotolia)

Filed under vocalization learning vocal learning hair cells animal model genetics neuroscience science

Sleep consolidates memories for competing tasks

Sleep plays an important role in the brain’s ability to consolidate learning when two new, potentially competing tasks are learned in the same day, research at the University of Chicago demonstrates.

Other studies have shown that sleep consolidates learning for a new task. The new study, which measured starlings’ ability to recognize new songs, shows that learning a second task can undermine the performance of a previously learned task. But this study is the first to show that a good night’s sleep helps the brain retain both new memories.

Starlings provide an excellent model for studying memory because of fundamental biological similarities between avian and mammalian brains, scholars wrote in the paper, “Sleep Consolidation of Interfering Auditory Memories in Starlings,” published in the current online edition of Psychological Science.

“These observations demonstrate that sleep consolidation enhances retention of interfering experiences, facilitating daytime learning and the subsequent formation of stable memories,” the authors wrote.

The paper was written by Timothy Brawn, a graduate researcher in psychology at UChicago; Howard Nusbaum, professor of psychology; and Daniel Margoliash, professor of psychology, organismal biology and anatomy. Nusbaum is a leading expert on learning, and Margoliash is a pioneer in the research of brain function and its development in birds.

Filed under starlings birds consolidation sleep learning memory neuroscience science

Study indicates reverse impulses clear useless information, prime brain for learning

When the mind is at rest, the electrical signals by which brain cells communicate appear to travel in reverse, wiping out unimportant information in the process, but sensitizing the cells for future sensory learning, according to a study of rats conducted by researchers at the National Institutes of Health.

The finding has implications not only for studies seeking to help people learn more efficiently, but also for attempts to understand and treat post-traumatic stress disorder—in which the mind has difficulty moving beyond a disturbing experience.

During waking hours, brain cells, or neurons, communicate via high-speed electrical signals that travel the length of the cell. These communications are the foundation for learning. As learning progresses, these signals travel across groups of neurons with increasing rapidity, forming circuits that work together to recall a memory.

It was previously known that, during sleep, these impulses were reversed, arising from waves of electrical activity originating deep within the brain. In the current study, the researchers found that these reverse signals weakened circuits formed during waking hours, apparently so that unimportant information could be erased from the brain. But the reverse signals also appeared to prime the brain to relearn at least some of the forgotten information. If the animals encountered the same information upon awakening, the circuits re-formed much more rapidly than when they originally encountered the information.

“The brain doesn’t store all the information it encounters, so there must be a mechanism for discarding what isn’t important,” said senior author R. Douglas Fields, Ph.D., head of the Section on Nervous System Development and Plasticity at the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), the NIH institute where the research was conducted. “These reverse brain signals appear to be the mechanism by which the brain clears itself of unimportant information.”

Their findings appear in the Proceedings of the National Academy of Sciences.

The researchers studied the activity of rats’ brain cells from the hippocampus, a tube-like structure deep in the brain. The hippocampus relays information to and from many other regions of the brain. It plays an important role in memory, orientation, and navigation.

The classic understanding of brain cell activity is that electrical signals travel from dendrites—antenna-like projections at one end of the cell—through the cell body. From the cell body, they then travel the length of the axon, a single long projection at the other end of the cell. This electrical signal stimulates the release of chemicals at the end of the axon, which bind to dendrites on adjacent cells, stimulating these recipient cells to fire electrical signals, and so on. When groups of cells repeatedly fire in this way, the electrical signals increase in intensity.

Study first author Dr. Bukalo and her team examined electrical signals that traveled in reverse—from the cell’s axon to the cell body and out through its many dendrites. This reverse firing happens during sleep and at rest and appears to reset the cell, the researchers found.

After first stimulating the cells with reverse electrical impulses, the researchers next stimulated the dendrites again with electrical impulses traveling in the forward direction. In response, the neurons generated a stronger signal, with the connections appearing to strengthen with repeated electrical stimulation.

This pattern appears to underlie the formation of new memories. A connection that is reset but never stimulated again may simply fade from use over time, Dr. Bukalo explained. But when a cell is stimulated again, it fires a stronger signal and may be more easily synchronized to the reinforced signals of other brain cells, all of which act in concert over time.

Filed under brain cells PTSD memory learning hippocampus memory formation neuroscience science


The Hidden Costs of Cognitive Enhancement
Gentle electrical zaps to the brain can accelerate learning and boost performance on a wide range of mental tasks, scientists have reported in recent years. But a new study suggests there may be a hidden price: Gains in one aspect of cognition may come with deficits in another.
Researchers who study transcranial electrical stimulation, which uses electrodes placed on the scalp, see it as a potentially promising way to enhance cognition in neurological patients, struggling students, and perhaps even ordinary people. Scientists have used it to speed up rehab in people whose speech or movement has been affected by a stroke, and DARPA has studied it as a way to accelerate learning in intelligence analysts or soldiers on the lookout for bad guys and bombs.
Until now, the papers coming out of this field have reported one good-news finding after another.
“This is the first paper to my knowledge to show a cost associated with the gains in cognitive function,” said neuropsychologist Rex Jung of the University of New Mexico, who was not associated with the study. “It’s a really nice demonstration.”
Cognitive neuroscientist Roi Cohen Kadosh of the University of Oxford, who led the study, has been investigating brain stimulation to boost mathematical abilities. He has applied for a patent on a brain stimulator he hopes could help math-challenged students get a better grip on the basics, or help the mathematically inclined perform even better.
Cohen Kadosh and his colleague Teresa Iuculano investigated 19 volunteers as they learned a new numerical system by trial and error. The new system was based on arbitrary symbols: A cylinder represented the number five, for example, and a triangle represented the number nine. In several training sessions the volunteers viewed pairs of symbols on a computer screen and pressed a key to indicate which one represented a bigger quantity. At first they had to guess, but they eventually learned which symbols corresponded with which numbers.
All of the volunteers wore electrodes on their scalp during these training sessions. Some received mild electrical stimulation that targeted the posterior parietal cortex, an area implicated in previous studies of numerical cognition. Others received stimulation of the dorsolateral prefrontal cortex, an area involved in a wide range of functions, including learning and memory. A third group received sham stimulation that caused a slight tingling of the skin but no change in brain activity.
Those who had the parietal area involved in numerical cognition stimulated learned the new number system more quickly than those who got sham stimulation, the researchers report in the Journal of Neuroscience. But at the end of the weeklong study their reaction times were slower when they had to put their newfound knowledge to use to solve a new task that they hadn’t seen during the training sessions. “They had trouble accessing what they’d learned,” Cohen Kadosh said.
The volunteers who had the prefrontal area involved in learning and memory stimulated showed the opposite pattern. They were slower than the control group to learn the new numerical system, but they performed faster on the new test at the end of the experiment. The bottom line, says Cohen Kadosh, is that stimulating either brain region had both benefits and drawbacks. “Just like with drugs, there seem to be side effects,” he said.
Going forward, Cohen Kadosh says, more work is needed on how to maximize the benefits and minimize the costs of electrical brain stimulation. He thinks the approach has promise, but only when it’s used strategically, by picking the right brain regions to target and stimulating them while a person is training on the skill they want to improve. “I think it’s going to be useless unless you pair it with some type of cognitive training,” he said.
But that’s not stopping some people from trying it on their own. Although it should be obvious that DIY brain stimulation is a bad idea, both Jung and Cohen Kadosh say there seems to be growing interest among the general public in using it for cognitive enhancement.
“There are some do it yourself websites I’ve stumbled across that are pretty frightening,” Jung said. “People are definitely tinkering around with this in their garage.”
The new study suggests one way that could backfire. And that’s not all, said Jung. “You can burn yourself if nothing else.”


Filed under transcranial electrical stimulation cognition cognitive function brain stimulation parietal cortex learning neuroscience science


Scientists Identify Buphenyl as a Possible Drug for Alzheimer’s disease
Buphenyl, an FDA-approved medication for hyperammonemia, may protect memory and prevent the progression of Alzheimer’s disease. Hyperammonemia, a life-threatening condition that can affect patients at any age, is caused by abnormally high levels of ammonia in the blood.
Studies in mice with Alzheimer’s disease (AD) have shown that sodium phenylbutyrate, known as Buphenyl, successfully increases factors for neuronal growth and protects learning and memory, according to neurological researchers at Rush University Medical Center.
Results from the National Institutes of Health-funded study were recently published in the Journal of Biological Chemistry.
“Understanding how the disease works is important to developing effective drugs that protect the brain and stop the progression of Alzheimer’s disease,” said Kalipada Pahan, PhD, the Floyd A. Davis professor of neurology at Rush and lead investigator of this study.
A family of proteins known as neurotrophic factors helps support the survival and function of neurons. Past research indicates that these proteins are drastically decreased in the brains of patients with Alzheimer’s disease.
“Neurotrophic factor proteins could be increased in the brain by direct injection or gene delivery,” said Pahan. “However, using an oral medication to increase the level of these proteins may be the best clinical option and a cost-effective way to increase the level of these proteins directly in the brain.”
“Our study found that after oral feeding, Buphenyl enters into the brain, increases these beneficial proteins in the brain, protects neurons, and improves memory and learning in mice with AD-like pathology,” said Pahan.
In the brain of a patient with AD, two abnormal structures, called plaques and tangles, are prime suspects in damaging and killing nerve cells. While neurons die, other brain cells, such as astroglia, survive.
The study findings indicate that Buphenyl increases neurotrophic factors from astroglia. Buphenyl stimulates memory-related protein CREB (cyclic AMP response element-binding protein) using another protein known as Protein Kinase C (PKC) and increases neurotrophic factors in the brain.
"Now we need to translate this finding to the clinic and test Buphenyl in Alzheimer’s disease patients,” said Pahan. “If these results are replicated in Alzheimer’s disease patients, it would open up a promising avenue of treatment of this devastating neurodegenerative disease.”


Filed under alzheimer's disease dementia astroglia learning memory neurons sodium phenylbutyrate neuroscience science


To Make Mice Smarter, Add A Few Human Brain Cells
For more than a century, neurons have been the superstars of the brain. Their less glamorous partners, glial cells, can’t send electric signals, and so they’ve been mostly ignored.
Now scientists have injected some human glial cells into the brains of newborn mice. When the mice grew up, they were faster learners. The study, published Thursday in Cell Stem Cell, not only introduces a new tool to study the mechanisms of the human brain, it supports the hypothesis that glial cells — and not just neurons — play an important role in learning.
The scientific obsession with neurons really began at the end of the 19th century, when Spanish anatomy professor Santiago Ramón y Cajal used a special dye to stain brain tissue. Under the microscope, neurons were revealed in exquisite detail. “A dense forest,” Ramón y Cajal called it — a field of little branching cells that would soon be named neurons.
With beautiful ink drawings, Ramón y Cajal painstakingly mapped neural networks and slowly developed the theory that neurons are the telegraph lines of thought (an idea later embraced by Schoolhouse Rock). Every idea and memory — every aspect of learning — could be traced back to the electric signals sent between neurons. Ramón y Cajal won the Nobel Prize for his work, and scientists focused on neurons for the next century.
But neurons aren’t the only cells in the brain.
"We’ve overlooked half the brain," says Douglas Fields, a neuroscientist at the National Institutes of Health. "We’ve only been studying one kind of cell in the brain." The other kind of cell — glial cells — are at least as abundant as neurons. But early scientists thought they were so boring they didn’t even merit a singular noun. "Glia is plural — there is no singular," Fields says. "We have ‘neuron’ but we don’t have ‘glion.’ "
Because glial cells can’t send electric signals, most scientists assumed they were housekeeping cells, providing nutrients and insulation.
It was only in the last decade or so that scientists realized glial cells were more than that. One special type of glial cell, the astrocyte, named for its star-like shape, has its own form of chemical signaling. Astrocytes have the potential to coordinate whole groups of neurons. “Glia are in a position to regulate the flow of information through the brain,” Fields says. “This is all missing from our models.”
And there’s something else: astrocytes have changed a great deal over the course of human evolution, while neurons have stayed largely the same. A mouse neuron and a human neuron look so much alike that even experienced neuroscientists can’t tell them apart.
"I can’t tell the differences between a neuron from a bird or a mouse or a primate or a human," says Steve Goldman, a neuroscientist at the University of Rochester who has studied brain cells for decades. But Goldman says glial cells are easy to tell apart.
"Human glial cells — human astrocytes — are much larger than those of lower species," he says. "They have more fibers and they send those fibers out over greater distances."
The thought is that maybe these glial cells have played a role in making humans smarter. So Goldman teamed up with his wife, Maiken Nedergaard, to test the idea.
They injected some human glial cells into the brains of newborn mice. The mice grew up, and so did the human glial cells. The cells spread through the mouse brain, integrating perfectly with mouse neurons and, in some areas, outnumbering their mouse counterparts. All the while Goldman says the glial cells maintained their human characteristics.
"They very much thought that they were in the human brain, in terms of how they developed and integrated," he says.
So what are these mice like, the ones with brains full of functioning human cells? Their neural circuitry is still just the same, so they act completely normal. They still socialize with other mice and still seem interested in mousey things.
But the researchers say these mice are measurably smarter. In classic maze tests, they learn faster. “They make many fewer errors, and it takes them less time to come to the appropriate answer,” Goldman says.
It might take a normal mouse four or five attempts to learn the correct route, for example. But a mouse with human brain cells could get it on the second try. Glial cells — those boring glial cells — somehow enhance learning.
In fact, they could be changing what it means to be a mouse, and that raises ethical questions for this kind of research.
"Maybe bioethicists have been a little bit too cavalier assuming that a mouse with some human brain cells in it is just your normal old mouse," says Robert Streiffer, a bioethicist from the University of Wisconsin-Madison. "Well, it’s not going to be human, but that doesn’t mean it’s a normal old mouse either."
Streiffer says it’s not just that these mice can get through a maze more quickly — they’re better at recognizing things that scare them. And perception of fear is one of the things bioethicists must weigh when they decide the types of experiments you can do on an animal.
"So you have to sort of step back and do some hardcore philosophy," he says. Like, will these types of human-animal hybrids eventually get close enough to humanity that we would feel uncomfortable performing experiments on them?
The researchers in this study say we’re really, really far from that point. And if you want to investigate the role of glial cells, these hybrid mice are the best tools available.


Filed under glial cells cognition progenitor cells neuronal connections learning astrocytes neuroscience science
