Neuroscience

Articles and news from the latest research reports.

Posts tagged learning

304 notes

How are Depression and Memory Loss Connected?

Past research has long indicated that depression is a significant risk factor for memory loss in aging adults. But it is still unclear exactly how the two are related, and whether memory loss might be slowed by treating depression.

A preliminary study conducted by researchers from the University of Rochester School of Medicine and Dentistry and the School of Nursing, published in volume 42 of Psychoneuroendocrinology in April, delves more deeply into the relationship between depression and memory loss, and how this connection may depend on levels of insulin-like growth factor 1, or IGF-1.

Prior research has shown that IGF-1, a hormone that helps bolster growth, is important for preserving memory, especially among older adults.

The collaborative study found that among participants with low levels of IGF-1, higher depressive symptoms were associated with lower cognitive ability. Conversely, among participants with high levels of IGF-1, there was no link between depressive symptoms and memory.
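
To make the statistics concrete: the pattern described is a moderation (interaction) effect, where IGF-1 level changes the strength of the depression-memory link. Below is a minimal sketch in Python of how such an analysis might look on simulated data; all variable names and numbers are hypothetical and are not the study's data or code.

```python
# Hypothetical moderation analysis: does IGF-1 level moderate the association
# between depressive symptoms and memory? Simulated data, for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 94  # the sample size of healthy older adults mentioned above
df = pd.DataFrame({
    "depressive_symptoms": rng.normal(10, 3, n),
    "igf1": rng.normal(100, 20, n),
})
# Build in the reported pattern: depression relates to worse memory
# only when IGF-1 is low.
low_igf1 = (df["igf1"] < df["igf1"].median()).astype(float)
df["memory"] = 50 - 1.5 * df["depressive_symptoms"] * low_igf1 + rng.normal(0, 5, n)

# A significant depressive_symptoms:igf1 interaction term is the statistical
# signature of the moderation effect described in the article.
model = smf.ols("memory ~ depressive_symptoms * igf1", data=df).fit()
print(model.summary().tables[1])
```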

Senior author Kathi L. Heffner, Ph.D., assistant professor in the School of Medicine and Dentistry’s Department of Psychiatry, had originally examined possible associations between IGF-1 and memory in a sample of 94 healthy older adults, but couldn’t find strong or consistent evidence.

Heffner then approached the study’s lead author Feng (Vankee) Lin, Ph.D., R.N., assistant professor at the School of Nursing, for input because of her expertise in cognitive aging. Lin is a young nurse researcher whose collaborative work focuses on developing multi-modal interventions to slow the progression of cognitive decline in at-risk adults and reduce their risk of developing dementia and Alzheimer’s disease.

“Vankee spearheaded the idea to examine the role of depressive symptoms in these data, which resulted in the interesting link,” Heffner said.

The association discovered between memory loss, depression and IGF-1 suggests that IGF-1 could be a promising factor in protecting memory, Lin said.

“IGF-1 is currently a hot topic in terms of how it can promote neuroplasticity and slow cognitive decline,” Lin said. “Depression, memory and the IGF-1 receptor are all located in a brain region which regulates a lot of complicated cognitive ability. As circulating IGF-1 can pass through the blood-brain barrier, it may work to influence the brain in a protective way.”

Lin said more studies are needed of people with depressive symptoms and of those with Alzheimer’s disease, but this study opens an important door for further research on the significance of IGF-1 levels in both memory loss and depression.

“It really makes a lot of sense to further develop this study,” Lin said. “If this could be a way to simultaneously tackle depression while preventing cognitive decline it could be a simple intervention to implement.”

Heffner said that clinical trials are underway to determine whether IGF-1 could be an effective therapeutic agent to slow or prevent cognitive decline in people at risk.

“Cognitive decline can also increase risk for depressive symptoms, so if IGF-1 protects people from cognitive decline, this may translate to reduced risk for depression as well,” Heffner said.

(Source: urmc.rochester.edu)

Filed under depression memory loss IGF-1 cognitive decline depressive symptoms learning memory neuroscience science

258 notes

Memory Accuracy and Strength Can Be Manipulated During Sleep

The sense of smell might seem intuitive, almost something you take for granted. But researchers from NYU Langone Medical Center have found that memory of specific odors depends on the ability of the brain to learn, process and recall accurately and effectively during slow-wave sleep — a deep sleep characterized by slow brain waves.

The sense of smell is one of the first things to fail in neurodegenerative disorders such as Alzheimer’s disease, Parkinson’s disease, and schizophrenia. Researchers believe that, down the road, a better understanding of how the brain processes odors could lead to novel therapies that target specific neurons in the brain, perhaps enhancing memory consolidation and memory accuracy.

Reporting in the Journal of Neuroscience online April 9, researchers in the lab of Donald A. Wilson, PhD, a professor in the departments of Child and Adolescent Psychiatry and Neuroscience and Physiology at NYU Langone, and a research scientist at the NYU-affiliated Nathan Kline Institute for Psychiatric Research, showed in experiments with rats that odor memory was strengthened when odors sensed the previous day were replayed during sleep. Memories deepened more when odor reinforcement occurred during sleep than when rats were awake.

When an odor that the rats had learned while awake was replayed during slow-wave sleep, they formed a stronger memory for that odor the next day than rats that received no replay, or that received replay only while awake.

However, when the research team replayed an odor pattern during sleep that the rats had not previously learned, the rats formed false memories of many different odors. When the team pharmacologically prevented neurons from communicating with each other during slow-wave sleep, the accuracy of the odor memory was also impaired.

The rats were initially trained to recognize odors through conditioning. Using electrodes in the olfactory bulb, a part of the brain responsible for perceiving smells, the researchers evoked different smell perceptions with precise patterns of electrical stimulation. Then, by replaying those patterns electrically, they were able to test the effects of manipulating slow-wave sleep.

Replay of learned electrical odors during slow-wave sleep enhanced the memory for those odors. When the learned smells were replayed while the rats were awake, the strength of the memory decreased. Finally, when a false pattern that the rats had never learned was introduced, they could not accurately discriminate it from the learned odor.
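
As a rough illustration of the comparison being made, the sketch below simulates memory-strength scores under the three replay conditions and tests for a group difference. The numbers are invented to mirror the qualitative result only; they are not the study's measurements.

```python
# Toy comparison of memory strength for a trained odor under three replay
# conditions. Values are simulated to mirror the qualitative pattern
# (sleep replay > no replay > awake replay), not data from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sleep_replay = rng.normal(0.80, 0.08, 12)   # replay during slow-wave sleep
no_replay    = rng.normal(0.60, 0.08, 12)   # trained odor, never replayed
awake_replay = rng.normal(0.50, 0.08, 12)   # replay while awake weakened memory

f, p = stats.f_oneway(sleep_replay, no_replay, awake_replay)
print(f"one-way ANOVA across conditions: F = {f:.2f}, p = {p:.4f}")
```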

“Our findings confirm the importance of brain activity during sleep for both memory strength and accuracy,” says Dr. Wilson, the study’s senior author. “What we think is happening is that during slow-wave sleep, neurons in the brain communicate with each other, and in doing so, strengthen their connections, permitting storage of specific information.”

Dr. Wilson says these findings are the first to demonstrate that memory accuracy, not just memory strength, is altered during slow-wave sleep. In future research, Dr. Wilson and his team hope to examine how sleep disorders affect memory and perception.

Filed under memory learning olfactory bulb sleep smell perception neuroscience science

131 notes

From Learning in Infancy to Planning Ahead in Adulthood: Sleep’s Vital Role for Memory

Babies and young children make giant developmental leaps all of the time. Sometimes, it seems, even overnight they figure out how to recognize certain shapes or what the word “no” means no matter who says it. It turns out that making those leaps could be a nap away: New research finds that infants who nap are better able to apply lessons learned to new skills, while preschoolers are better able to retain learned knowledge after napping.

“Sleep plays a crucial role in learning from early in development,” says Rebecca Gómez of the University of Arizona. She will be presenting her new work, which looks specifically at how sleep enables babies and young children to learn language over time, at the Cognitive Neuroscience Society (CNS) annual meeting in Boston today, as part of a symposium on sleep and memory.

“We want to show that sleep is not just a necessary evil for the organism to stay functional,” says Susanne Diekelmann of the University of Tübingen in Germany, who is chairing the symposium. “Sleep is an active state that is essential for the formation of lasting memories.”

A growing body of research shows how memories become reactivated during sleep, and new work is shedding light on exactly when and how memories get stored and reactivated. “Sleep is a highly selective state that preferentially strengthens memories that are relevant for our future behavior,” Diekelmann says. “Sleep can also abstract general rules from single experiences, which helps us to deal more efficiently with similar situations in the future.”

Filed under sleep learning memory infants neuroscience science

136 notes

What songbirds tell us about how we learn

When you throw a wild pitch or sing a flat note, it could be that your basal ganglia made you do it. This area in the middle of the brain is involved in motor control and learning. And one reason for that errant toss or off-key note may be that your brain prompted you to vary your behavior, helping you learn through trial and error to perform better.

But how does the brain do this? How does it cause you to vary your behavior?

Along with researchers from the University of California, San Francisco, the Indian Institute of Science Education and Research, and Duke University, Professor Sarah Woolley of the Department of Biology investigated this question in songbirds, which learn their songs during development much as humans learn to speak. In particular, songbirds memorize the song of their father or tutor, then practice that song until they can produce a similar one.

“As adults, they continue to produce this learned song, but what’s interesting is that they keep it just a little bit variable,” says Woolley. “The variability isn’t a default; it isn’t that they can’t produce a better version, they can — in particular when they sing to a female. So when they sing alone and their song is variable, it’s because they are actively making it that way.”

The team used this change in song variability to examine how single cells in different parts of the brain altered their activity depending on the social environment.
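
One simple way to quantify that kind of variability change, sketched below, is to compare the trial-to-trial coefficient of variation of a song feature between social contexts. The feature, values, and context labels here are illustrative assumptions, not the study's data or analysis code.

```python
# Compare song variability across social contexts using the coefficient of
# variation (CV) of a song feature, e.g. pitch. Simulated data: undirected
# (alone) song is more variable than female-directed song.
import numpy as np

rng = np.random.default_rng(2)
pitch_alone     = rng.normal(1000.0, 25.0, 50)  # undirected song: more variable
pitch_to_female = rng.normal(1000.0, 8.0, 50)   # directed song: tightly controlled

def cv(x: np.ndarray) -> float:
    """Coefficient of variation: standard deviation relative to the mean."""
    return float(np.std(x) / np.mean(x))

print(f"CV singing alone:     {cv(pitch_alone):.4f}")
print(f"CV singing to female: {cv(pitch_to_female):.4f}")
```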

“We found that the social modulation of variability emerged within the basal ganglia, a brain area known to be important for learning and producing movements not only in birds but also in mammals, including humans,” says Woolley. “This indicates that one way the basal ganglia may be important in motor learning across species is through its involvement in generating variability.”

The researchers studied songbirds because they have a cortical-basal ganglia circuit that is specific for singing. In contrast, for most behaviors in other species, the cortical-basal ganglia cells and circuits important for a particular behavior, like learning to walk, may be situated right next to, or even intermingled with, cells and circuits important for other behaviors. “The evolution in songbirds of an identifiable circuit for a single complex behavior gives us a tremendous advantage as we try to parse out exactly what these parts of the brain do and how they do it,” says Woolley.

Useful for Parkinson’s disease

The basal ganglia are dramatically affected in illnesses such as Parkinson’s and Huntington’s disease. The team’s findings may eventually be relevant to understanding the changes in learning and movement flexibility that occur in those diseases.

“These are the kinds of questions that we are now starting to pursue in the lab: how variability is affected when you radically manipulate the system, akin to what happens during disease,” says Woolley.

(Source: mcgill.ca)

Filed under basal ganglia songbirds learning neurodegenerative diseases variability neuroscience science

256 notes

Dog watch - How attention changes in the course of a dog’s life

Dogs are known as man’s best friend. No other pet has adjusted to the human lifestyle as well as this four-legged companion. Scientists at the Messerli Research Institute at the Vetmeduni Vienna are the first to investigate how dogs’ attentiveness changes over the course of their lives and to what extent it resembles that of humans. The outcome: the developmental trajectories of attentional and sensorimotor control in dogs are very similar to those found in humans. The results were published in the journal Frontiers in Psychology.

Dogs are individual personalities, possess awareness, and are particularly known for their learning capability, or trainability. To learn successfully, they must muster sufficient attention and concentration. However, the attentiveness of dogs changes over the course of their lives, as it does in humans. Lead author Lisa Wallis and her colleagues investigated 145 Border Collies aged 6 months to 14 years in the Clever Dog Lab at the Vetmeduni Vienna and determined, for the first time, how attentiveness changes over a dog’s entire life, using a cross-sectional study design.

Humans are more interesting for dogs than objects

To determine how rapidly dogs of various age groups pay attention to objects or humans, the scientists performed two tests. In the first situation the dogs were confronted with a child’s toy suspended suddenly from the ceiling. The scientists measured how rapidly each dog reacted to this occurrence and how quickly the dogs became accustomed to it. Initially all dogs reacted with similar speed to the stimulus, but older dogs lost interest in the toy more rapidly than younger ones did.

In the second test situation, a person known to the dog entered the room and pretended to paint the wall. All dogs reacted by watching the person and the paint roller in the person’s hands for a longer duration than the toy hanging from the ceiling.

Wallis’ conclusion: “So-called social attentiveness was more pronounced in all dogs than ‘non-social’ attentiveness. The dogs generally tended to watch a person with an object for longer than an object on its own. We found that older dogs, like older human beings, demonstrated a certain calmness. They were less affected by new items in the environment and thus showed less interest than younger dogs.”

Selective attention is highest in mid-adulthood

In a further test, the scientists investigated so-called selective attention. The dogs performed an alternating attention task with two consecutive subtasks: first they had to find a food reward thrown onto the floor by the experimenter, and then, after eating the food, the experimenter waited for the dog to establish eye contact with her. These steps were repeated for a further twenty trials. The establishment of eye contact was marked by a clicking sound from a “clicker,” and small pieces of hot dog were used as a reward. The researchers measured the time needed to find the food and to look up into the experimenter’s face. On both measures, middle-aged dogs (3 to 6 years) reacted most rapidly.
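
A toy version of this latency analysis appears below: simulated food-finding latencies are binned by age group to expose the mid-adulthood minimum. The age bins and all numbers are invented for illustration and do not reproduce the study's measurements.

```python
# Group simulated response latencies by age band and look for the
# mid-adulthood minimum reported in the study. Illustrative data only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
ages = rng.uniform(0.5, 14.0, 145)  # 145 Border Collies, 6 months to 14 years
# U-shaped latency: fastest around 3-6 years, slower for the young and old.
latency = 1.5 + 0.08 * (ages - 4.5) ** 2 + rng.normal(0, 0.3, 145)

df = pd.DataFrame({"age": ages, "latency_s": latency})
df["age_group"] = pd.cut(df["age"], bins=[0, 1, 3, 6, 10, 14],
                         labels=["puppy", "young", "middle", "older", "senior"])
print(df.groupby("age_group", observed=True)["latency_s"].mean().round(2))
```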

“Under these test conditions, sensorimotor abilities were highest among dogs of middle age. Younger dogs probably fared more poorly because of their general lack of experience. Motor abilities in dogs, as in humans, deteriorate with age. Humans between the ages of 20 and 39 experience a similar peak in sensorimotor abilities,” says Wallis.

Adolescent dogs have the steepest learning curve

Dogs also go through a difficult phase during adolescence (1-2 years) which affects their ability to pay attention. This phase of hormonal change is comparable to puberty in humans. As a result, young dogs occasionally reacted with some delay in the clicker test. However, Wallis found that adolescent dogs improved their performance more rapidly than other age groups over repeated trials of the clicker test. In other words, the learning curve was steepest in puberty. “Thus, dogs in puberty have great potential for learning and therefore trainability,” says Wallis.

Dogs as a model for ADHD and Alzheimer’s disease

As the development of attentiveness in the course of a dog’s life is similar to human development in many respects, dogs make appropriate animal models for various human psychological diseases. For instance, the course of diseases like ADHD (attention deficit/hyperactivity disorder) or Alzheimer’s can be studied by observing the behavior of dogs. In her current project Wallis is investigating the effects of diet on cognition in older dogs together with her colleague Durga Chapagain. The scientists are still looking for dog owners who would like to participate in a long-term study.

Filed under attention learning social attentiveness dogs aging animal model psychology neuroscience science

273 notes

New respect for primary visual cortex

In the context of learning and memory, the primary visual cortex is the Rodney Dangerfield of cortical areas: It gets no respect. Also known as “V1,” this brain region is the very first place where information from the retina arrives in the cerebral cortex.

Many existing models of visual processing have dismissed V1 as a static filter, capable only of detecting objects’ edges and passively conveying this information to higher-order visual areas that do the hard work of learning, recognition, prediction, and cognition. But a new MIT study brings fresh respect for the lowly visual cortex: Building on growing evidence that V1 can do more than detect edges, neuroscientist Mark Bear and his postdoc Jeffrey Gavornik have shown that V1 is the site of a complex type of learning involving spatial-temporal sequences.

“We rely on spatial-temporal sequence learning for everything we do,” says Bear, the Picower Professor of Neuroscience at MIT, a Howard Hughes Medical Institute investigator, and the senior author of the study, which appeared in the March 23 online edition of Nature Neuroscience. “It is how we predict what is coming next so that we can modify our behavior accordingly.”

Sequence learning — or a lack thereof — explains why driving on an unfamiliar road at night, with sparse visual information, is such a white-knuckle experience compared with driving more familiar routes that offer visual cues to predict the road ahead. It is also what allows baseball batters to hit balls traveling too fast to actually see: They do so using visual cues from the pitcher’s throw to predict the arc, trajectory, and timing based on past experience.

The value of V1

In the past decade, researchers have begun to chip away at the view of V1 as an immutable, passive brain region. Studies have shown, for example, that V1 can change in response to experience, a hallmark of plasticity. “Every new discovery allowed us to ask a new question that would have seemed outlandish before,” Bear says.

For the new study, the outlandish question was whether V1 could learn to recognize sequences. To find out, Gavornik designed experiments using gratings of black and white stripes in different orientations — the type of stimuli known to cause responses in V1 neurons. For a training sequence, he showed mice gratings in four different orientations — a combination labeled “ABCD” — in the same order 200 times a day for four days. Control mice saw randomly ordered sequences.

On the fifth day, Gavornik presented the trained and random sequences and measured the V1 neural responses. Among mice that had seen the learned sequence, ABCD, that sequence elicited a more powerful response than unfamiliar sequences — indicating that V1 had changed in response to experience.
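
To see why repeated exposure alone could produce a stronger response to the trained order, consider the toy Hebbian model below: each A→B→C→D transition slightly strengthens a feed-forward association, so the trained sequence later evokes a larger summed response than a shuffled one. This is a conceptual sketch, not the mechanism or any model from the paper.

```python
# Toy Hebbian account of sequence learning in V1: repeated ABCD exposure
# strengthens associations between successively active units, so the trained
# order later evokes a larger summed response than a novel order.
import numpy as np

units = {s: i for i, s in enumerate("ABCD")}
W = np.zeros((4, 4))          # W[i, j]: learned association from unit i to j

# Training: 200 presentations per day for 4 days; each A->B, B->C, C->D
# transition gets a small Hebbian increment.
for _ in range(200 * 4):
    for a, b in zip("ABC", "BCD"):
        W[units[a], units[b]] += 0.001

def sequence_response(seq: str) -> float:
    """Summed response: each element contributes 1 plus whatever learned
    prediction is carried over from the preceding element."""
    total, prev = 0.0, None
    for s in seq:
        total += 1.0 + (W[units[prev], units[s]] if prev else 0.0)
        prev = s
    return total

print(f"trained sequence ABCD: {sequence_response('ABCD'):.2f}")
print(f"novel sequence   DCBA: {sequence_response('DCBA'):.2f}")
```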

Bear then altered the timing of the sequences and found that V1 also detected very precise temporal alterations. That makes sense, he notes: In real life, sequencing and timing are always coupled, so the brain must have a mechanism to respond to this pairing.

Implications for human disease

The most “mind-blowing” results of the study, Bear says, came from experiments testing the neural response when the second visual stimulus, “B,” was replaced with a gray screen following the first stimulus, “A.”

“The primary visual cortex responded as if B were there,” Bear says. “The recordings did not report on what the animal was seeing, but on what the animal was expecting to see.”

“V1 had formed a memory that B follows A, and it used that memory to predict what would happen next, after A,” Gavornik adds. “It is as if the mouse were [acting] based on previously learned visual cues.”

But did the experience-dependent plasticity evident in V1 actually arise there, or did it reflect feedback from a higher brain region that underwent a change? To find out, Gavornik injected a blocker of receptors for the neurotransmitter acetylcholine, which is also known to be important for memory formation in the brain. He found that this treatment prevented learning in the targeted V1 region.

“A disruption in acetylcholine signaling is one of the first things to go wrong in Alzheimer’s disease, and one of the few approved treatments for this disease are drugs that promote the action of acetylcholine,” Bear says. “Our study raises the possibility of using visual sequence learning as a sensitive assay for earlier diagnosis of Alzheimer’s, when therapeutic interventions have a better chance of slowing the disease.”

Spatial-temporal sequence learning is also impaired in schizophrenia and dyslexia, but the origins of this impairment remain a mystery. “When we discover what is going on at a neural and molecular level, maybe we can understand better what happens in human disorders and look for new therapeutic approaches,” Gavornik says.

On a broader scale, the involvement of V1 in higher-level cognitive functions might have intrigued the renowned Spanish neuroscientist (and future Nobel laureate) Santiago Ramón y Cajal, who in 1899 speculated that despite significant heterogeneity, different regions of cortex still follow general principles. “Our study supports Cajal’s theory,” Bear says, “because we show that basic cortical computations may be fundamentally similar in higher and lower regions, even if they are used to serve different functions.”

Filed under primary visual cortex sequence learning learning V1 plasticity neurons neuroscience science

456 notes

Physics-minded crows bring Aesop’s fable to life

Eureka! Like Archimedes in his bath, crows know how to displace water, showing that Aesop’s fable The Crow and the Pitcher isn’t purely fictional.

To see if New Caledonian crows could handle some of the basic principles of volume displacement, Sarah Jelbert at the University of Auckland in New Zealand and her colleagues placed scraps of meat just out of a crow’s reach, floating in a series of tubes that were part-filled with water. Objects potentially useful for bringing up the water level, like stones or heavy rubber erasers, were left nearby.

The crows successfully figured out that heavy and solid objects would help them get a treat faster. They also preferred to drop objects in tubes where they could access a reward more easily, picking out tubes with higher water levels and choosing tubes of water over sand-filled ones.
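
The physics the crows are exploiting reduces to simple displacement arithmetic: a sinking object raises the water level by its volume divided by the tube's cross-sectional area. The sketch below works through an example with invented tube and stone dimensions.

```python
# Back-of-the-envelope displacement arithmetic for the crow task.
# Tube and stone dimensions are invented for illustration.
import math

tube_radius_cm = 2.5
tube_area_cm2 = math.pi * tube_radius_cm ** 2  # cross-sectional area

def level_rise_cm(object_volume_cm3: float, sinks: bool = True) -> float:
    """Water-level rise for one dropped object. A sinking object displaces
    its full volume; a floating one displaces less (treated as zero here for
    simplicity, which is why heavy, solid objects are the better choice)."""
    return object_volume_cm3 / tube_area_cm2 if sinks else 0.0

stone_volume_cm3 = 15.0
gap_to_meat_cm = 4.0  # how far the floating meat sits below the crow's reach
stones_needed = math.ceil(gap_to_meat_cm / level_rise_cm(stone_volume_cm3))
print(f"each stone raises the level {level_rise_cm(stone_volume_cm3):.2f} cm")
print(f"stones needed to close a {gap_to_meat_cm} cm gap: {stones_needed}")
```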

Read more

Filed under animal cognition learning New Caledonian crows crows reasoning psychology neuroscience science

76 notes

EEG study: Brain infers structure, rules of tasks

A new study documents the brain activity underlying our strong tendency to infer a structure of context and rules when learning new tasks (even when no such structure is valid). The findings, which revealed individual differences, show how we try to apply task knowledge to similar situations and could inform future research on learning disabilities.

In life, many tasks have a context that dictates the right actions, so when people learn to do something new, they’ll often infer cues of context and rules. In a new study, Brown University brain scientists took advantage of that tendency to track the emergence of such rule structures in the frontal cortex — even when such structure was not necessary or even helpful to learn — and to predict from EEG readings how people would apply them to learn new tasks speedily.

Context and rule structures are everywhere. They allow an iPhone user who switches to an Android phone, for example, to reason that dimming the screen would involve finding a “settings” icon that will probably lead to a slider control for “brightness.” But when the context changes, inflexible generalization can lead a person temporarily astray — like a small-town tourist who greets strangers on the streets of New York City. In some developmental learning disabilities, the whole process of inferring abstract structures may be impaired.

“The world tends to be organized, and so we probably develop prior [notions] over time that there is going to be a structure,” said Anne Collins, a postdoctoral scholar in the Department of Cognitive, Linguistic, and Psychological Sciences at Brown and lead author of the study published March 25 in the Journal of Neuroscience. “When the world is organized, you just reduce the size of what you have to learn about by being able to generalize across situations in which the same things usually happen together. It is efficient to generalize if there is structure, and there usually is structure.”
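Collins’s efficiency point can be made concrete with a little counting: if several contexts secretly reuse a small set of rules, a learner that infers the shared structure has far fewer mappings to acquire than one that learns each context from scratch. The numbers below are arbitrary, chosen only to make the comparison concrete.

```python
# Counting argument for why inferring rule structure is efficient:
# shared rule sets shrink the number of stimulus-action mappings to learn.
contexts = 6      # e.g., six different phone apps or situations
stimuli = 8       # cues encountered within each context
rule_sets = 2     # latent structure: the contexts actually reuse two rule sets

independent = contexts * stimuli             # learn every context from scratch
structured = rule_sets * stimuli + contexts  # learn shared rules, plus which
                                             # rule set each context uses
print(f"mappings to learn without structure: {independent}")  # 48
print(f"mappings to learn with structure:    {structured}")   # 22
```
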

Filed under brain activity frontal cortex EEG learning psychology neuroscience science

123 notes

MRI reveals genetic activity

New MIT technique could help decipher genes’ roles in learning and memory

Doctors commonly use magnetic resonance imaging (MRI) to diagnose tumors, damage from stroke, and many other medical conditions. Neuroscientists also rely on it as a research tool for identifying parts of the brain that carry out different cognitive functions.

Now, a team of biological engineers at MIT is trying to adapt MRI to a much smaller scale, allowing researchers to visualize gene activity inside the brains of living animals. Tracking these genes with MRI would enable scientists to learn more about how the genes control processes such as forming memories and learning new skills, says Alan Jasanoff, an MIT associate professor of biological engineering and leader of the research team.

“The dream of molecular imaging is to provide information about the biology of intact organisms, at the molecule level,” says Jasanoff, who is also an associate member of MIT’s McGovern Institute for Brain Research. “The goal is to not have to chop up the brain, but instead to actually see things that are happening inside.”

To help reach that goal, Jasanoff and colleagues have developed a new way to image a “reporter gene” — an artificial gene that turns on or off to signal events in the body, much like an indicator light on a car’s dashboard. In the new study, the reporter gene encodes an enzyme that interacts with a magnetic contrast agent injected into the brain, making the agent visible with MRI. This approach, described in a recent issue of the journal Chemical Biology, allows researchers to determine when and where that reporter gene is turned on.

An on/off switch

MRI uses magnetic fields and radio waves that interact with protons in the body to produce detailed images of the body’s interior. In brain studies, neuroscientists commonly use functional MRI to measure blood flow, which reveals which parts of the brain are active during a particular task. When scanning other organs, doctors sometimes use magnetic “contrast agents” to boost the visibility of certain tissues.

The new MIT approach includes a contrast agent called a manganese porphyrin and the new reporter gene, which codes for a genetically engineered enzyme that alters the electric charge on the contrast agent. Jasanoff and colleagues designed the contrast agent so that it is soluble in water and readily eliminated from the body, making it difficult to detect by MRI. However, when the engineered enzyme, known as SEAP, slices phosphate molecules from the manganese porphyrin, the contrast agent becomes insoluble and starts to accumulate in brain tissues, allowing it to be seen.

The natural version of SEAP is found in the placenta, but not in other tissues. By injecting a virus carrying the SEAP gene into the brain cells of mice, the researchers were able to incorporate the gene into the cells’ own genome. Brain cells then started producing the SEAP protein, which is secreted from the cells and can be anchored to their outer surfaces. That’s important, Jasanoff says, because it means that the contrast agent doesn’t have to penetrate the cells to interact with the enzyme.

Researchers can then find out where SEAP is active by injecting the MRI contrast agent, which spreads throughout the brain but accumulates only near cells producing the SEAP protein.
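
The reporter logic can be caricatured with a two-compartment simulation, shown below: soluble agent clears from all tissue, but wherever SEAP is present some of it is converted to an insoluble, MRI-visible form that stays behind. The rate constants are invented; this is a sketch of the qualitative mechanism, not the paper's kinetics.

```python
# Toy kinetics of the SEAP reporter scheme: soluble contrast agent either
# clears from tissue or, where SEAP is expressed, is converted to an
# insoluble form that accumulates. Rate constants are illustrative only.
dt, steps = 0.1, 600            # time step (hours) and number of steps
k_clear = 0.5                   # clearance rate of the soluble agent (1/h)
k_convert = 0.8                 # SEAP-dependent conversion rate (1/h)

for seap_present in (False, True):
    soluble, insoluble = 1.0, 0.0   # start with a unit dose of soluble agent
    for _ in range(steps):
        converted = k_convert * soluble * dt if seap_present else 0.0
        soluble -= k_clear * soluble * dt + converted
        insoluble += converted
    label = "SEAP on " if seap_present else "SEAP off"
    print(f"{label}: retained (MRI-visible) agent = {insoluble:.2f}")
```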

Exploring brain function

In this study, which was designed to test this general approach, the detection system revealed only whether the SEAP gene had been successfully incorporated into brain cells. However, in future studies, the researchers intend to engineer the SEAP gene so it is only active when a particular gene of interest is turned on.

Jasanoff first plans to link the SEAP gene with so-called “immediate early genes,” which are necessary for brain plasticity — the weakening and strengthening of connections between neurons, which is essential to learning and memory.

“As people who are interested in brain function, the top questions we want to address are about how brain function changes patterns of gene expression in the brain,” Jasanoff says. “We also imagine a future where we might turn the reporter enzyme on and off when it binds to neurotransmitters, so we can detect changes in neurotransmitter levels as well.”

Assaf Gilad, an assistant professor of radiology at Johns Hopkins University, says the MIT team has taken a “very creative approach” to developing noninvasive, real-time imaging of gene activity. “These kinds of genetically engineered reporters have the potential to revolutionize our understanding of many biological processes,” says Gilad, who was not involved in the study.

Filed under gene expression gene mapping secreted alkaline phosphatase learning memory neuroscience science

291 notes

Electric “thinking cap” controls learning speed

Caffeine-fueled cram sessions are routine occurrences on any college campus. But what if there was a better, safer way to learn new or difficult material more quickly? What if “thinking caps” were real?

In a new study published in the Journal of Neuroscience, Vanderbilt psychologists Robert Reinhart, a Ph.D. candidate, and Geoffrey Woodman, assistant professor of psychology, show that it is possible to selectively manipulate our ability to learn through the application of a mild electrical current to the brain, and that this effect can be enhanced or depressed depending on the direction of the current.

The medial-frontal cortex is believed to be the part of the brain responsible for the instinctive “Oops!” response we have when we make a mistake. Previous studies have shown that a spike of negative voltage originates from this area of the brain milliseconds after a person makes a mistake, but not why. Reinhart and Woodman wanted to test the idea that this activity influences learning because it allows the brain to learn from our mistakes. “And that’s what we set out to test: What is the actual function of these brainwaves?” Reinhart said. “We wanted to reach into your brain and causally control your inner critic.”

Reinhart and Woodman set out to test several hypotheses: One, they wanted to establish that it is possible to control the brain’s electrophysiological response to mistakes, and two, that its effect could be intentionally regulated up or down depending on the direction of an electrical current applied to it. This bi-directionality had been observed before in animal studies, but not in humans. Additionally, the researchers set out to see how long the effect lasted and whether the results could be generalized to other tasks.

Stimulating the brain

Using an elastic headband that secured two saline-soaked sponge electrodes to the cheek and the crown of the head, the researchers applied 20 minutes of transcranial direct current stimulation (tDCS) to each subject. In tDCS, a very mild direct current travels from the anodal electrode through the skin, muscle, bone and brain, and out through the corresponding cathodal electrode to complete the circuit. “It’s one of the safest ways to noninvasively stimulate the brain,” Reinhart said. The current is so gentle that subjects reported only a few seconds of tingling or itching at the beginning of each stimulation session.

In each of three sessions, subjects were randomly given either an anodal (current traveling from the electrode on the crown of the head to the one on the cheek), cathodal (current traveling from cheek to crown) or a sham condition that replicated the physical tingling sensation under the electrodes without affecting the brain. The subjects were unable to tell the difference between the three conditions.

The learning task

After 20 minutes of stimulation, subjects were given a learning task that involved figuring out by trial and error which buttons on a game controller corresponded to specific colors displayed on a monitor. The task was made more complicated by occasionally displaying a signal for the subject not to respond—sort of like a reverse “Simon Says.” For even more difficulty, they had less than a second to respond correctly, providing many opportunities to make errors—and, therefore, many opportunities for the medial-frontal cortex to fire.
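
For intuition about the task's structure, here is a bare-bones reconstruction in Python: trial-and-error mapping of colors to buttons, with occasional no-go trials. It is an illustrative sketch under stated assumptions, not the researchers' actual task code, and it omits the sub-second response deadline and the EEG recording.

```python
# Bare-bones sketch of the learning task: discover by trial and error which
# button goes with which color, with occasional no-go trials mixed in.
import random

COLORS = ["red", "green", "blue", "yellow"]
BUTTONS = [0, 1, 2, 3]
true_mapping = dict(zip(COLORS, random.sample(BUTTONS, len(BUTTONS))))

learned = {}                      # the subject's current best guesses
errors = 0
for trial in range(60):
    color = random.choice(COLORS)
    if random.random() < 0.2:     # no-go signal: the correct response is none
        continue
    guess = learned.get(color, random.choice(BUTTONS))
    if guess == true_mapping[color]:
        learned[color] = guess    # success reinforces the mapping
    else:
        errors += 1               # error signal: the medial-frontal "Oops!"
        learned[color] = random.choice(BUTTONS)  # try a different button
print(f"errors across 60 trials: {errors}")
```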

The researchers measured the electrical brain activity of each participant. This allowed them to watch as the brain changed at the very moment participants were making mistakes, and most importantly, allowed them to determine how these brain activities changed under the influence of electrical stimulation.

Controlling the inner critic

When anodal current was applied, the spike was almost twice as large on average and was significantly higher in a majority of the individuals tested (about 75 percent of all subjects across four experiments). This was reflected in their behavior; they made fewer errors and learned from their mistakes more quickly than they did after the sham stimulus. When cathodal current was applied, the researchers observed the opposite result: The spike was significantly smaller, and the subjects made more errors and took longer to learn the task. “So when we up-regulate that process, we can make you more cautious, less error-prone, more adaptable to new or changing situations—which is pretty extraordinary,” Reinhart said.

The effect was not noticeable to the subjects — their error rates varied only about 4 percent either way, and their behavioral adjustments shifted by a matter of only 20 milliseconds — but it was plain to see on the EEG. “This success rate is far better than that observed in studies of pharmaceuticals or other types of psychological therapy,” said Woodman.

The researchers found that the effects of a 20-minute stimulation did transfer to other tasks and lasted about five hours.

The implications of the findings extend beyond the potential to improve learning. The technique may also have clinical benefits in the treatment of conditions such as schizophrenia and ADHD, which are associated with performance-monitoring deficits.

Filed under learning executive control transcranial direct current stimulation neuroscience science
