In the context of learning and memory, the primary visual cortex is the Rodney Dangerfield of cortical areas: It gets no respect. Also known as “V1,” this brain region is the very first place where information from the retina arrives in the cerebral cortex.
Many existing models of visual processing have dismissed V1 as a static filter, capable only of detecting objects’ edges and passively conveying this information to higher-order visual areas that do the hard work of learning, recognition, prediction, and cognition. But a new MIT study brings fresh respect for the lowly visual cortex: Building on growing evidence that V1 can do more than detect edges, neuroscientist Mark Bear and his postdoc Jeffrey Gavornik have shown that V1 is the site of a complex type of learning involving spatial-temporal sequences.
“We rely on spatial-temporal sequence learning for everything we do,” says Bear, the Picower Professor of Neuroscience at MIT, a Howard Hughes Medical Institute investigator, and the senior author of the study, which appeared in the March 23 online edition of Nature Neuroscience. “It is how we predict what is coming next so that we can modify our behavior accordingly.”
Sequence learning — or a lack thereof — explains why driving on an unfamiliar road at night, with sparse visual information, is such a white-knuckle experience compared with driving more familiar routes that offer visual cues to predict the road ahead. It is also what allows baseball batters to hit balls traveling too fast to actually see: They do so using visual cues from the pitcher’s throw to predict the arc, trajectory, and timing based on past experience.
The value of V1
In the past decade, researchers have begun to chip away at the view of V1 as an immutable, passive brain region. Studies have shown, for example, that V1 can change in response to experience, a hallmark of plasticity. “Every new discovery allowed us to ask a new question that would have seemed outlandish before,” Bear says.
For the new study, the outlandish question was whether V1 could learn to recognize sequences. To find out, Gavornik designed experiments using gratings of black and white stripes in different orientations — the type of stimuli known to cause responses in V1 neurons. For a training sequence, he showed mice gratings in four different orientations — a combination labeled “ABCD” — in the same order 200 times a day for four days. Control mice saw randomly ordered sequences.
On the fifth day, Gavornik presented both the training sequences and random sequences, and measured the V1 neural responses. Among mice that had seen the learned sequence, ABCD, that sequence elicited a more powerful response than unfamiliar sequences — indicating that V1 had changed in response to experience.
Bear then altered the timing of the sequences and found that V1 also detected very precise temporal alterations. That makes sense, he notes: In real life, sequencing and timing are always coupled, so the brain must have a mechanism to respond to this pairing.
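The training design described above is easy to picture as a stimulus schedule. The sketch below is purely illustrative (the function names, counts as parameters, and random seed are my own, not from the study): it builds four days of the fixed ABCD sequence for trained mice, a randomized schedule for controls, and a test day that interleaves the familiar sequence with novel orderings.

```python
import random

ORIENTATIONS = ["A", "B", "C", "D"]  # four grating orientations

def training_day(n_presentations=200):
    """Trained mice see the same ABCD order on every presentation."""
    return [list(ORIENTATIONS) for _ in range(n_presentations)]

def control_day(n_presentations=200, seed=0):
    """Control mice see the four orientations in a fresh random order each time."""
    rng = random.Random(seed)
    return [rng.sample(ORIENTATIONS, k=4) for _ in range(n_presentations)]

# Four training days for the experimental group (200 presentations per day),
# then a test day that mixes the familiar ABCD sequence with random ones.
training_schedule = {f"day{d}": training_day() for d in range(1, 5)}
test_day = training_day(100) + control_day(100)
```

The point of the mixed test day is the comparison it enables: if V1 were a static filter, familiar and novel orderings of the same four gratings should evoke similar responses, whereas the study found the trained sequence evoked a stronger one.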
Implications for human disease
The most “mind-blowing” results of the study, Bear says, came from experiments testing the neural response when the second visual stimulus, “B,” was replaced with a gray screen following the first stimulus, “A.”
“The primary visual cortex responded as if B were there,” Bear says. “The recordings did not report on what the animal was seeing, but on what the animal was expecting to see.”
“V1 had formed a memory that B follows A, and it used that memory to predict what would happen next, after A,” Gavornik adds. “It is as if the mouse were [acting] based on previously learned visual cues.”
But did the experience-dependent plasticity evident in V1 actually arise there, or did it reflect feedback from a higher brain region that underwent a change? To find out, Gavornik injected a blocker of receptors for the neurotransmitter acetylcholine, which is also known to be important for memory formation in the brain. He found that this treatment prevented learning in the targeted V1 region.
“A disruption in acetylcholine signaling is one of the first things to go wrong in Alzheimer’s disease, and among the few approved treatments for this disease are drugs that promote the action of acetylcholine,” Bear says. “Our study raises the possibility of using visual sequence learning as a sensitive assay for earlier diagnosis of Alzheimer’s, when therapeutic interventions have a better chance of slowing the disease.”
Spatial-temporal sequence learning is also impaired in schizophrenia and dyslexia, but the origins of this impairment remain a mystery. “When we discover what is going on at a neural and molecular level, maybe we can understand better what happens in human disorders and look for new therapeutic approaches,” Gavornik says.
On a broader scale, the involvement of V1 in higher-level cognitive functions might have intrigued the renowned Spanish neuroscientist (and future Nobel laureate) Santiago Ramón y Cajal, who in 1899 speculated that despite significant heterogeneity, different regions of cortex still follow general principles. “Our study supports Cajal’s theory,” Bear says, “because we show that basic cortical computations may be fundamentally similar in higher and lower regions, even if they are used to serve different functions.”






![New respect for primary visual cortex](http://36.media.tumblr.com/5a1ff63c9691a043b9ac80276d6cd7d4/tumblr_n3cbydNTEv1rog5d1o1_500.jpg)

![EEG study: Brain infers structure, rules of tasks](http://41.media.tumblr.com/8a9a32652e45311f2136a6687ae7bc9b/tumblr_n31c2zEUcc1rog5d1o1_500.jpg)

EEG study: Brain infers structure, rules of tasks

A new study documents the brain activity underlying our strong tendency to infer a structure of context and rules when learning new tasks (even when a structure isn’t valid). The findings, which revealed individual differences, show how we try to apply task knowledge to similar situations and could inform future research on learning disabilities.

In life, many tasks have a context that dictates the right actions, so when people learn to do something new, they’ll often infer cues of context and rules. In a new study, Brown University brain scientists took advantage of that tendency to track the emergence of such rule structures in the frontal cortex — even when such structure was not necessary or even helpful to learn — and to predict from EEG readings how people would apply them to learn new tasks speedily.

Context and rule structures are everywhere. They allow an iPhone user who switches to an Android phone, for example, to reason that dimming the screen would involve finding a “settings” icon that will probably lead to a slider control for “brightness.” But when the context changes, inflexible generalization can lead a person temporarily astray — like a small-town tourist who greets strangers on the streets of New York City. In some developmental learning disabilities, the whole process of inferring abstract structures may be impaired.

“The world tends to be organized, and so we probably develop prior [notions] over time that there is going to be a structure,” said Anne Collins, a postdoctoral scholar in the Department of Cognitive, Linguistic, and Psychological Sciences at Brown and lead author of the study published March 25 in the Journal of Neuroscience. “When the world is organized, you just reduce the size of what you have to learn about by being able to generalize across situations in which the same things usually happen together. It is efficient to generalize if there is structure, and there usually is structure.”

Read more

