Neuroscience

Articles and news from the latest research reports.



New respect for primary visual cortex



In the context of learning and memory, the primary visual cortex is the Rodney Dangerfield of cortical areas: It gets no respect. Also known as “V1,” this brain region is the very first place where information from the retina arrives in the cerebral cortex.
Many existing models of visual processing have dismissed V1 as a static filter, capable only of detecting objects’ edges and passively conveying this information to higher-order visual areas that do the hard work of learning, recognition, prediction, and cognition. But a new MIT study brings fresh respect for the lowly visual cortex: Building on growing evidence that V1 can do more than detect edges, neuroscientist Mark Bear and his postdoc Jeffrey Gavornik have shown that V1 is the site of a complex type of learning involving spatial-temporal sequences.
“We rely on spatial-temporal sequence learning for everything we do,” says Bear, the Picower Professor of Neuroscience at MIT, a Howard Hughes Medical Institute investigator, and the senior author of the study, which appeared in the March 23 online edition of Nature Neuroscience. “It is how we predict what is coming next so that we can modify our behavior accordingly.”
Sequence learning — or a lack thereof — explains why driving on an unfamiliar road at night, with sparse visual information, is such a white-knuckle experience compared with driving more familiar routes that offer visual cues to predict the road ahead. It is also what allows baseball batters to hit balls traveling too fast to actually see: They do so using visual cues from the pitcher’s throw to predict the arc, trajectory, and timing based on past experience.
The value of V1
In the past decade, researchers have begun to chip away at the view of V1 as an immutable, passive brain region. Studies have shown, for example, that V1 can change in response to experience, a hallmark of plasticity. “Every new discovery allowed us to ask a new question that would have seemed outlandish before,” Bear says.
For the new study, the outlandish question was whether V1 could learn to recognize sequences. To find out, Gavornik designed experiments using gratings of black and white stripes in different orientations — the type of stimuli known to cause responses in V1 neurons. For a training sequence, he showed mice gratings in four different orientations — a combination labeled “ABCD” — in the same order 200 times a day for four days. Control mice saw randomly ordered sequences.
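As a rough illustration, the training and control schedules described above can be sketched as follows; the orientation values are my own assumptions for the sake of the example, not taken from the paper:

```python
import random

# Four grating orientations, labeled A-D. The degree values are illustrative
# assumptions; the study only specifies that four orientations were used.
ORIENTATIONS = {"A": 0, "B": 45, "C": 90, "D": 135}

def training_schedule(trained, reps_per_day=200, days=4, seed=0):
    """Return the full list of grating orientations shown over training.

    Trained mice see the fixed order ABCD on every repetition; control
    mice see a freshly shuffled order each time.
    """
    rng = random.Random(seed)
    schedule = []
    for _ in range(days * reps_per_day):
        order = list("ABCD")
        if not trained:
            rng.shuffle(order)  # controls: random order per repetition
        schedule.extend(ORIENTATIONS[label] for label in order)
    return schedule

# 4 stimuli x 200 repetitions x 4 days = 3,200 presentations in total
assert len(training_schedule(trained=True)) == 3200
```

On this sketch, the day-five test would then interleave the fixed and shuffled sequences while recording V1 responses.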
On the fifth day, Gavornik presented the training sequences and random sequences, and measured the V1 neural responses. Among mice that had seen the learned sequence, ABCD, that sequence elicited a more powerful response than unfamiliar sequences — indicating that V1 had changed in response to experience.
Bear then altered the timing of the sequences and found that V1 also detected very precise temporal alterations. That makes sense, he notes: In real life, sequencing and timing are always coupled, so the brain must have a mechanism to respond to this pairing.
Implications for human disease
The most “mind-blowing” results of the study, Bear says, came from experiments testing the neural response when the second visual stimulus, “B,” was replaced with a gray screen following the first stimulus, “A.”
“The primary visual cortex responded as if B were there,” Bear says. “The recordings did not report on what the animal was seeing, but on what the animal was expecting to see.”
“V1 had formed a memory that B follows A, and it used that memory to predict what would happen next, after A,” Gavornik adds. “It is as if the mouse were [acting] based on previously learned visual cues.”
But did the experience-dependent plasticity evident in V1 actually arise there, or did it reflect feedback from a higher brain region that underwent a change? To find out, Gavornik injected a blocker of receptors for the neurotransmitter acetylcholine, which is also known to be important for memory formation in the brain. He found that this treatment prevented learning in the targeted V1 region.
“A disruption in acetylcholine signaling is one of the first things to go wrong in Alzheimer’s disease, and the few approved treatments for this disease include drugs that promote the action of acetylcholine,” Bear says. “Our study raises the possibility of using visual sequence learning as a sensitive assay for earlier diagnosis of Alzheimer’s, when therapeutic interventions have a better chance of slowing the disease.”
Spatial-temporal sequence learning is also impaired in schizophrenia and dyslexia, but the origins of this impairment remain a mystery. “When we discover what is going on at a neural and molecular level, maybe we can understand better what happens in human disorders and look for new therapeutic approaches,” Gavornik says.
On a broader scale, the involvement of V1 in higher-level cognitive functions might have intrigued the renowned Spanish neuroscientist (and future Nobel laureate) Santiago Ramón y Cajal, who in 1899 speculated that despite significant heterogeneity, different regions of cortex still follow general principles. “Our study supports Cajal’s theory,” Bear says, “because we show that basic cortical computations may be fundamentally similar in higher and lower regions, even if they are used to serve different functions.”




Congenitally absent optic chiasm: Making sense of visual pathways
One way to increase our understanding of bilateral brains, like our own, is to inspect their paired sensory systems. In our visual system, the optic nerves normally combine at a place called the optic chiasm. Here half the fibers from each eye cross over to the opposite hemisphere. When this natural partition fails to develop normally, the system compensates in different ways. In people with albinism, for example, almost all of the fibers fully cross at the chiasm. As a result, images are combined in the brain in such a way that full depth of vision is limited. Their eyes also may move slightly independently of each other, or dart back and forth in a condition known as nystagmus. In the opposite situation, in which the optic nerves do not cross at all during development, the condition is called congenital achiasma. An individual with this rare condition was recently studied with different forms of MRI. The results, reported in the journal Neuropsychologia, show that achiasma can occur as an isolated defect, lacking any structural abnormalities in other pathways that cross the midline. The study also demonstrated that the part of the cortex that first receives the visual input, the primary visual cortex, does not rely on information from the opposite side to perform its immediate functions.
When input to the two halves of the brain is parsed according to the eye rather than to the visual field, binocularity is typically affected in some way or another. The eyes may have a slightly crossed configuration, and nystagmus occurs more readily as the visual system updates. The subject of the present study, henceforth known as GB, additionally displayed an eye effect known as seesaw nystagmus. In this type of nystagmus, the eyes alternately move up and down, out of phase with each other. When initial MRI scans failed to show an optic chiasm in patient GB, researchers subsequently verified that it was completely absent by tracing the nerves with diffusion tensor imaging (DTI). The subject was also given a series of tests during a functional MRI scan (fMRI) in order to see how the visual field mapped to his cortex.
By dividing the visual field into four quadrants, and presenting a stimulus to each in turn, the researchers confirmed their suspicions that each hemisphere was mapping the whole visual field. To the level of detail available from the MRI scans, both halves of the visual field, the nasal and temporal retinal maps, were found to overlap completely. The researchers also showed that in the primary visual cortex, monocular stimulation activated only the ipsilateral (same side) cortex. Higher cortical areas, such as the V5 motion-associated area, and the fusiform face region, could be activated binocularly.
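The monocular result can be summarized with a toy routing sketch; this is my own simplification (treating whole eyes rather than nasal and temporal half-retinas), not code from the study:

```python
def v1_activation(eye, achiasma):
    """Which V1 hemisphere(s) respond to monocular stimulation of one eye.

    Normally, nasal-retina fibers cross at the chiasm, so each eye reaches
    both hemispheres. In achiasma no fibers cross, so stimulating one eye
    activates only the ipsilateral (same-side) cortex, as observed in GB.
    """
    if achiasma:
        return {eye}
    return {"left", "right"}

assert v1_activation("left", achiasma=True) == {"left"}
assert v1_activation("right", achiasma=False) == {"left", "right"}
```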
The MRI scans further showed that all parts of the corpus callosum, including those that connect the visual cortex, were intact and of normal size. It appears that at the level of V5 and above, the callosum contributes significantly to binocular integration. In a normal brain, with a normal chiasma, callosal projections connecting the primary visual cortex might also contribute to the seamless integration of the visual scene across the midline. For rapidly moving objects, however, it is unclear how the signal delays introduced by the comparatively long fibers that cross between the hemispheres would be handled. Alternatively, these projections may be more involved with attention, or with more complex effects like binocular rivalry.
It is still not entirely known why the chiasma occasionally fails to develop. The condition can be genetic, but probably also involves factors such as conditions inside the womb. Animal models have demonstrated the effects of various extracellular matrix and cell adhesion molecules on chiasma development. Specifically, axon guidance has been shown to be regulated by the expression of molecules such as NR-CAM, neurofascin, and Vax-1. While a deficiency in any one of these molecules can affect the chiasma, any effects must be considered in the context of a much larger puzzle. A Vax-1 deficiency, for example, can cause complete absence of the chiasma, but it is also accompanied by various other midline anomalies, including problems with the development of the callosum, something not seen here in patient GB.
The source of binocular activation of motion and object-specific areas in GB is also a point of interest. There are many channels through which this activation could occur, including indirect projections from subcortical regions involved in visual processing. Further study of patients like GB, together with more detailed genetic information about them, will help us understand how the visual system develops, and how the visual world integrates within a bilateral mind. Once we can do that, perhaps then we will be able to explain other unique cases, like for example, the woman who sees everything upside down.




Pavlov’s Rats? Rodents Trained to Link Rewards to Visual Cues
In experiments on rats outfitted with tiny goggles, scientists say they have learned that the brain’s initial vision processing center not only relays visual stimuli, but also can “learn” time intervals and create specifically timed expectations of future rewards. The research, by a team at the Johns Hopkins University School of Medicine and the Massachusetts Institute of Technology, sheds new light on learning and memory-making, the investigators say, and could help explain why people with Alzheimer’s disease have trouble remembering recent events. 
Results of the study, in the journal Neuron, suggest that connections within nerve cell networks in the vision-processing center can be strengthened by the neurochemical acetylcholine (ACh), which the brain is thought to secrete after a reward is received. Only nerve cell networks recently stimulated by a flash of light delivered through the goggles are affected by ACh, which in turn allows those nerve networks to associate the visual cue with the reward. Because brain structures are highly conserved in mammals, the findings likely have parallels in humans, they say.
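The proposed mechanism resembles a reward-modulated plasticity rule with an eligibility trace: a stimulus tags recently active synapses, and a later ACh pulse strengthens only the tagged ones. The toy model below is my own sketch of that idea, with arbitrary parameter values, not the authors' model:

```python
def run_trial(weight, stimulated, delay_steps, decay=0.9, lr=0.5):
    """Update one synaptic weight over a cue -> delay -> reward trial."""
    trace = 1.0 if stimulated else 0.0  # the visual cue tags the synapse
    for _ in range(delay_steps):
        trace *= decay                  # the tag fades during the delay
    ach = 1.0                           # reward triggers an ACh pulse
    return weight + lr * ach * trace    # only tagged synapses strengthen

# A synapse driven by the light flash strengthens; an inactive one does not.
assert run_trial(0.0, stimulated=True, delay_steps=3) > 0.0
assert run_trial(0.0, stimulated=False, delay_steps=3) == 0.0
```

Because the trace decays, shorter cue-reward delays produce larger weight changes, which is one way such a rule could come to encode specifically timed reward expectations.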
“We’ve discovered that nerve cells in this part of the brain, the primary visual cortex, seem to be able to develop molecular memories, helping us understand how animals learn to predict rewarding outcomes,” says Marshall Hussain Shuler, Ph.D., assistant professor of neuroscience at the Institute for Basic Biomedical Sciences at the Johns Hopkins University School of Medicine. 
To maximize survival, an animal’s brain has to remember what cues precede a positive or negative event, allowing the animal to alter its behavior to increase rewards and decrease mishaps. In the Hopkins-MIT study, the researchers sought clarity about how the brain links visual information to more complex information about time and reward.
The prevailing theory, Hussain Shuler says, assumed that this connection was made in areas devoted to “high-level” processing, like the frontal cortex, which is known to be important for learning and memory. The primary visual cortex seemed simply to receive information from the eyes and “re-piece” the visual world together before presenting it to the decision-making parts of the brain.


