Neuroscience

Articles and news from the latest research reports.

Posts tagged visual system

Whole-Brain Activity Maps Reveal Stereotyped, Distributed Networks for Visuomotor Behavior
Most behaviors, even simple innate reflexes, are mediated by circuits of neurons spanning areas throughout the brain. However, in most cases, the distribution and dynamics of firing patterns of these neurons during behavior are not known. We imaged activity, with cellular resolution, throughout the whole brains of zebrafish performing the optokinetic response. We found a sparse, broadly distributed network that has an elaborate but ordered pattern, with a bilaterally symmetrical organization. Activity patterns fell into distinct clusters reflecting sensory and motor processing. By correlating neuronal responses with an array of sensory and motor variables, we find that the network can be clearly divided into distinct functional modules. Comparing aligned data from multiple fish, we find that the spatiotemporal activity dynamics and functional organization are highly stereotyped across individuals. These experiments systematically reveal the functional architecture of neural circuits underlying a sensorimotor behavior in a vertebrate brain.
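The approach described in the abstract, correlating each neuron's activity with an array of sensory and motor variables and assigning it to the best-matching one, can be sketched in a few lines. This is a toy simulation with hypothetical regressor names, not the authors' analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_t = 200, 500

# Hypothetical sensory and motor regressors (e.g. stimulus velocity, eye position).
regressors = {
    "stimulus_velocity": np.sin(np.linspace(0, 8 * np.pi, n_t)),
    "eye_position": np.cos(np.linspace(0, 8 * np.pi, n_t)),
}
names = list(regressors)

# Simulated activity: each neuron follows one regressor plus noise.
labels = rng.integers(0, 2, n_neurons)
activity = np.stack([regressors[names[l]] + 0.5 * rng.standard_normal(n_t)
                     for l in labels])

# Assign each neuron to the regressor it correlates with most strongly.
assigned = []
for trace in activity:
    r = {name: np.corrcoef(trace, reg)[0, 1] for name, reg in regressors.items()}
    assigned.append(max(r, key=r.get))

recovered = np.array([names.index(a) for a in assigned])
print((recovered == labels).mean())  # fraction of neurons correctly grouped
```

With well-separated regressors, nearly every simulated neuron lands in its true functional module; the real analysis faces correlated regressors and much noisier calcium signals.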
Full article

Filed under zebrafish whole-brain activity neural activity optokinetic response motor neurons visual system neuroscience science

These Boosts Are Made For Walkin’: Study Reveals that Movement Kicks Visual System into Higher Gear
Whether you’re a Major League outfielder chasing down a hard-hit ball or a lesser mortal navigating a busy city sidewalk, it pays to keep a close watch on your surroundings when walking or running. Now, new research by UC San Francisco neuroscientists suggests that the body may get help in these fast-changing situations from a specialized brain circuit that causes visual system neurons to fire more strongly during locomotion.
There has been a great deal of research on changes among different brain states during sleep, but the new findings, reported in the March 13 issue of Cell, provide a compelling example of a change in state in the awake brain.
It has long been known that nerve cells in the visual system fire more strongly when we pay close attention to objects than when we view scenes more passively. But the new research, led by Yu Fu, PhD, a postdoctoral fellow in the UCSF lab of senior author Michael P. Stryker, PhD, the W.F. Ganong Professor of Physiology, breaks new ground, mapping out a visual system amplifier that is directly activated by walking or running.
Though this circuit has not yet been shown to exist in humans, Stryker is designing experiments to find out if it does. He said he would be surprised if his group did not identify a similar mechanism in people, since such systems have been found in fruit flies, and the mouse visual system has so far proved to be a good model of many aspects of human vision.
“The sense of touch only tells you about objects that are close, and the auditory system is generally not as sensitive as the visual system to the exact position of objects,” he said. “It seems that it would be generally useful to have vision – the sensory modality that tells you the most about things that are far away – work better as you’re moving through the world.”
Stryker said that the neural system identified in the new work may have evolved to conserve energy, by allowing the brain to operate at less than peak efficiency in less demanding behavioral situations. “When you don’t need your visual system to be in a high-gain state, your brain may use a lot less energy in responding,” said Stryker. “A change in gain when you’re moving is ideally what you’d like to see – the neuron is doing the same thing that it’s always doing, but it’s talking louder to the rest of the brain.”
In the new research, mice were allowed to walk or run freely on a Styrofoam ball suspended on an air cushion while the scientists used a technique known as two-photon imaging to monitor the activation of cells in the primary visual area of the brain, known as V1.
The researchers found that a subset of V1 neurons, those that contain a substance called vasoactive intestinal peptide (VIP), were robustly activated in a time-locked fashion purely by locomotion, even in darkness, while other V1 neurons remained largely silent.
The mice were presented with visual stimuli both while motionless and while moving, and measurements showed that walking could increase the response of V1 neurons by more than 30 percent. Moreover, V1 responses to these stimuli increased or declined in tandem with the activity of VIP neurons, and with the starting or stopping of walking by the mice.
To firmly establish that VIP neurons were responsible for these changes, the researchers used optogenetic techniques, inserting light-sensitive proteins exclusively into VIP neurons. Using light to stimulate just this population of cells, the team found that they could emulate the effects of locomotion – when VIP cells were activated, V1 cells responded more strongly to stimuli, regardless of whether the animals were moving. Conversely, when the researchers specifically targeted and disabled VIP cells, locomotion-induced increases in the response of other V1 cells were abolished.
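The gain change described above, the neuron "doing the same thing it's always doing, but talking louder", can be illustrated with a toy multiplicative-gain model. The Gaussian tuning curve and the exact gain factor here are illustrative assumptions; the study reports response increases of more than 30 percent:

```python
import numpy as np

def v1_response(orientation_deg, vip_active, gain=1.3):
    """Toy V1 cell: a Gaussian orientation tuning curve whose output is
    multiplied by a locomotion/VIP gain factor (hypothetical numbers)."""
    tuned = np.exp(-((orientation_deg - 90.0) ** 2) / (2 * 20.0 ** 2))
    return tuned * (gain if vip_active else 1.0)

still = v1_response(90.0, vip_active=False)
running = v1_response(90.0, vip_active=True)
print(running / still)  # 1.3: same tuning, louder output
```

The key property of multiplicative gain is that the ratio is the same at every orientation, so the cell's preference is unchanged; only the volume knob turns up.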

Filed under vision primary visual area vasoactive intestinal peptide neurons visual system neuroscience science

Visual System Can Retain Plasticity, Even After Extended Early Blindness

Deprivation of vision during critical periods of childhood development has long been thought to result in irreversible vision loss. Now, researchers from the Schepens Eye Research Institute/Massachusetts Eye and Ear, Harvard Medical School (HMS) and the Massachusetts Institute of Technology (MIT) have challenged that theory by studying a unique population of pediatric patients who were blind during these critical periods before removal of bilateral cataracts. The researchers found improvement after sight onset in contrast sensitivity tests, which measure basic visual function and have well-understood neural underpinnings. Their results show that the human visual system can retain plasticity beyond critical periods, even after early and extended blindness. Their findings were recently published in the Proceedings of the National Academy of Sciences (PNAS) Early Edition.

Read more

Filed under visual system vision loss plasticity critical period neuroscience science

Motional layers in the brain
Recognising movement and its direction is one of the first and most important processing steps in any visual system. In this way, nearby predators or prey can be detected, and even one’s own movements can be monitored. More than fifty years ago, a mathematical model predicted how elementary motion detectors must be structured in the brain. However, which nerve cells perform this job and how they are actually connected remained a mystery. Scientists at the Max Planck Institute of Neurobiology in Martinsried have now come one crucial step closer to this “holy grail of motion vision”: they identified the cells that serve as these so-called “elementary motion detectors” in the fruit fly brain. The results show that the motion of an observed object is processed in two separate pathways, each of which handles motion information independently and sorts it according to its direction.
Ramón y Cajal, the famous neuroanatomist, was the first to examine the brains of flies. Almost a century ago, he thus discovered a group of cells he described as “curious elements with two tufts”. About 50 years later, German physicist Werner Reichardt postulated from his behavioural experiments with flies that they possess “elementary motion detectors”, as he referred to them. These detectors compare changes in luminance between two neighbouring photoreceptor units, or facets, in the fruit fly’s eye for every point in the visual space. The direction of a local movement is then calculated from this. At least, that is what the theory predicts. Since that time, the fruit fly research community has been speculating about whether these “two-tufted cells” described by Cajal are the mysterious elementary motion detectors.
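The Reichardt scheme sketched above, delaying one receptor's signal and correlating it with its neighbour's, can be written as a minimal correlator. This is the textbook toy model, not the Max Planck group's code:

```python
import numpy as np

def reichardt_detector(left, right, delay=1):
    """Hassenstein-Reichardt correlator: delay each receptor's signal,
    multiply it with the undelayed neighbour, and subtract the
    mirror-symmetric half to obtain a signed direction estimate."""
    d_left = np.roll(left, delay)    # delayed copy of the left receptor
    d_right = np.roll(right, delay)  # delayed copy of the right receptor
    return np.mean(d_left * right - d_right * left)

# A brightness edge moving left-to-right reaches the left receptor first.
t = np.arange(100)
left = (t >= 40).astype(float)
right = (t >= 45).astype(float)

print(reichardt_detector(left, right) > 0)  # rightward motion: positive sign
print(reichardt_detector(right, left) < 0)  # leftward motion: negative sign
```

The sign of the output encodes direction: the delayed signal from the receptor that was stimulated first arrives in register with its neighbour's undelayed signal, so one multiplication dominates the other.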
The answer to this question has been slow in coming, as the tufted cells are extremely small – much too small for inserting an electrode and recording their electrical signals. Now, Alexander Borst and his group at the Max Planck Institute of Neurobiology have achieved a breakthrough with the aid of a calcium indicator. These fluorescent proteins are produced by the neurons themselves and change their fluorescence when the cells are active. It thus finally became possible for the scientists to observe and measure the activity of the tufted cells under the microscope. The results confirm that these cells are indeed the elementary motion detectors predicted by Werner Reichardt.
As further experiments showed, the tufted cells can be divided into two groups. One group (T4 cells) reacts only to motion of dark-to-light edges, while the other group (T5 cells) reacts in the opposite way, only to light-to-dark edges. Each group contains four subgroups, each of which responds only to movement in a specific direction: to the right, left, upwards or downwards. The neurons in these directionally selective groups deliver their information into completely separate layers of downstream nerve tissue. There, large neurons use these signals for visual flight control, for example by generating the appropriate commands for the flight musculature. The scientists demonstrated this convincingly: when they blocked the T4 cells, both the downstream neurons and, in behavioural tests, the flies themselves were blind to moving dark-to-light edges. When the T5 cells were blocked, light-to-dark edges could no longer be perceived.
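The two-pathway split into T4-like (brightening) and T5-like (darkening) channels can be sketched as half-wave rectification of luminance changes. This is a toy illustration of the idea, not a model of the actual fly circuit:

```python
import numpy as np

def split_on_off(luminance):
    """Split a luminance trace into an ON channel (dark-to-light,
    T4-like) and an OFF channel (light-to-dark, T5-like) by half-wave
    rectifying the temporal derivative; a toy sketch of the idea."""
    diff = np.diff(luminance)
    on = np.maximum(diff, 0.0)    # brightening edges only
    off = np.maximum(-diff, 0.0)  # darkening edges only
    return on, off

lum = np.array([0., 0., 1., 1., 0., 0.])  # one bright flash
on, off = split_on_off(lum)
print(on)   # [0. 1. 0. 0. 0.]
print(off)  # [0. 0. 0. 1. 0.]
```

Blocking one channel in this sketch (zeroing `on`, say) leaves the system responsive only to darkening edges, mirroring the selective blindness seen when T4 or T5 cells were silenced.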
In discussions about their research results, which have just been published in the scientific journal Nature, both lead authors, Matt Maisak and Jürgen Haag, were very impressed with the “cleanly differentiated, yet highly ordered” motion information within the brains of the fruit flies. Alexander Borst, head of the study, adds: “That was real teamwork – almost all of the members in my department took part in the experiments. One group carried out the calcium measurements, another worked on the electrophysiology, and a third made the behavioural measurements. They all pulled together. It was a wonderful experience.” And it should continue like this, since the scientists are already turning to the next mammoth challenge: they would now like to identify the neurons that deliver the input signals to the elementary motion detectors. According to Reichardt, the two signals coming from neighbouring photoreceptors in the eye have to be delayed in relation to one another. “That is going to be really exciting!” says Alexander Borst.

Filed under elementary motion detectors fruit flies visual system photoreceptors neuroscience science

Scientists Help Explain Visual System’s Remarkable Ability to Recognize Complex Objects 
How is it possible for a human eye to figure out letters that are twisted and looped in crazy directions, like those in the little security test internet users are often given on websites?
It seems easy to us – the human brain just does it. But the apparent simplicity of this task is an illusion. The task is actually so complex that no one has been able to write computer code that deciphers these distorted letters the way the brain’s neural networks can. That’s why this test, called a CAPTCHA, is used to distinguish a human response from computer bots that try to steal sensitive information.
Now, a team of neuroscientists at the Salk Institute for Biological Studies has taken on the challenge of exploring how the brain accomplishes this remarkable task. Two studies published within days of each other demonstrate how complex a visual task decoding a CAPTCHA, or any image made of simple and intricate elements, actually is to the brain.
The findings of the two studies, published June 19 in Neuron and June 24 in the Proceedings of the National Academy of Sciences (PNAS), take two important steps forward in understanding vision, and rewrite what was believed to be established science. The results show that what neuroscientists thought they knew about one piece of the puzzle was too simple to be true.
Their deep and detailed research – involving recordings from hundreds of neurons – may also have future clinical and practical implications, say the studies’ senior co-authors, Salk neuroscientists Tatyana Sharpee and John Reynolds.
"Understanding how the brain creates a visual image can help humans whose brains are malfunctioning in various different ways – such as people who have lost the ability to see," says Sharpee, an associate professor in the Computational Neurobiology Laboratory. "One way of solving that problem is to figure out how the brain – not the eye, but the cortex – processes information about the world. If you have that code then you can directly stimulate neurons in the cortex and allow people to see."
Reynolds, a professor in the Systems Neurobiology Laboratory, says an indirect benefit of understanding the way the brain works is the possibility of building computer systems that can act like humans.
"The reason that machines are limited in their capacity to recognize things in the world around us is that we don’t really understand how the brain does it as well as it does," he says.
The scientists emphasize that these are long-term goals that they are striving to reach, a step at a time.
Integrating parts into wholes
In these studies, Salk neurobiologists sought to figure out how a part of the visual cortex known as area V4 is able to distinguish between different visual stimuli even as the stimuli move around in space. V4 is responsible for an intermediate step in neural processing of images.
"Neurons in the visual system are sensitive to regions of space – they are like little windows into the world," says Reynolds. "In the earliest stages of processing, these windows – known as receptive fields – are small. They only have access to information within a restricted region of space. Each of these neurons sends brain signals that encode the contents of a little region of space – they respond to tiny, simple elements of an object, such as an edge oriented in space or a little patch of color."
Neurons in V4 have larger receptive fields and can also compute more complex shapes, such as contours. They accomplish this by integrating inputs from earlier visual areas in the cortex – areas nearer the retina, which provides the input to the visual system. Those earlier areas have small receptive fields and send their information on for higher-level processing that allows us to see complex images, such as faces, he says.
Both new studies investigated the issue of translation invariance – the ability of a neuron to recognize the same stimulus no matter where it happens to fall within its receptive field.
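A standard toy model of how pooling can produce translation invariance is a unit that takes the maximum over many position-sensitive subunits. This is illustrative only; the studies discussed here show that real V4 cells deviate from this idealization for complex shapes:

```python
import numpy as np

def simple_cell(image, pos):
    """Position-sensitive subunit: responds only when the stimulus
    falls at one particular location."""
    return image[pos]

def invariant_cell(image):
    """Pooling unit: the max over position-sensitive subunits responds
    wherever the stimulus lands within the receptive field."""
    return image.max()

responses_simple, responses_pooled = [], []
for pos in range(10):            # slide the stimulus across the field
    image = np.zeros(10)
    image[pos] = 1.0             # stimulus at a single position
    responses_simple.append(simple_cell(image, 3))
    responses_pooled.append(invariant_cell(image))

print(responses_simple)  # nonzero only when the stimulus is at position 3
print(responses_pooled)  # constant: invariant to stimulus position
```

The finding below, that cells preferring complex shapes are less invariant, suggests that this pooling is incomplete for the hardest-to-encode stimuli.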
The Neuron paper looked at translation invariance by analyzing the response of 93 individual neurons in V4 to images of lines and shapes like curves, while the PNAS study looked at responses of V4 neurons to natural scenes full of complex contours.
Dogma in the field is that V4 neurons all exhibit translation invariance.
"The accepted understanding is that individual neurons are tuned to recognize the same stimulus no matter where it is in their receptive field," says Sharpee.
For example, a neuron might respond to a bit of the curve in the number 5 in a CAPTCHA image, no matter how the 5 is situated within its receptive field. Researchers believed that neuronal translation invariance – the ability to recognize any stimulus, no matter where it is in space – increases as an image moves up through the visual processing hierarchy.
"But what both studies show is that there is more to the story," she says. "There is a trade-off between the complexity of the stimulus and the degree to which the cell can recognize it as it moves from place to place."
A deeper mystery to be solved
The Salk researchers found that neurons that respond to more complicated shapes – like the curve in a 5 or in a rock – demonstrated decreased translation invariance. “They need that complicated curve to be in a more restricted range for them to detect it and understand its meaning,” Reynolds says. “Cells that prefer that complex shape don’t yet have the capacity to recognize that shape everywhere.”
On the other hand, neurons in V4 tuned to recognize simpler shapes, like a straight line in the number 5, have increased translation invariance. “They don’t care where the stimulus they are tuned to is, as long as it is within their receptive field,” Sharpee says.
"Previous studies of object recognition have assumed that neuronal responses at later stages in visual processing remain the same regardless of basic visual transformations to the object’s image. Our study highlights where this assumption breaks down, and suggests simple mechanisms that could give rise to object selectivity," says Jude Mitchell, a Salk research scientist who was the senior author on the Neuron paper.
"It is important that results from the two studies are quite compatible with one another – that what we find studying just lines and curves in the first experiment matches what we see when the brain experiences the real world," says Sharpee, who is well known for developing a computational method to extract neural responses from natural images.
"What this tells us is that there is a deeper mystery here to be solved," Reynolds says. "We have not figured out how translation invariance is achieved. What we have done is unpacked part of the machinery for achieving integration of parts into wholes."

Scientists Help Explain Visual System’s Remarkable Ability to Recognize Complex Objects

How is it possible for a human eye to figure out letters that are twisted and looped in crazy directions, like those in the little security test internet users are often given on websites?

It seems easy to us——the human brain just does it. But the apparent simplicity of this task is an illusion. The task is actually so complex, no one has been able to write computer code that translates these distorted letters the same way that neural networks can. That’s why this test, called a CAPTCHA, is used to distinguish a human response from computer bots that try to steal sensitive information.

Now, a team of neuroscientists at the Salk Institute for Biological Studies has taken on the challenge of exploring how the brain accomplishes this remarkable task. Two studies published within days of each other demonstrate how complex a visual task decoding a CAPTCHA, or any image made of simple and intricate elements, actually is to the brain.

The findings of the two studies, published June 19 in Neuron and June 24 in the Proceedings of the National Academy of Sciences (PNAS), take two important steps forward in understanding vision, and rewrite what was believed to be established science. The results show that what neuroscientists thought they knew about one piece of the puzzle was too simple to be true.

Their deep and detailed research——involving recordings from hundreds of neurons——may also have future clinical and practical implications, says the study’s senior co-authors, Salk neuroscientists Tatyana Sharpee and John Reynolds.

"Understanding how the brain creates a visual image can help humans whose brains are malfunctioning in various different ways——such as people who have lost the ability to see," says Sharpee, an associate professor in the Computational Neurobiology Laboratory. "One way of solving that problem is to figure out how the brain——not the eye, but the cortex—— processes information about the world. If you have that code then you can directly stimulate neurons in the cortex and allow people to see."

Reynolds, a professor in the Systems Neurobiology Laboratory, says an indirect benefit of understanding the way the brain works is the possibility of building computer systems that can act like humans.

"The reason that machines are limited in their capacity to recognize things in the world around us is that we don’t really understand how the brain does it as well as it does," he says.

The scientists emphasize that these are long-term goals that they are striving to reach, a step at a time.

Integrating parts into wholes

In these studies, Salk neurobiologists sought to figure out how a part of the visual cortex known as area V4 is able to distinguish between different visual stimuli even as the stimuli move around in space. V4 is responsible for an intermediate step in neural processing of images.

"Neurons in the visual system are sensitive to regions of space—— they are like little windows into the world," says Reynolds. "In the earliest stages of processing, these windows ——known as receptive fields——are small. They only have access to information within a restricted region of space. Each of these neurons sends brain signals that encode the contents of a little region of space——they respond to tiny, simple elements of an object such as edge oriented in space, or a little patch of color."

Neurons in V4 have a larger receptive field that can also compute more complex shapes such as contours. They accomplishes this by integrating inputs from earlier visual areas in the cortex——that is, areas nearer the retina, which provides the input to the visual system, which have small receptive fields, and sends on that information for higher level processing that allow us to see complex images, such as faces, he says.

Both new studies investigated the issue of translation invariance—— the ability of a neuron to recognize the same stimulus within its receptive field no matter where it is in space, where it happens to fall within the receptive field.

The Neuron paper looked at translation invariance by analyzing the response of 93 individual neurons in V4 to images of lines and shapes like curves, while the PNAS study looked at responses of V4 neurons to natural scenes full of complex contours.

Dogma in the field is that V4 neurons all exhibit translation invariance.

"The accepted understanding is that individuals neurons are tuned to recognize the same stimulus no matter where it was in their receptive field," says Sharpee.

For example, a neuron might respond to a bit of the curve in the number 5 in a CAPTCHA image, no matter how the 5 is situated within its receptive field. Researchers believed that neuronal translation invariance——the ability to recognize any stimulus, no matter where it is in space——increases as an image moves up through the visual processing hierarchy.

"But what both studies show is that there is more to the story," she says. "There is a trade off between the complexity of the stimulus and the degree to which the cell can recognize it as it moves from place to place."

A deeper mystery to be solved

The Salk researchers found that neurons that respond to more complicated shapes, like the curve in a 5 or in a rock, demonstrated decreased translation invariance. “They need that complicated curve to be in a more restricted range for them to detect it and understand its meaning,” Reynolds says. “Cells that prefer that complex shape don’t yet have the capacity to recognize that shape everywhere.”

On the other hand, neurons in V4 tuned to recognize simpler shapes, like a straight line in the number 5, have increased translation invariance. “They don’t care where the stimulus they are tuned to is, as long as it is within their receptive field,” Sharpee says.

"Previous studies of object recognition have assumed that neuronal responses at later stages in visual processing remain the same regardless of basic visual transformations to the object’s image. Our study highlights where this assumption breaks down, and suggests simple mechanisms that could give rise to object selectivity," says Jude Mitchell, a Salk research scientist who was the senior author on the Neuron paper.

"It is important that results from the two studies are quite compatible with one another, that what we find studying just lines and curves in one first experiment matches what we see when the brain experiences the real world," says Sharpee, who is well known for developing a computational method to extract neural responses from natural images.

"What this tells us is that there is a deeper mystery here to be solved," Reynolds says. "We have not figured out how translation invariance is achieved. What we have done is unpacked part of the machinery for achieving integration of parts into wholes."

Filed under visual system visual stimuli visual cortex neurons neuroscience science

54 notes

Brain Imaging Study Eliminates Differences in Visual Function as a Cause of Dyslexia

A new brain imaging study of dyslexia shows that differences in the visual system do not cause the disorder, but instead are likely a consequence. The findings, published today in the journal Neuron, provide important insights into the cause of this common reading disorder and address a long-standing debate about the role of visual symptoms observed in developmental dyslexia.

Dyslexia is the most prevalent of all learning disabilities, affecting about 12 percent of the U.S. population. Beyond the primarily observed reading deficits, individuals with dyslexia often also exhibit subtle weaknesses in processing visual stimuli. Scientists have long debated whether these deficits represent the primary cause of dyslexia, with visual dysfunction directly impairing the ability to learn to read. The current study demonstrates that they do not.

“Our results do not discount the presence of this specific type of visual deficit,” says senior author Guinevere Eden, PhD, director for the Center for the Study of Learning at Georgetown University Medical Center (GUMC) and past-president of the International Dyslexia Association. “In fact our results confirm that differences do exist in the visual system of children with dyslexia, but these differences are the end-product of less reading, when compared with typical readers, and are not the cause of their struggles with reading.”

The current study follows a report published by Eden and colleagues in the journal Nature in 1996, the first study of dyslexia to employ functional Magnetic Resonance Imaging (fMRI). As in that study, the new study also shows less activity in a portion of the visual system that processes moving visual information in the dyslexics compared with typical readers of the same age.

This time, however, the research team also studied younger children without dyslexia, matched to the dyslexics on their reading level. “This group looked similar to the dyslexics in terms of brain activity, providing the first clue that the observed difference in the dyslexics relative to their peers may have more to do with reading ability than dyslexia per se,” Eden explains.

Next, the children with dyslexia received a reading intervention. Intensive tutoring of phonological and orthographic skills was provided, addressing the core deficit in dyslexia, which is widely believed to be a weakness in the phonological component of language. As expected, the children made significant gains in reading. In addition, activity in the visual system increased, suggesting it was mobilized by reading.

The researchers point out that these findings could have important implications for practice. “Early identification and treatment of dyslexia should not revolve around these deficits in visual processing,” says Olumide Olulade, PhD, the study’s lead author and post-doctoral fellow at GUMC. “While our study showed that there is a strong correlation between people’s reading ability and brain activity in the visual system, it does not mean that training the visual system will result in better reading. We think it is the other way around. Reading is a culturally imposed skill, and neuroscience research has shown that its acquisition results in a range of anatomical and functional changes in the brain.”

The researchers add that their research can be applied more broadly to other disorders. “Our study has important implications in understanding the etiology of dyslexia, but it also is relevant to other conditions where cause and consequence are difficult to pull apart because the brain changes in response to experience,” explains Eden.

(Source: explore.georgetown.edu)

Filed under dyslexia brain activity fMRI brain imaging visual system neuroscience science

134 notes

Rats have a double view of the world
Scientists from the Max Planck Institute for Biological Cybernetics in Tübingen, using miniaturised high-speed cameras and high-speed behavioural tracking, discovered that rats move their eyes in opposite directions in both the horizontal and the vertical plane when running around. Each eye moves in a different direction, depending on the change in the animal’s head position. An analysis of both eyes’ field of view found that the eye movements exclude the possibility that rats fuse the visual information into a single image like humans do. Instead, the eyes move in such a way that enables the space above them to be permanently in view – presumably an adaptation to help them deal with the major threat from predatory birds that rodents face in their natural environment.
Like many mammals, rats have their eyes on the sides of their heads. This gives them a very wide visual field, useful for the detection of predators. However, three-dimensional vision requires overlap of the visual fields of the two eyes. The visual system of these animals thus needs to meet two conflicting demands at the same time: maximum surveillance on the one hand, and detailed binocular vision on the other.
The research team from the Max Planck Institute for Biological Cybernetics have now, for the first time, observed and characterised the eye movements of freely moving rats. They fitted minuscule cameras weighing only about one gram to the animals’ heads, which could record the lightning-fast eye movements with great precision. The scientists also used another new method to measure the position and direction of the head, enabling them to reconstruct the rats’ exact line of view at any given time.
The Max Planck scientists’ findings came as a complete surprise. Although rats process visual information from their eyes through very similar brain pathways to other mammals, their eyes evidently move in a totally different way. “Humans move their eyes in a very stereotypical way for both counteracting head movements and searching around. Both our eyes move together and always follow the same object. In rats, on the other hand, the eyes generally move in opposite directions,” explains Jason Kerr from the Max Planck Institute for Biological Cybernetics.
In a series of behavioural experiments, the neurobiologists also discovered that the eye movements largely depend on the position of the animal’s head. “When the head points downward, the eyes move back, away from the tip of the nose. When the rat lifts its head, the eyes look forward: cross-eyed, so to speak. If the animal puts its head on one side, the eye on the lower side moves up and the other eye moves down,” says Jason Kerr.
In humans, the directions in which the eyes look must be precisely aligned, otherwise an object cannot be fixated. A deviation of less than a single degree of the visual field is enough to cause double vision. In rats, the opposing movements of the left and right eyes mean that the lines of vision vary by as much as 40 degrees in the horizontal plane and up to 60 degrees in the vertical plane. The consequence of these unusual eye movements is that, irrespective of vigorous head movements in all planes, the eyes always move in such a way that the area above the animal is in view of both eyes simultaneously – something that does not occur in any other region of the rat’s visual field.
These unusual eye movements appear to be the visual system’s way of adapting to the animals’ living conditions, given that they are preyed upon by numerous species of birds. Although the observed eye movements prevent the fusion of the two visual fields, the scientists postulate that permanent visibility in the direction of potential airborne attackers dramatically increases the animals’ chances of survival.

Filed under rats eye movements binocular vision double vision visual system neuroscience science

32 notes

Eyes on the prey: Researchers analyse the hunting behaviour of fish larvae in virtual reality
Moving objects attract greater attention – a fact exploited by video screens in public spaces and animated advertising banners on the Internet. For most animal species, moving objects also play a major role in the processing of sensory impressions in the brain, as they often signal welcome prey or an imminent threat. This is also true of the zebrafish larva, which has to react to the movements of its prey. Scientists at the Max Planck Institute for Medical Research in Heidelberg have investigated how the brain uses information from the visual system to execute rapid movements. The animals’ visual system records the movements of the prey so that the brain can redirect the animals’ movements through targeted swim bouts in a matter of milliseconds. Two hitherto unknown types of neurons in the midbrain are involved in the processing of movement stimuli.
In principle, the visual system of zebrafish larvae resembles that of other vertebrates. Moreover, the zebrafish genome has been decoded, it is a small organism, and it has transparent skin, which is easily penetrated by light in the fluorescence microscope. These animals are therefore very suitable for studying visual motion perception. They also display very clear prey capture behaviour. With the help of their finely tuned visual system, they pursue and catch small ciliates. To do this, they execute a series of swimming manoeuvres in a matter of one or two seconds, during which they repeatedly verify the direction and distance of the prey so that they can adapt their subsequent movement steps. The larva’s brain must, therefore, filter and evaluate visual information extremely rapidly so that it can select appropriate motor patterns.
Using high-speed video recordings, researchers working with Johann Bollmann at the Max Planck Institute for Medical Research began by studying the natural course of prey capture by the larvae under a variety of starting conditions. It emerged that the larvae repeatedly execute a basic motion pattern and can apply an orientation component that re-directs the hunter towards the prey with each swim bout. To do this, the larvae must process visual information in just a few hundred milliseconds.
Using an innovative experimental design, the scientists then modelled, in a second step, the natural swimming environment as a “virtual reality”, in which the larvae execute typical prey capture sequences without actually moving. The virtual prey consisted of computer-controlled images, which were projected onto a small screen. In this way, the role of motion parameters, for example the size and speed of the “prey”, could be studied quantitatively in relation to the processing of visual stimuli by the animals.
In the “virtual reality”, the scientists can test how the fish larvae respond to unexpected shifts in the prey after a swim bout. “When we direct our gaze at a target through movements of our eyes and head, we expect the object to appear in a central position in our field of view. In the larvae, very slight deviations from the target position or delays in the re-appearance of the virtual prey increased the reaction times. When it receives unexpected visual feedback, the larva’s brain presumably needs extra processing time to calculate the next swim bout,” explains Johann Bollmann from the Max Planck Institute in Heidelberg.
In addition, with the help of fluorescent microscopes, the researchers can examine the activity of groups of neurons in the larval brain which are likely to control the targeted prey capture movements. In a previous study, they discovered cell types that react specifically to opposing directions of movement. These previously unknown neurons in the dorsal region of the midbrain (tectum) differ in their directional sensitivity and in the structure of their finely branched projections. “It appears that different directions of motion are processed in different layers of the tectum, since the dendritic ramifications of these cell types are spatially separated from each other,” says Bollmann.

Filed under zebrafish prey capture visual system goal-directed behavior motion perception neuroscience science

127 notes

Temporal Processing in the Olfactory System: Can We See a Smell?
Sensory processing circuits in the visual and olfactory systems receive input from complex, rapidly changing environments. Although patterns of light and plumes of odor create different distributions of activity in the retina and olfactory bulb, both structures use what appear, on the surface, to be similar temporal coding strategies to convey information to higher areas in the brain. We compare temporal coding in the early stages of the olfactory and visual systems, highlighting recent progress in understanding the role of time in olfactory coding during active sensing by behaving animals. We also examine studies that address the divergent circuit mechanisms that generate temporal codes in the two systems, and find that they provide physiological information directly related to functional questions raised by the neuroanatomical studies of Ramón y Cajal over a century ago. Consideration of differences in neural activity in sensory systems contributes to generating new approaches to understanding signal processing.

Filed under olfactory system neurons neural activity visual system retina odorants neuroscience science

88 notes

Congenitally absent optic chiasm: Making sense of visual pathways
One way to increase our understanding of bilateral brains, like our own, is to inspect their paired sensory systems. In our visual system, the optic nerves normally combine at a place called the optic chiasm. Here half the fibers from each eye cross over to the opposite hemisphere. When this natural partition fails to develop normally, the system compensates in different ways. In people with albinism, for example, almost all of the fibers fully cross at the chiasm. As a result, images are combined in the brain in such a way that full depth of vision is limited. Their eyes also may move slightly independently of each other, or dart back and forth in a condition known as nystagmus. When the opposite situation occurs, in which the optic nerves do not cross at all during their development, it is called congenital achiasma. An individual with this rare condition was recently studied with different forms of MRI. The results, reported in the journal Neuropsychologia, show that achiasma can occur as an isolated defect, lacking any structural abnormalities in other pathways that cross the midline. The study also demonstrated that the part of the cortex that first receives the visual input, the primary visual cortex, does not rely on information from the opposite side to perform its immediate functions.
When input to the two halves of the brain is parsed according to the eye rather than to the visual field, binocularity is typically affected in some way or another. The eyes may have a slightly crossed configuration, and nystagmus occurs more readily as the visual system updates. The subject of the present study, henceforth known as GB, additionally displayed an eye effect known as seesaw nystagmus. In this type of nystagmus, the eyes alternately move up and down, out of phase with each other. When initial MRI scans failed to show an optic chiasm in patient GB, researchers subsequently verified that it was completely absent by tracing the nerves with diffusion tensor imaging (DTI). The subject was also given a series of tests during a functional MRI scan (fMRI) in order to see how the visual field mapped to his cortex.
By dividing the visual field into four quadrants, and presenting a stimulus to each in turn, the researchers confirmed their suspicions that each hemisphere was mapping the whole visual field. To the level of detail available from the MRI scans, both halves of the visual field, the nasal and temporal retinal maps, were found to overlap completely. The researchers also showed that in the primary visual cortex, monocular stimulation activated only the ipsilateral (same side) cortex. Higher cortical areas, such as the V5 motion-associated area, and the fusiform face region, could be activated binocularly.
The MRI scans further showed that all parts of the corpus callosum, including those that connect the visual cortex, were intact and of normal size. It appears that at the level of V5 and above, the callosum contributes significantly to binocular integration. In a normal brain, with a normal chiasma, callosal projections connecting the primary visual cortex might also contribute to the seamless integration of the visual scene across the midline. For rapidly moving objects, however, it is unclear how the signal delays introduced by the comparatively long fibers that cross between the hemispheres would be handled. Alternatively, these projections may be more involved with attention, or with more complex effects like binocular rivalry.
It is still not entirely known why the chiasma occasionally fails to develop. The condition can be genetic, but probably also involves factors like conditions inside the womb. Animal models have demonstrated the effects of various extracellular matrix and cell adhesion molecules on chiasma development. Specifically, axon guidance has been shown to be regulated by the expression of molecules such as NR-CAM, neurofascin, and Vax-1. While a deficiency in any one of these molecules can have effects on the chiasma, those effects must be considered in the context of a much larger puzzle. A Vax-1 deficiency, for example, can cause complete absence of the chiasma, but it is also accompanied by various other midline anomalies, including problems with the development of the callosum, something not seen here in patient GB.
The source of the binocular activation of motion- and object-specific areas in GB is also a point of interest. There are many channels through which this activation could occur, including indirect projections from subcortical regions involved in visual processing. Further study of patients like GB, together with more detailed genetic information about them, will help us understand how the visual system develops, and how the visual world integrates within a bilateral mind. Once we can do that, perhaps we will be able to explain other unique cases, such as the woman who sees everything upside down.

Congenitally absent optic chiasm: Making sense of visual pathways

One way to increase our understanding of bilateral brains, like our own, is to inspect their paired sensory systems. In our visual system, the optic nerves normally combine at a place called the optic chiasm. Here half the fibers from each eye cross over to the opposite hemisphere. When this natural partition fails to develop normally, the system compensates in different ways. In people with albinism, for example, almost all of the fibers fully cross at the chiasm. As a result, images are combined in the brain in such a way that full depth of vision is limited. Their eyes also may move slightly independent of each other, or dart back and forth in a condition known as nystagmus. When the opposite situation occurs, that in which the optic nerves do not cross at all during their development, it is called congenital achiasma. An individual with this rare condition was recently studied with different forms MRI. The results, reported in the journal Neuropsychologia, show that achiasma can occur as an isolated defect, lacking any structural abnormalities in other pathways that cross the midline. The study also demonstrated that the part of the cortex that first receives the visual input, the primary visual cortex, does not rely on information from the opposite side to perform its immediate functions.

When input to the two halves of the brain is parsed according to the eye rather than to the visual field, binocularity is typically affected in some way or another. The eyes may have a slightly crossed configuration, and nystagmus occurs more readily as the visual system updates. The subject of the present study, henceforth known as GB, additionally displayed an eye effect known as seesaw nystagmus. In this type of nystagmus, the eyes alternately move up and down, out of phase with each other. When initial MRI scans failed to show an optic chiasm in patient GB, researchers subsequently verified that it was completely absent by tracing the nerves with diffusion tensor imaging (DTI). The subject was also given a series of tests during a functional MRI scan (fMRI) in order to see how the visual field mapped to his cortex.

By dividing the visual field into four quadrants and presenting a stimulus to each in turn, the researchers confirmed their suspicion that each hemisphere was mapping the whole visual field. To the level of detail available from the MRI scans, the two halves of the visual field, represented by the nasal and temporal retinal maps, were found to overlap completely. The researchers also showed that in the primary visual cortex, monocular stimulation activated only the ipsilateral (same-side) cortex. Higher cortical areas, such as the motion-sensitive area V5 and the fusiform face area, could be activated binocularly.

The MRI scans further showed that all parts of the corpus callosum, including those that connect the visual cortices, were intact and of normal size. It appears that at the level of V5 and above, the callosum contributes significantly to binocular integration. In a normal brain, with a normal chiasm, callosal projections connecting the primary visual cortices might also contribute to the seamless integration of the visual scene across the midline. For rapidly moving objects, however, it is unclear how the visual system would handle the signal delays introduced by the comparatively long fibers that cross between the hemispheres. Alternatively, these projections may be more involved with attention, or with more complex effects like binocular rivalry.

It is still not entirely known why the chiasm occasionally fails to develop. The condition can be genetic, but it probably also involves factors such as conditions inside the womb. Animal models have demonstrated the effects of various extracellular matrix and cell adhesion molecules on chiasm development. Specifically, axon guidance has been shown to be regulated by the expression of molecules such as NR-CAM, neurofascin, and Vax-1. While a deficiency in any one of these molecules can affect the chiasm, such effects must be considered in the context of a much larger puzzle. A Vax-1 deficiency, for example, can cause complete absence of the chiasm, but it is also accompanied by various other midline anomalies, including problems with development of the callosum, something not seen here in patient GB.

The source of the binocular activation of motion- and object-specific areas in GB is also a point of interest. There are many channels through which this activation could occur, including indirect projections from subcortical regions involved in visual processing. Further study of patients like GB, together with more detailed genetic information about them, will help us understand how the visual system develops and how the visual world is integrated within a bilateral mind. Once we can do that, perhaps we will be able to explain other unique cases, such as the woman who sees everything upside down.

Filed under visual system optic nerves congenital achiasma primary visual cortex neuroscience science
