Neuroscience

Articles and news from the latest research reports.

Posts tagged visual cortex

165 notes

Neurons subtract images and use the differences
Efficient reduction of data volumes
Researchers have hitherto assumed that information supplied by the sense of sight was transmitted almost in its entirety from its entry point to higher brain areas, where visual perception is generated. “It was therefore a surprise to discover that the data volumes are considerably reduced as early as in the primary visual cortex, the bottleneck leading to the cerebrum,” says PD Dr Dirk Jancke from the Institute for Neural Computation at the Ruhr-Universität. “We intuitively assume that our visual system generates a continuous stream of images, just like a video camera. However, we have now demonstrated that the visual cortex suppresses redundant information and saves energy by frequently forwarding image differences.”
Plus or minus: the brain’s two coding strategies
The researchers recorded the neurons’ responses to natural image sequences, for example vegetation landscapes or buildings. They created two versions of the images: a complete one and one in which they had systematically removed certain elements, specifically vertical or horizontal contours. If the time elapsing between the individual images was short, e.g. 30 milliseconds, the neurons represented complete image information. That changed when the time elapsing in the sequences was longer than 100 milliseconds. Now, the neurons represented only those elements that were new or missing, namely image differences. “When we analyse a scene, the eyes perform very fast miniature movements in order to register the fine details,” explains Nora Nortmann, postgraduate student at the Institute of Cognitive Science at the University of Osnabrück and the RUB work group Optical Imaging. The information regarding those details is forwarded completely and immediately by the primary visual cortex. “If, on the other hand, the time elapsing between the gaze changes is longer, the cortex codes only those aspects in the images that have changed,” continues Nora Nortmann. Thus, certain image sections stand out and interesting spots are easier to detect, as the researchers speculate.
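The two coding regimes described above can be illustrated with a toy frame-differencing scheme. This is a hypothetical sketch, not the authors’ model; the 100-millisecond threshold comes from the article, but the function and its names are assumptions made for illustration:

```python
import numpy as np

def encode_sequence(frames, interval_ms, threshold_ms=100):
    """Toy sketch of the two coding regimes described above.

    For short inter-frame intervals the 'cortex' forwards each frame in
    full; for long intervals it forwards only the difference between
    successive frames. Purely illustrative.
    """
    outputs = [frames[0]]                 # first frame is always sent in full
    for prev, cur in zip(frames, frames[1:]):
        if interval_ms <= threshold_ms:
            outputs.append(cur)           # complete image information
        else:
            outputs.append(cur - prev)    # only what changed (image difference)
    return outputs
```

With a long interval, a mostly static scene produces near-zero difference frames, which is the sense in which redundant information is suppressed.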
“Our brain is permanently looking into the future”
This study illustrates how activities of visual neurons are influenced by past events. “The neurons build up a short-term memory that incorporates constant input,” explains Dirk Jancke. However, if something changes abruptly in the perceived image, the brain generates a kind of error message on the basis of the past images. Those signals do not reflect the current input, but the way the current input deviates from the expectations. Researchers have hitherto postulated that this so-called predictive coding only takes place in higher brain areas. “We demonstrated that the principle applies for earlier phases of cortical processing, too,” concludes Jancke. “Our brain is permanently looking into the future and comparing current input with the expectations that arose based on past situations.”
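The predictive-coding idea in the paragraph above can be sketched as follows: a short-term memory of past input serves as the prediction, and the forwarded signal is the deviation of the current input from that prediction. The leaky-average memory and the decay parameter are illustrative assumptions, not the published model:

```python
import numpy as np

def predictive_code(inputs, memory_decay=0.8):
    """Minimal predictive-coding sketch: the 'prediction' is a leaky
    running average of past inputs (a short-term memory), and the
    forwarded signal is the error, i.e. how the current input deviates
    from expectation. Parameter values are illustrative."""
    prediction = np.zeros_like(inputs[0], dtype=float)
    errors = []
    for x in inputs:
        errors.append(x - prediction)    # deviation from expectation
        prediction = memory_decay * prediction + (1 - memory_decay) * x
    return errors
```

Under constant input the error signal fades toward zero, while an abrupt change produces a large error — the “kind of error message” described above.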
Observing brain activities in millisecond range
In order to monitor the dynamics of neuronal activities in the brain in the millisecond range, the scientists used voltage-dependent dyes. Those substances fluoresce when neurons receive electrical impulses and become active. Thanks to a high-resolution camera system and the subsequent computer-aided analysis, the neuronal activity can be measured across a surface of several square millimetres. The result is a temporally and spatially precise film of transmission processes within neuronal networks.
Bibliographic record
N. Nortmann, S. Rekauzke, S. Onat, P. König, D. Jancke (2013): Primary visual cortex represents the difference between past and present. Cerebral Cortex.


Filed under neurons neural activity visual cortex image processing predictive coding neuroscience science

307 notes

A critical theory in brain development
Experiments performed in the 1960s showed that rearing young animals with one eye closed dramatically altered brain development such that the parts of the visual cortex that would normally process information from the closed eye instead process information from the open eye. These effects can be induced only within a specific period of time—a ‘critical’ period during which the developing nervous system is particularly sensitive to its environment. 
Subsequent work has shown that the onset of the critical period in the primary visual cortex requires the maturation of circuits containing neurons that synthesize and release an inhibitory neurotransmitter called gamma-aminobutyric acid (GABA). Now, Taro Toyoizumi and colleagues from the RIKEN Brain Science Institute have presented a theory that explains how this inhibition triggers the critical period.
The theory is based on a computer model of the primary visual cortex containing neurons that receive and process information from the eyes. The model incorporates spontaneous and visually evoked neuronal activity as reported in earlier studies. The simulation also incorporates an activity-dependent form of synaptic plasticity—the process by which connections between neurons are strengthened or weakened in response to neuronal activity. 
During early development, spontaneous activity accounts for the majority of activity in the primary visual cortex. With time, however, spontaneous neuronal activity decreases whereas activity evoked by visual experiences increases. The new theory states that the critical period is triggered by the maturation of inhibitory neuronal circuitry, which suppresses the spontaneous activity in the primary visual cortex relative to the activity driven by incoming visual information.
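The trigger condition in the theory above can be sketched as a simple predicate: the critical period opens once inhibition suppresses spontaneous activity sufficiently relative to visually evoked activity. The divisive form of inhibition and the threshold value are illustrative assumptions, not the published model:

```python
def critical_period_open(spontaneous, evoked, inhibition, ratio_threshold=1.0):
    """Sketch of the trigger condition described above: inhibition
    suppresses spontaneous activity, and the critical period opens when
    the spontaneous-to-evoked ratio drops below a threshold. The
    divisive inhibition and threshold value are illustrative."""
    effective_spontaneous = spontaneous / (1.0 + inhibition)
    return effective_spontaneous / evoked < ratio_threshold
```

In this toy form, weak inhibition (as in the GAD65-deficient mice below) leaves the ratio too high and the critical period never opens, while enhancing inhibition (as diazepam does) opens it.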
The researchers turned to mice to find evidence to support the theory. Using electrodes to record primary visual cortex activity in freely moving mice, they showed, as predicted by the theory, that the anti-anxiety drug diazepam, which enhances inhibitory activity, lowered the ratio of spontaneous to visually evoked activity in mutant mice with weak inhibition—those lacking the gene encoding glutamic acid decarboxylase-65, an enzyme for synthesizing GABA. Such mice are known not to enter the critical period even in adulthood, but can be induced to do so by administration of diazepam.
Importantly, the theory explains distinct experience-dependent plasticity that takes place before the onset of the critical period, which has been observed in experiments but not explained by other theories. “In the future,” says Toyoizumi, “it would be useful to be able to control developmental plasticity stages by manipulating spontaneous activity in specific brain areas, which could have therapeutic applications.”


Filed under brain development synaptic plasticity neurotransmitters visual cortex vision neurons neuroscience science

181 notes

New Theory of Synapse Formation in the Brain
The human brain keeps changing throughout a person’s lifetime. New connections are continually created while synapses that are no longer in use degenerate. To date, little is known about the mechanisms behind these processes. Jülich neuroinformatician Dr. Markus Butz has now been able to ascribe the formation of new neural networks in the visual cortex to a simple homeostatic rule that is also the basis of many other self-regulating processes in nature. With this explanation, he and his colleague Dr. Arjen van Ooyen from Amsterdam also provide a new theory on the plasticity of the brain – and a novel approach to understanding learning processes and treating brain injuries and diseases.
The brains of adult humans are by no means hard wired. Scientists have repeatedly established this fact over the last few years using different imaging techniques. This so-called neuroplasticity not only plays a key role in learning processes, it also enables the brain to recover from injuries and compensate for the loss of functions. Researchers only recently found out that even in the adult brain, not only do existing synapses adapt to new circumstances, but new connections are constantly formed and reorganized. However, it was not yet known how these natural rearrangement processes are controlled in the brain. In the open-access journal PLOS Computational Biology, Butz and van Ooyen now present a simple rule that explains how these new networks of neurons are formed.
"It’s very likely that the structural plasticity of the brain is the basis for long-term memory formation," says Markus Butz, who has been working at the recently established Simulation Laboratory Neuroscience at the Jülich Supercomputing Centre for the past few months. "And it’s not just about learning. Following the amputation of extremities, brain injury, the onset of neurodegenerative diseases, and strokes, huge numbers of new synapses are formed in order to adapt the brain to the lasting changes in the patterns of incoming stimuli."
Activity regulates synapse formation
These results show that the formation of new synapses is driven by the tendency of neurons to maintain a ‘pre-set’ electrical activity level. If the average electrical activity falls below a certain threshold, the neurons begin to actively build new contact points. These are the basis for new synapses that deliver additional input – the neuron firing rate increases. This also works the other way round: as soon as the activity level exceeds an upper limit, the number of synaptic connections is reduced to prevent any overexcitation – the neuron firing rate falls. Similar forms of homeostasis frequently occur in nature, for example in the regulation of body temperature and blood sugar levels.
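In its simplest form, the homeostatic growth rule described above can be sketched as: grow contact points when activity is below the set point, prune when above. The set point, tolerance band, and unit step size are illustrative assumptions, not the values from the paper:

```python
def update_synapse_count(n_synapses, firing_rate, setpoint=5.0, band=1.0):
    """Homeostatic structural-plasticity rule as described above, in its
    simplest form. All numeric values are illustrative assumptions."""
    if firing_rate < setpoint - band:
        return n_synapses + 1          # too quiet: build new contact points
    if firing_rate > setpoint + band:
        return max(n_synapses - 1, 0)  # too active: prune to avoid overexcitation
    return n_synapses                  # within the band: stable
```

Iterating this rule drives each neuron’s input, and hence its firing rate, back toward the set point — the same negative-feedback logic as thermostatic temperature regulation.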
However, Markus Butz stresses that this does not work without a certain minimal excitation of the neurons: “A neuron that no longer receives any stimuli loses even more synapses and will die off after some time. We must take this restriction into account if we want the results of our simulations to agree with observations.” Using the visual cortex as an example, the neuroscientists have studied the principles according to which neurons form new connections and abandon existing synapses. In this region of the brain, about 10% of the synapses are continuously regenerated. When the retina is damaged, this percentage increases even further. Using computer simulations, the authors succeeded in reconstructing the reorganization of the neurons in a way that conforms to experimental results from the visual cortex of mice and monkeys with damaged retinas.
The visual cortex is particularly suitable for demonstrating the new growth rule, because it has a property referred to as retinotopy: This means that points projected beside each other onto the retina are also arranged beside each other when they are projected onto the visual cortex, just like on a map. If areas of the retina are damaged, the cells onto which the associated images are projected receive different inputs. “In our simulations, you can see that areas which no longer receive any input from the retina start to build crosslinks, which allow them to receive more signals from their neighbouring cells,” says Markus Butz. These crosslinks are formed slowly from the edge of the damaged area towards the centre, in a process resembling the healing of a wound, until the original activity level is more or less restored.
Synaptic and structural plasticity
"The new growth rule provides structural plasticity with a principle that is almost as simple as that of synaptic plasticity," says co-author Arjen van Ooyen, who has been working on models for the development of neural networks for decades. As early as 1949, psychology professor Donald Olding Hebb discovered that connections between neurons that are frequently activated will become stronger. Those that exchange little information will become weaker. Today, many scientists believe that this Hebbian principle plays a central role in learning and memory processes. While synaptic plasticity is involved primarily in short-term processes that take from a few milliseconds to several hours, structural plasticity extends over longer time scales, from several days to months.
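Hebb’s principle, as stated above, is commonly written as a weight update in which co-active neurons strengthen their connection while little-used connections slowly decay. The learning and decay rates here are illustrative assumptions:

```python
def hebbian_update(w, pre, post, lr=0.1, decay=0.01):
    """Hebb's principle as stated above: a connection strengthens when
    the pre- and postsynaptic neurons are active together, and slowly
    weakens when they exchange little activity. Rates are illustrative."""
    return w + lr * pre * post - decay * w
```

This contrasts with the homeostatic rule above: Hebbian plasticity adjusts the *strength* of existing synapses on fast time scales, whereas the structural rule adds or removes synapses over days to months.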
Structural plasticity therefore plays a particularly important part during the (early) rehabilitation phase of patients affected by neurological diseases, which also lasts for weeks and months. The vision driving the project is that valuable ideas for the treatment of stroke patients could result from accurate predictions of synapse formation. If doctors knew how the brain structure of a patient will change and reorganize during treatment, they could determine the ideal times for phases of stimulation and rest, thus improving treatment efficiency.
New approach for numerous applications
"It was previously assumed that structural plasticity also follows the principle of Hebbian plasticity. The findings suggest that structural plasticity is governed by the homeostatic principle instead, which was not taken into consideration before," says Prof. Abigail Morrison, head of the Simulation Laboratory Neuroscience at Jülich. Her team is already integrating the new rule into the freely accessible simulation software NEST, which is used by numerous scientists worldwide.
These findings are also of relevance for the Human Brain Project. Neuroscientists, medical scientists, computer scientists, physicists, and mathematicians in Europe are working hand in hand to simulate the entire human brain on high-performance computers of the next generation in order to better understand how it functions. “Due to the complex synaptic circuitry in the human brain, it’s not plausible that its fault tolerance and flexibility are achieved based on static connection rules. Models are therefore required for a self-organization process,” says Prof. Markus Diesmann from Jülich’s Institute of Neuroscience and Medicine, who is involved in the project. He heads Computational and Systems Neuroscience (INM-6), a subinstitute working at the interface between neuroscientific research and simulation technology.


Filed under synapses synapse formation neuroplasticity brain injury visual cortex neuroscience science

77 notes

Finding the place where the brain creates illusory shapes and surfaces

The logo of the 1984 Los Angeles Olympics includes red, white and blue stars, but the white star is not really there: It is an illusion. Similarly, the “S” in the USA Network logo is wholly illusory.

Both of these logos take advantage of a common perceptual illusion where the brain, when viewing a fragmented background, frequently sees shapes and surfaces that don’t really exist.

“It’s hallucinating without taking drugs,” said Alexander Maier, assistant professor of psychology at Vanderbilt University, who headed a team of neuroscientists that has pinpointed the area of the brain that is responsible for these “illusory contours.”

In the Sept. 30 online early edition of the Proceedings of the National Academy of Sciences, Maier’s team reported that they have discovered groups of neurons in a region of the visual cortex called V4 that fire when an individual is viewing a pattern that produces such an illusion and remain quiescent when viewing an almost identical pattern that doesn’t.

Studies have shown that a diverse range of species, including monkeys, cats, owls, goldfish and even honeybees, perceive these illusory contours. This has led scientists to propose that the illusions are a byproduct of mechanisms the brain has evolved to spot predators or prey hiding in the bushes, a capability with considerable survival value.

Although scientists discovered illusory contours more than a century ago, it is only in the last 30 years that they have begun studying them because they reveal the internal mechanisms that the brain uses to interpret sensory input.

The gold square marks the location in the V4 region of a macaque’s visual cortex, where the neurons respond to visual contours. (Alex Maier, Donna Pritchett / Vanderbilt)

In mammals, visual stimuli are processed at the back of the brain in an area called the visual cortex. Efforts to map this area have found that it is made up of five different regions (labeled V1 to V5).

The primary visual cortex, V1, takes the stimuli coming from the eyes and sorts them by a variety of basic properties, including orientation, color and spatial variation. It also splits the information into two pathways, called the dorsal and ventral streams.

From V1, both streams are routed to the second major area of the visual cortex. V2 performs many of the same functions as V1 but adds some more complex processing, such as recognizing the disparities in the signals coming from the two eyes that produce binocular vision.

From V2, one pathway, sometimes called the “Where Pathway,” goes to V5 and is associated with object location and motion detection. The other pathway, sometimes called the “What Pathway,” goes to V4 and is associated with object representation and form recognition.

“Studies have shown that V4 is involved in both object recognition and visual attention, so we thought it might also be involved with illusory contours,” said Michele Cox, the Vanderbilt graduate student who is first author on the study.

A Kanizsa square (Courtesy of D. Alan Stubbs, University of Maine)

First, the researchers searched for the neurons in V4 that were associated with different locations in the retinas of macaque monkeys. Once these maps were complete, they rewarded the monkeys for staring at a screen containing an example of an illusory contour called a Kanizsa square. This consists of four “Pac-Man” figures with their “mouths” oriented to form the corners of a square. When black Pac-Men are placed on a white background, the brain creates a bright white square connecting them.

While the monkeys were looking at the Kanizsa square, the researchers discovered that the neurons that represented the area in the middle of the Pac-Men, the area covered by the illusory square, began firing. However, when the monkeys viewed the same four Pac-Men with their mouths facing outward – an orientation that doesn’t produce the illusion – these central neurons remained silent.

“Basically, the brain is acting like a detective,” said Maier. “It is responding to cues in the environment and making its best guesses about how they fit together. In the case of these illusions, however, it comes to an incorrect conclusion.”


Two graphs show the activity of neurons in V4 associated with the position of the illusory Kanizsa square. The percentage of neurons firing more than doubles when the monkey views Pac-Men with their mouths facing inward to produce the illusion (top) compared to their activity level when the monkey is viewing Pac-Men with their mouths facing outward (bottom). (Michele Cox and Alex Maier / Vanderbilt)

(Source: news.vanderbilt.edu)

Filed under illusion illusory contours visual cortex neurons neuroscience science

180 notes

Researchers discover how inhibitory neurons behave during critical periods of learning
We’ve all heard the saying “you can’t teach an old dog new tricks.” Now neuroscientists are beginning to explain the science behind the adage.
For years, neuroscientists have struggled to understand how the microcircuitry of the brain makes learning easier for the young, and more difficult for the old. New findings published in the journal Nature by Carnegie Mellon University, the University of California, Los Angeles and the University of California, Irvine show how one component of the brain’s circuitry — inhibitory neurons — behaves during critical periods of learning.
The brain’s circuitry is built from two types of neurons — inhibitory and excitatory. Networks of these two kinds of neurons are responsible for processing sensory information like images, sounds and smells, and for cognitive functioning. About 80 percent of neurons are excitatory. Traditional scientific tools only allowed scientists to study the excitatory neurons.
"We knew from previous studies that excitatory cells propagate information. We also knew that inhibitory neurons played a critical role in setting up heightened plasticity in the young, but ideas about what exactly those cells were doing were controversial. Since we couldn’t study the cells, we could only hypothesize how they were behaving during critical learning periods," said Sandra J. Kuhlman, assistant professor of biological sciences at Carnegie Mellon and member of the joint Carnegie Mellon/University of Pittsburgh Center for the Neural Basis of Cognition.
The prevailing theory on inhibitory neurons was that, as they mature, they reach an increased level of activity that fosters optimal periods of learning. But as the brain ages into adulthood and the inhibitory neurons continue to mature, they become even stronger to the point where they impede learning.
Newly developed genetic and imaging technologies are now allowing researchers to visualize inhibitory neurons in the brain and record their activity in response to a variety of stimuli. As a postdoctoral researcher at UCLA in the laboratory of Associate Professor of Neurobiology Joshua T. Trachtenberg, Kuhlman and her colleagues used these new techniques to record the activity of inhibitory neurons during critical learning periods. They found that, during heightened periods of learning, the inhibitory neurons didn’t fire more as had been expected. They fired much less frequently — up to half as often.
"When you’re young you haven’t experienced much, so your brain needs to be a sponge that soaks up all types of information. It seems that the brain turns off the inhibitory cells in order to allow this to happen," Kuhlman said. "As adults we’ve already learned a great number of things, so our brains don’t necessarily need to soak up every piece of information. This doesn’t mean that adults can’t learn, it just means when they learn, their neurons need to behave differently."


Filed under inhibitory neurons learning cognitive functioning plasticity visual cortex neuroscience science

85 notes

Scientists Help Explain Visual System’s Remarkable Ability to Recognize Complex Objects 
How is it possible for the human visual system to figure out letters that are twisted and looped in crazy directions, like those in the little security test internet users are often given on websites?
It seems easy to us — the human brain just does it. But the apparent simplicity of this task is an illusion. The task is actually so complex that no one has been able to write computer code that translates these distorted letters the way the brain’s neural networks can. That’s why this test, called a CAPTCHA, is used to distinguish a human response from computer bots that try to steal sensitive information.
Now, a team of neuroscientists at the Salk Institute for Biological Studies has taken on the challenge of exploring how the brain accomplishes this remarkable task. Two studies published within days of each other demonstrate how complex a visual task decoding a CAPTCHA, or any image made of simple and intricate elements, actually is to the brain.
The findings of the two studies, published June 19 in Neuron and June 24 in the Proceedings of the National Academy of Sciences (PNAS), take two important steps forward in understanding vision, and rewrite what was believed to be established science. The results show that what neuroscientists thought they knew about one piece of the puzzle was too simple to be true.
Their deep and detailed research — involving recordings from hundreds of neurons — may also have future clinical and practical implications, say the studies’ senior co-authors, Salk neuroscientists Tatyana Sharpee and John Reynolds.
"Understanding how the brain creates a visual image can help humans whose brains are malfunctioning in various different ways — such as people who have lost the ability to see," says Sharpee, an associate professor in the Computational Neurobiology Laboratory. "One way of solving that problem is to figure out how the brain — not the eye, but the cortex — processes information about the world. If you have that code then you can directly stimulate neurons in the cortex and allow people to see."
Reynolds, a professor in the Systems Neurobiology Laboratory, says an indirect benefit of understanding the way the brain works is the possibility of building computer systems that can act like humans.
"The reason that machines are limited in their capacity to recognize things in the world around us is that we don’t really understand how the brain does it as well as it does," he says.
The scientists emphasize that these are long-term goals that they are striving to reach, a step at a time.
Integrating parts into wholes
In these studies, Salk neurobiologists sought to figure out how a part of the visual cortex known as area V4 is able to distinguish between different visual stimuli even as the stimuli move around in space. V4 is responsible for an intermediate step in neural processing of images.
"Neurons in the visual system are sensitive to regions of space — they are like little windows into the world," says Reynolds. "In the earliest stages of processing, these windows — known as receptive fields — are small. They only have access to information within a restricted region of space. Each of these neurons sends brain signals that encode the contents of a little region of space — they respond to tiny, simple elements of an object, such as an edge oriented in space or a little patch of color."
Neurons in V4 have a larger receptive field and can also compute more complex shapes, such as contours. They accomplish this by integrating inputs from earlier visual areas in the cortex — areas nearer the retina, which provides the input to the visual system. These earlier areas have small receptive fields and send their information on for the higher-level processing that allows us to see complex images, such as faces, he says.
Both new studies investigated the issue of translation invariance — the ability of a neuron to recognize the same stimulus no matter where it happens to fall within the neuron’s receptive field.
The Neuron paper looked at translation invariance by analyzing the response of 93 individual neurons in V4 to images of lines and shapes like curves, while the PNAS study looked at responses of V4 neurons to natural scenes full of complex contours.
Dogma in the field is that V4 neurons all exhibit translation invariance.
"The accepted understanding is that individual neurons are tuned to recognize the same stimulus no matter where it is in their receptive field," says Sharpee.
For example, a neuron might respond to a bit of the curve in the number 5 in a CAPTCHA image, no matter how the 5 is situated within its receptive field. Researchers believed that neuronal translation invariance — the ability to recognize any stimulus, no matter where it is in space — increases as an image moves up through the visual processing hierarchy.
"But what both studies show is that there is more to the story," she says. "There is a trade-off between the complexity of the stimulus and the degree to which the cell can recognize it as it moves from place to place."
A deeper mystery to be solved
The Salk researchers found that neurons that respond to more complicated shapes — like the curve in a 5, or a contour in a rock — demonstrated decreased translation invariance. “They need that complicated curve to be in a more restricted range for them to detect it and understand its meaning,” Reynolds says. “Cells that prefer that complex shape don’t yet have the capacity to recognize that shape everywhere.”
On the other hand, neurons in V4 tuned to recognize simpler shapes, like a straight line in the number 5, have increased translation invariance. “They don’t care where the stimulus they are tuned to is, as long as it is within their receptive field,” Sharpee says.
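The trade-off can be caricatured in a few lines. A made-up toy model (not the Salk analysis): two detectors share one preferred feature and differ only in how widely they pool a sliding template match across their receptive field:

```python
# Toy sketch of the invariance trade-off (illustrative, not the study's
# analysis). Both detectors prefer the same feature; they differ only in
# how broadly they pool a sliding template match over the receptive field.

def pooled_response(signal, template, start, stop):
    """Max template correlation over windows starting in [start, stop)."""
    return max(
        sum(a * b for a, b in zip(signal[i:i + len(template)], template))
        for i in range(start, stop)
    )

template = [1.0, 1.0, 1.0]        # the detectors' preferred feature
signal = [0.0] * 10
signal[7:10] = template           # feature placed near the field's edge

# "Simple" detector: pools everywhere -> translation invariant, fires fully.
broad = pooled_response(signal, template, 0, 8)      # -> 3.0

# "Complex" detector: pools only a central zone -> misses the same feature.
narrow = pooled_response(signal, template, 3, 6)     # -> 1.0
```

The point of the toy is only that pooling breadth, not the feature itself, sets how far a detector's response survives translation.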
"Previous studies of object recognition have assumed that neuronal responses at later stages in visual processing remain the same regardless of basic visual transformations to the object’s image. Our study highlights where this assumption breaks down, and suggests simple mechanisms that could give rise to object selectivity," says Jude Mitchell, a Salk research scientist who was the senior author on the Neuron paper.
"It is important that results from the two studies are quite compatible with one another, that what we find studying just lines and curves in the first experiment matches what we see when the brain experiences the real world," says Sharpee, who is well known for developing a computational method to extract neural responses from natural images.
"What this tells us is that there is a deeper mystery here to be solved," Reynolds says. "We have not figured out how translation invariance is achieved. What we have done is unpacked part of the machinery for achieving integration of parts into wholes."


Filed under visual system visual stimuli visual cortex neurons neuroscience science

69 notes

Hit a 95 mph baseball? Scientists pinpoint how we see it coming

How does San Francisco Giants slugger Pablo Sandoval swat a 95 mph fastball, or tennis icon Venus Williams see the oncoming ball, let alone return her sister Serena’s 120 mph serves? For the first time, vision scientists at the University of California, Berkeley, have pinpointed how the brain tracks fast-moving objects.

The discovery advances our understanding of how humans predict the trajectory of moving objects when it can take one-tenth of a second for the brain to process what the eye sees.


That 100-millisecond holdup means that in real time, a tennis ball moving at 120 mph would have already advanced 15 feet before the brain registers the ball’s location. If our brains couldn’t make up for this visual processing delay, we’d be constantly hit by balls, cars and more.
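The arithmetic behind that delay is easy to check. A back-of-the-envelope sketch (the function and values are illustrative, not from the study):

```python
# Back-of-the-envelope sketch (assumed values, not the study's numbers):
# how far a ball travels during a ~100 ms visual-processing delay.

def distance_during_delay(speed_mph, delay_s):
    """Distance in feet covered at speed_mph during delay_s seconds."""
    feet_per_second = speed_mph * 5280 / 3600   # miles/hour -> feet/second
    return feet_per_second * delay_s

serve = distance_during_delay(120, 0.100)      # 120 mph serve: ~17.6 ft
fastball = distance_during_delay(95, 0.100)    # 95 mph fastball: ~13.9 ft
```

Straight-line extrapolation at 120 mph actually gives closer to 18 feet, so the 15-foot figure quoted above reads as a conservative round number.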

Thankfully, the brain “pushes” forward moving objects so we perceive them as further along in their trajectory than the eye can see, researchers said.

“For the first time, we can see this sophisticated prediction mechanism at work in the human brain,” said Gerrit Maus, a postdoctoral fellow in psychology at UC Berkeley and lead author of the paper published today (May 8) in the journal Neuron.

A clearer understanding of how the brain processes visual input – in this case life in motion – can eventually help in diagnosing and treating myriad disorders, including those that impair motion perception. People who cannot perceive motion cannot predict locations of objects and therefore cannot perform tasks as simple as pouring a cup of coffee or crossing a road, researchers said.

This study is also likely to have a major impact on other studies of the brain. Its findings come just as the Obama Administration initiates its push to create a Brain Activity Map Initiative, which will further pave the way for scientists to create a roadmap of human brain circuits, as was done for the Human Genome Project.

Using functional magnetic resonance imaging (fMRI), Maus and fellow UC Berkeley researchers Jason Fischer and David Whitney located the part of the visual cortex that makes calculations to compensate for our sluggish visual processing abilities. They saw this prediction mechanism in action, and their findings suggest that the middle temporal region of the visual cortex, known as V5, is computing where moving objects are most likely to end up.

For the experiment, six volunteers had their brains scanned, via fMRI, as they viewed the “flash-drag effect,” a visual illusion in which we see brief flashes shifted in the direction of a moving background.

“The brain interprets the flashes as part of the moving background, and therefore engages its prediction mechanism to compensate for processing delays,” Maus said.

The researchers found that the two conditions – flashes perceived in their predicted locations against a moving background, and flashes actually shown in those predicted locations against a still background – created the same neural activity patterns in the V5 region of the brain. This established that V5 is where this prediction mechanism takes place, they said.
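One common way to quantify "the same neural activity patterns" is to correlate the two conditions' voxel responses. A toy sketch with made-up numbers (not the study's data or analysis):

```python
import math

# Toy sketch (made-up numbers, not the study's data): testing whether two
# conditions evoke the "same" activity pattern by computing a Pearson
# correlation across voxels.

def pearson(u, v):
    """Pearson correlation between two equal-length activity patterns."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    du = [x - mu for x in u]
    dv = [x - mv for x in v]
    num = sum(a * b for a, b in zip(du, dv))
    den = math.sqrt(sum(a * a for a in du) * sum(b * b for b in dv))
    return num / den

illusory_flash = [0.8, 1.2, 0.5, 1.9, 0.3]   # voxel pattern under the illusion
real_flash = [0.9, 1.1, 0.6, 2.0, 0.2]       # pattern with a real shifted flash

similarity = pearson(illusory_flash, real_flash)   # close to 1.0
```

A correlation near 1 across conditions, and near 0 against control conditions, is the kind of evidence that lets researchers argue the two patterns reflect a shared mechanism.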

In a study published earlier this year, Maus and his fellow researchers pinpointed the V5 region of the brain as the most likely location of this motion prediction process by successfully using transcranial magnetic stimulation, a non-invasive brain stimulation technique, to interfere with neural activity in the V5 region of the brain, and disrupt this visual position-shifting mechanism.

“Now not only can we see the outcome of prediction in area V5,” Maus said, “but we can also show that it is causally involved in enabling us to see objects accurately in predicted positions.”

On a more evolutionary level, the latest findings reinforce that it is actually advantageous not to see everything exactly as it is. In fact, it’s necessary to our survival:

“The image that hits the eye and then is processed by the brain is not in sync with the real world, but the brain is clever enough to compensate for that,” Maus said. “What we perceive doesn’t necessarily have that much to do with the real world, but it is what we need to know to interact with the real world.”

(Source: newscenter.berkeley.edu)

Filed under motion perception brain activity brain circuits visual cortex fMRI psychology neuroscience science

121 notes

Lost your keys? Your cat? The brain can rapidly mobilize a search party

A contact lens on the bathroom floor, an escaped hamster in the backyard, a car key in a bed of gravel: How are we able to focus so sharply to find that proverbial needle in a haystack? Scientists at the University of California, Berkeley, have discovered that when we embark on a targeted search, various visual and non-visual regions of the brain mobilize to track down a person, animal or thing.


That means that if we’re looking for a youngster lost in a crowd, the brain areas usually dedicated to recognizing other objects such as animals, or even the areas governing abstract thought, shift their focus and join the search party. Thus, the brain rapidly switches into a highly focused child-finder, and redirects resources it uses for other mental tasks.

“Our results show that our brains are much more dynamic than previously thought, rapidly reallocating resources based on behavioral demands, and optimizing our performance by increasing the precision with which we can perform relevant tasks,” said Tolga Cukur, a postdoctoral researcher in neuroscience at UC Berkeley and lead author of the study published today (Sunday April 21) in the journal Nature Neuroscience.

“As you plan your day at work, for example, more of the brain is devoted to processing time, tasks, goals and rewards, and as you search for your cat, more of the brain becomes involved in recognition of animals,” he added.

The findings help explain why we find it difficult to concentrate on more than one task at a time. The results also shed light on how people are able to shift their attention to challenging tasks, and may provide greater insight into neurobehavioral and attention deficit disorders such as ADHD.

These results were obtained in studies that used functional Magnetic Resonance Imaging (fMRI) to record the brain activity of study participants as they searched for people or vehicles in movie clips. In one experiment, participants held down a button whenever a person appeared in the movie. In another, they did the same with vehicles.

The brain scans simultaneously measured neural activity via blood flow in thousands of locations across the brain. Researchers used regularized linear regression analysis, which finds correlations in data, to build models showing how each of the roughly 50,000 locations near the cortex responded to each of the 935 categories of objects and actions seen in the movie clips. Next, they compared how much of the cortex was devoted to detecting humans or vehicles depending on whether or not each of those categories was the search target.
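The regression step can be sketched in miniature. A toy ridge-regression example (made-up data and a single hypothetical voxel; the study fit roughly 50,000 cortical locations against 935 categories):

```python
import numpy as np

# Toy sketch of the regularized (ridge) regression step described above.
# Illustrative only: made-up data, one simulated voxel, not the study's pipeline.

rng = np.random.default_rng(0)

n_frames, n_categories = 200, 5        # movie frames x object categories
X = rng.integers(0, 2, (n_frames, n_categories)).astype(float)  # category present?
true_tuning = np.array([2.0, 0.0, -1.0, 0.5, 0.0])  # voxel's category weights
y = X @ true_tuning + 0.1 * rng.standard_normal(n_frames)       # noisy response

lam = 1.0                              # regularization strength
# Closed-form ridge solution: w = (X'X + lam*I)^-1 X'y
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_categories), X.T @ y)
```

Comparing fitted weights like these across the two search conditions is then what reveals how much cortex shifts toward the target category.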


They found that when participants searched for humans, relatively more of the cortex was devoted to humans, and when they searched for vehicles, more of the cortex was devoted to vehicles. For example, areas that were normally involved in recognizing specific visual categories such as plants or buildings switched to become attuned to humans or vehicles, vastly expanding the area of the brain engaged in the search.

“These changes occur across many brain regions, not only those devoted to vision. In fact, the largest changes are seen in the prefrontal cortex, which is usually thought to be involved in abstract thought, long-term planning, and other complex mental tasks,” Cukur said.

The findings build on an earlier UC Berkeley brain imaging study that showed how the brain organizes thousands of animate and inanimate objects into what researchers call a “continuous semantic space.” Those findings challenged previous assumptions that every visual category is represented in a separate region of the visual cortex. Instead, researchers found that categories are actually represented in highly organized, continuous maps.

The latest study goes further to show how the brain’s semantic space is warped during a visual search, depending on the search target. Researchers have posted their results in an interactive, online brain viewer. Other co-authors of the study are UC Berkeley neuroscientists Jack Gallant, Alexander Huth and Shinji Nishimoto. Funding for the research was provided by the National Eye Institute of the National Institutes of Health.

Filed under brain brain activity fMRI prefrontal cortex visual cortex neuroscience science

80 notes

Reward linked to image is enough to activate brain’s visual cortex
Once rhesus monkeys learn to associate a picture with a reward, the reward by itself becomes enough to alter the activity in the monkeys’ visual cortex. This finding was made by neurophysiologists Wim Vanduffel and John Arsenault (KU Leuven and Harvard Medical School) and American colleagues using functional brain scans and was published recently in the leading journal Neuron.
Our visual perception is not determined solely by retinal activity. Other factors also influence the processing of visual signals in the brain. “Selective attention is one such factor,” says Professor Wim Vanduffel. “The more attention you pay to a stimulus, the better your visual perception is and the more effective your visual cortex is at processing that stimulus. Another factor is the reward value of a stimulus: when a visual signal becomes associated with a reward, it affects our processing of that visual signal. In this study, we wanted to investigate how a reward influences activity in the visual cortex.”
Pavlov inverted
To do this, the researchers used a variant of Pavlov’s well-known conditioning experiment: “Think of Pavlov giving a dog a treat after ringing a bell. The bell is the stimulus and the food is the reward. Eventually the dogs learned to associate the bell with the food and salivated at the sound of the bell alone. Essentially, Pavlov removed the reward but kept the stimulus. In this study, we removed the stimulus but kept the reward.”
In the study, the rhesus monkeys first encountered images projected on a screen followed by a juice reward (classical conditioning). Later, the monkeys received juice rewards while viewing a blank screen. fMRI brain scans taken during this experiment showed that the visual cortex of the monkeys was activated by being rewarded in the absence of any image.
Importantly, these activations were not spread throughout the whole visual system but were instead confined to the specific brain regions responsible for processing the exact stimulus used earlier during conditioning. This result shows that information about rewards is being sent to the visual cortex to indicate which stimuli have been associated with rewards.
Equally surprising, these reward-only trials were found to strengthen the cue-reward associations. This is more or less the equivalent of giving Pavlov’s dog an extra treat after a conditioning session and noticing the next day that he salivates twice as much as before. More generally, this result suggests that rewards can be associated with stimuli over longer time scales than previously thought.
Dopamine
Why does the visual cortex react selectively in the absence of a visual stimulus on the retina? One potential explanation is dopamine. “Dopamine is a signalling chemical (neurotransmitter) in nerve cells and plays an important role in processing rewards, motivation, and motor functions. Dopamine’s role in reward signalling is the reason some Parkinson’s patients fall into gambling addiction after taking dopamine-increasing drugs. Aware of dopamine’s role in reward, we re-ran our experiments after giving the monkeys a small dose of a drug that blocks dopamine signalling. We found that the activations in the visual cortex were reduced by the dopamine blocker. What’s likely happening here is that a reward signal is being sent to the visual cortex via dopamine,” says Professor Vanduffel.
The study used fMRI (functional Magnetic Resonance Imaging) scans to visualise brain activity. fMRI scans map functional activity in the brain by detecting changes in blood flow. The oxygen content and the amount of blood in a given brain area vary according to the brain activity associated with a given task. In this way, task-specific activity can be tracked.

Reward linked to image is enough to activate brain’s visual cortex

Once rhesus monkeys learn to associate a picture with a reward, the reward by itself becomes enough to alter the activity in the monkeys’ visual cortex. This finding was made by neurophysiologists Wim Vanduffel and John Arsenault (KU Leuven and Harvard Medical School) and American colleagues using functional brain scans and was published recently in the leading journal Neuron.

Our visual perception is not determined solely by retinal activity. Other factors also influence the processing of visual signals in the brain. “Selective attention is one such factor,” says Professor Wim Vanduffel. “The more attention you pay to a stimulus, the better your visual perception is and the more effective your visual cortex is at processing that stimulus. Another factor is the reward value of a stimulus: when a visual signal becomes associated with a reward, it affects our processing of that visual signal. In this study, we wanted to investigate how a reward influences activity in the visual cortex.”

Pavlov inverted

To do this, the researchers used a variant of Pavlov’s well-known conditioning experiment: “Think of Pavlov giving a dog a treat after ringing a bell. The bell is the stimulus and the food is the reward. Eventually the dogs learned to associate the bell with the food and salivated at the sound of the bell alone. Essentially, Pavlov removed the reward but kept the stimulus. In this study, we removed the stimulus but kept the reward.”

In the study, rhesus monkeys first viewed images projected on a screen, each followed by a juice reward (classical conditioning). Later, the monkeys received juice rewards while viewing a blank screen. fMRI brain scans taken during this phase showed that the monkeys’ visual cortex was activated by the reward alone, in the absence of any image.

Importantly, these activations were not spread throughout the whole visual system but were instead confined to the specific brain regions responsible for processing the exact stimulus used earlier during conditioning. This result shows that information about rewards is being sent to the visual cortex to indicate which stimuli have been associated with rewards.

Equally surprisingly, these reward-only trials were found to strengthen the cue-reward associations. This is more or less equivalent to giving Pavlov’s dog an extra treat after a conditioning session and finding the next day that it salivates twice as much as before. More generally, this result suggests that rewards can become associated with stimuli over longer time scales than previously thought.
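To see why this finding is surprising, consider a textbook associative-learning rule such as Rescorla–Wagner (a purely illustrative sketch, not the authors’ model): in the standard update, a trial with reward but no cue produces no learning at all, which is exactly what the monkeys’ reward-only trials appear to contradict.

```python
# Toy associative-learning sketch (illustrative only; not the study's model).
# Standard Rescorla-Wagner: V <- V + alpha * x * (r - V), where x marks
# cue presence (0 or 1). With x = 0 (reward-only trial), V never changes --
# the finding that reward-only trials strengthen the association is
# precisely what this textbook rule fails to capture.

def rescorla_wagner(v, cue_present, reward, alpha=0.2):
    """One trial of the classic update: no cue, no learning."""
    x = 1.0 if cue_present else 0.0
    return v + alpha * x * (reward - v)

v = 0.0
# Conditioning: cue paired with reward -> associative strength grows.
for _ in range(20):
    v = rescorla_wagner(v, cue_present=True, reward=1.0)
print(f"after pairing trials:     V = {v:.3f}")

# Reward-only trials (cue absent): the classic rule predicts no change.
v_before = v
for _ in range(20):
    v = rescorla_wagner(v, cue_present=False, reward=1.0)
print(f"after reward-only trials: V = {v:.3f} (unchanged: {v == v_before})")
```

Under the classic rule the association is frozen during reward-only trials; the study’s result implies real cue-reward learning has a pathway this simple account lacks.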

Dopamine

Why does the visual cortex react selectively in the absence of a visual stimulus on the retina? One potential explanation is dopamine. “Dopamine is a signalling chemical (neurotransmitter) in nerve cells and plays an important role in processing rewards, motivation, and motor functions. Dopamine’s role in reward signalling is the reason some Parkinson’s patients fall into gambling addiction after taking dopamine-increasing drugs. Aware of dopamine’s role in reward, we re-ran our experiments after giving the monkeys a small dose of a drug that blocks dopamine signalling. We found that the activations in the visual cortex were reduced by the dopamine blocker. What’s likely happening here is that a reward signal is being sent to the visual cortex via dopamine,” says Professor Vanduffel.

The study used fMRI (functional Magnetic Resonance Imaging) scans to visualise brain activity. fMRI scans map functional activity in the brain by detecting changes in blood flow. The oxygen content and the amount of blood in a given brain area vary according to the brain activity associated with a given task. In this way, task-specific activity can be tracked.
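The analysis idea behind such scans can be sketched in a few lines (a minimal toy example, not the study’s actual pipeline): a voxel is called “task-active” when its time series correlates with a regressor encoding when the event of interest, here reward delivery, occurred.

```python
# Minimal sketch of the fMRI analysis idea (illustrative; not the study's
# actual pipeline): a voxel counts as "task-active" when its signal
# correlates with a regressor encoding the task events.
import random

random.seed(0)

# Boxcar regressor: 1 during reward blocks, 0 during rest, 10 scans each.
regressor = ([0] * 10 + [1] * 10) * 4

def simulate_voxel(task_gain):
    """BOLD-like signal: baseline + task response + measurement noise."""
    return [100 + task_gain * x + random.gauss(0, 1) for x in regressor]

def correlation(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

active = simulate_voxel(task_gain=5.0)   # responds to the reward events
silent = simulate_voxel(task_gain=0.0)   # does not

print(f"active voxel r = {correlation(active, regressor):.2f}")
print(f"silent voxel r = {correlation(silent, regressor):.2f}")
```

The responsive voxel correlates strongly with the event regressor while the unresponsive one hovers near zero, which is the basic logic by which task-specific activity is mapped.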

Filed under primates visual cortex visual perception selective attention neuroscience psychology science

157 notes

Neuroprosthesis gives rats the ability to ‘touch’ infrared light 
Researchers have given rats the ability to “touch” infrared light, normally invisible to them, by fitting them with an infrared detector wired to microscopic electrodes implanted in the part of the mammalian brain that processes tactile information. The achievement represents the first time a brain-machine interface has augmented a sense in adult animals, said Duke University neurobiologist Miguel Nicolelis, who led the research team.
The experiment also demonstrated for the first time that a novel sensory input could be processed by a cortical region specialized in another sense without “hijacking” the function of this brain area, said Nicolelis. This discovery suggests, for example, that a person whose visual cortex was damaged could regain sight through a neuroprosthesis implanted in another cortical region, he said.
Although the initial experiments tested only whether rats could detect infrared light, there seems no reason that these animals in the future could not be given full-fledged infrared vision, said Nicolelis. For that matter, cortical neuroprostheses could be developed to give animals or humans the ability to see in any region of the electromagnetic spectrum, or even magnetic fields. “We could create devices sensitive to any physical energy,” he said. “It could be magnetic fields, radio waves, or ultrasound. We chose infrared initially because it didn’t interfere with our electrophysiological recordings.”
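The general scheme described here, translating an arbitrary sensor reading into cortical microstimulation, can be sketched as follows. This is a hypothetical simplification: the pulse-rate ceiling and the linear mapping are assumptions for illustration, not the Nicolelis lab’s actual encoder.

```python
# Hypothetical sketch of a sensory-neuroprosthesis encoding scheme:
# translate a sensor reading (infrared, magnetic field, ultrasound...)
# into a microstimulation pulse rate. Illustrative simplification only;
# the ceiling and linear mapping are assumed, not taken from the study.

MAX_PULSE_RATE_HZ = 400.0   # assumed ceiling for stimulation frequency

def encode_stimulation(sensor_reading, sensor_min=0.0, sensor_max=1.0):
    """Map a sensor reading to a stimulation pulse rate in Hz."""
    # Normalize into [0, 1], clamping out-of-range readings.
    level = (sensor_reading - sensor_min) / (sensor_max - sensor_min)
    level = max(0.0, min(1.0, level))
    # Scale linearly into the allowed pulse-rate range.
    return level * MAX_PULSE_RATE_HZ

# Weak, mid, and strong readings map to graded pulse rates, which the
# animal learns to interpret via the stimulated (here, tactile) cortex.
for reading in (0.1, 0.5, 0.9):
    print(f"reading {reading:.1f} -> {encode_stimulation(reading):.0f} Hz")
```

Because the encoder is agnostic to what the sensor measures, swapping the infrared detector for any other transducer leaves the rest of the scheme unchanged, which is the point of the quote above.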
Nicolelis and colleagues Eric Thomson and Rafael Carra published their findings February 12, 2013 in the online journal Nature Communications. Their research was sponsored by the National Institute of Mental Health.

Filed under mammalian brain infrared light visual cortex CNS BMI neuroprosthesis neuroscience science
