Neuroscience

Articles and news from the latest research reports.

Posts tagged visual cortex

99 notes

Researchers observe brain development in utero

New investigation methods using functional magnetic resonance imaging (fMRI) offer insights into fetal brain development. These “in vivo” observations reveal the different stages of the brain’s development. A research group at the Computational Imaging Research Lab of the MedUni Vienna has observed that parts of the brain that are later responsible for sight are already active in utero.

To obtain insights into the development of the human brain in utero, the study group observed 32 fetuses from the 21st to the 38th week of pregnancy (an average pregnancy lasts 40 weeks); the architecture of the brain develops particularly during the middle trimester. Using functional magnetic resonance imaging, it was possible to measure activity and thereby gain information about the most important cortical and subcortical structures of the developing brain. Between the 26th and 29th weeks of pregnancy in particular, short-range neuronal connections developed especially rapidly, whereas long-range nerve connections grew more linearly over the course of pregnancy. “It became apparent that the areas responsible for sensory perception are developed first and only then, around four weeks later, do the areas responsible for more complex, cognitive skills come along,” says first author Andras Jakab from the Computational Imaging Research Lab at the MedUni Vienna, explaining the results.

In another study, the study group led by Veronika Schöpf and Georg Langs was able to demonstrate a correlation between eye movements and the brain areas later responsible for processing vision, as early as the 30th to 36th weeks of pregnancy. It is already known that newborn babies first have to learn to “process” visual stimuli after birth; it has now been possible to demonstrate that this important development starts even before birth. The research group investigated the relationship between eye movements and brain activity. Even at this stage of development, eye movement is linked to the areas in the visual cortex of the brain responsible for processing optical signals. “The relationship between eye movement and the responsible areas of the brain has therefore been demonstrated for the first time in utero,” explains first author Veronika Schöpf.

Filed under brain development prenatal development brain activity visual cortex eye movement neuroscience science

96 notes

(Fig. 1: Two-photon image of the three types of cells in the visual cortex of a rat. Neuronal activity is measured via changes in fluorescence intensity. Green cells are inhibitory neurons, white cells are excitatory neurons, and red cells are astrocytes.)

Waking up the visual system

The way that neurons in the brain respond to a given stimulus depends on whether an organism is asleep, drowsy, awake, paying careful attention or ignoring the stimulus. However, while the properties of neural circuits in the visual cortex are well known, the mechanisms responsible for the different patterns of activity in the awake and drowsy states remain poorly understood. A team of researchers led by Tadaharu Tsumoto from the RIKEN Brain Science Institute has observed the changes in activity that occur in rodents on waking from anesthesia.

The research team used a technique called two-photon functional calcium imaging to observe the activity of cells in the visual cortex of rats while they were anesthetized and exposed to a visual stimulus of an image moving across a screen. Using rats with inhibitory neurons labeled with a green fluorescent protein, the researchers were able to measure activity separately in populations of inhibitory and excitatory neurons (Fig. 1). The neuronal activity in response to visual stimulation under anesthesia was recorded, and then the rats were allowed to wake and the change in activity of the two populations of neurons was observed.
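In calcium imaging, activity is typically quantified as the relative change in fluorescence, ΔF/F, against a baseline. A minimal sketch of that computation — the trace values and baseline window here are invented for illustration, not data from the study:

```python
import numpy as np

def delta_f_over_f(trace, baseline_frames=50):
    """Convert a raw fluorescence trace to dF/F, using the mean of an
    initial baseline window as the reference fluorescence F0."""
    f0 = np.mean(trace[:baseline_frames])
    return (trace - f0) / f0

# Synthetic trace: flat baseline at 100 a.u., then a transient peaking at 130
trace = np.concatenate([np.full(50, 100.0),
                        np.array([130.0, 120.0, 110.0, 105.0, 100.0])])
dff = delta_f_over_f(trace)
print(dff.max())  # 0.3, i.e. a 30% fluorescence increase at the transient peak
```

Larger ΔF/F transients correspond to stronger neuronal activity, which is how the two labeled populations can be compared.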

Tsumoto’s team found that inhibitory neurons responded more reliably and with stronger activity to visual stimuli in the awake state than in the anesthetized state. The response of the excitatory neurons had a shorter decay time in the awake state, which means that their activity was more tightly linked to the presentation of the visual stimulus than when the animal was under the influence of anesthesia.
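The “shorter decay time” comparison can be illustrated by fitting an exponential decay to the post-stimulus response and extracting its time constant. A hedged sketch with synthetic traces (the frame rate and tau values are invented, not the study’s measurements):

```python
import numpy as np

def decay_time_constant(response, dt=0.1):
    """Estimate the exponential decay constant tau (in seconds) of a
    post-stimulus response via a log-linear least-squares fit."""
    t = np.arange(len(response)) * dt
    slope, _ = np.polyfit(t, np.log(response), 1)  # log(y) = -t/tau + c
    return -1.0 / slope

dt = 0.1  # assumed 10 Hz imaging frame interval, for illustration
t = np.arange(20) * dt
awake = np.exp(-t / 0.5)         # fast decay: tau = 0.5 s
anesthetized = np.exp(-t / 2.0)  # slow decay: tau = 2.0 s
print(decay_time_constant(awake))         # ~0.5
print(decay_time_constant(anesthetized))  # ~2.0
```

A smaller tau means the response tracks stimulus offset more closely, which is the sense in which awake activity is “more tightly linked” to the stimulus.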

These changes that occur during wakefulness allow neurons in the visual cortex to respond more reliably to visual stimuli in their environment. “If animals are awakened from the drowsy state by howls or footsteps of enemies, the sensitivity or resolution of moving visual stimuli will increase so that they can more effectively judge how fast and from which location the enemies are coming,” explains Tsumoto.

The team then found that the basal forebrain region of the brain, which is known to play a role in state-dependent changes in cortical activity through its acetylcholine neurons, is responsible for these shifts in responses of neurons in the visual cortex of mice during wakefulness. They found that stimulating the basal forebrain of anesthetized animals could make visual cortical neurons take on the firing properties of the awake state. These findings highlight the role of the basal forebrain in modulating the responses of visual cortical neurons during wakefulness.

Filed under visual cortex visual system neural activity neurons cholinergic projections neuroscience science

174 notes

Brain mechanism underlying the recognition of hand gestures develops even when blind

Does a distinctive mechanism operate in the brain of congenitally blind individuals when understanding and learning others’ gestures, or does the same mechanism work as in sighted individuals? Japanese researchers found that congenitally blind and sighted individuals activate common brain regions when recognizing human hand gestures, indicating that the part of the neural network that recognizes others’ hand gestures forms in the same way even without visual information. The findings are discussed in The Journal of Neuroscience.

The brain distinguishes human bodies from inanimate objects and responds to them in a particular way. Part of the “visual cortex”, the region that processes visual information, supports this mechanism. Since perception relies largely on visual information, this is unsurprising; however, it has recently been learned that the same brain region is also activated during haptic perception and during recognition of one’s own gestures. This suggests that a mechanism for recognizing human bodies forms regardless of the sensory modality.

Blind and sighted individuals participated in the study by the research group of Assistant Professor Ryo Kitada of the National Institute for Physiological Sciences, National Institutes of Natural Sciences. With their eyes closed, participants were instructed to touch plastic casts of hands, teapots, and toy cars and identify the shape. As it turned out, sighted and blind individuals identified the objects with the same accuracy. By measuring brain activation with functional magnetic resonance imaging (fMRI), the research group pinpointed a region that was activated for plastic casts of hands, but not for teapots or toy cars, in common across participants regardless of visual experience. The study also revealed a region whose activity depended on the duration of visual experience; this region appears to play a supplementary role in recognizing hand gestures.

As Assistant Professor Ryo Kitada notes, “Many individuals are active in many parts of society even after losing their sight as children. Developmental psychology has been advancing its doctrine based on sighted individuals. I hope this finding will help us grasp how blind individuals understand and learn about others, and that it will be seen as an important step in supporting the development of social skills for blind individuals.”

Filed under haptics hand gestures visual cortex blindness brain activity neuroscience science

130 notes

How the Brain Finds What It’s Looking For

Despite the barrage of visual information the brain receives, it retains a remarkable ability to focus on important and relevant items. This fall, for example, NFL quarterbacks will be rewarded handsomely for how well they can focus their attention on color and motion – being able to quickly judge the jersey colors of teammates and opponents and where they’re headed is a valuable skill. How the brain accomplishes this feat, however, has been poorly understood.

Now, University of Chicago scientists have identified a brain region that appears central to perceiving the combination of color and motion. They discovered a unique population of neurons that shift in sensitivity toward different colors and directions depending on what is being attended – the red jersey of a receiver headed toward an end zone, for example. The study, published Sept. 4 in the journal Neuron, sheds light on a fundamental neurological process that is a key step in the biology of attention.

“Most of the objects in any given visual scene are not that important, so how does the brain select or attend to important ones?” said study senior author David Freedman, PhD, associate professor of neurobiology at the University of Chicago. “We’ve zeroed in on an area of the brain that appears central to this process. It does this in a very flexible way, changing moment by moment depending on what is being looked for.”

The visual cortex of the brain possesses multiple, interconnected regions that are responsible for processing different aspects of the raw visual signal gathered by the eyes. Basic information on motion and color are known to route through two such regions, but how the brain combines these streams into something usable for decision-making or other higher-order processes remained unclear.

To investigate this process, Freedman and postdoctoral fellow Guilhem Ibos, PhD, studied the response of individual neurons during a simple task. Monkeys were shown a rapid series of visual images. An initial image showed either a group of red dots moving upwards or yellow dots moving downwards, which served as an instruction for which specific colors and directions were relevant during that trial. The subjects were rewarded when they released a lever when this image later reappeared. Subsequent images were composed of different colors of dots moving in different directions, among which was the initial image.
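The trial structure described above can be sketched in a few lines. This is a hedged toy version only: the two cue stimuli match the article (red dots moving up, yellow dots moving down), but the distractor stimuli and the trial logic details are invented for illustration:

```python
import random

# Candidate (color, direction) stimuli; the first two are the possible cues
# from the article, the rest are invented distractors.
STIMULI = [("red", "up"), ("yellow", "down"), ("red", "down"),
           ("yellow", "up"), ("green", "left")]

def run_trial(rng):
    """One trial: a cue instructs the relevant color/direction combination;
    reward follows a lever release when the cue reappears in the stream."""
    cue = rng.choice(STIMULI[:2])               # instruction image
    stream = rng.sample(STIMULI, len(STIMULI))  # subsequent image sequence
    for i, stim in enumerate(stream):
        if stim == cue:
            return i  # position at which the lever release earns reward
    return None

print(run_trial(random.Random(0)))  # index of the cue's reappearance
```

The key design point is that the relevant color–direction pairing changes from trial to trial, so any neuron tracking it must update its selectivity on the fly.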

Dynamic neurons

Freedman and Ibos looked at neurons in the lateral intraparietal area (LIP), a region highly interconnected with brain areas involved in vision, motor control and cognitive functions. As subjects performed the task and looked for a specific combination of color and motion, LIP neurons became highly active. They did not respond, however, when the subjects passively viewed the same images without an accompanying task.

When the team further investigated the responses of LIP neurons, they discovered that the neurons possessed a unique characteristic. Individual neurons shifted their sensitivity to color and direction toward the relevant color and motion features for that trial. When the subject looked for red dots moving upwards, for example, a neuron would respond strongly to directions close to upward motion and to colors close to red. If the task was switched to another color and direction seconds later, that same neuron would be more responsive to the new combination.
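One way to picture such a shift is a Gaussian tuning curve whose preferred direction is pulled toward the attended direction. This is a hedged toy model — the tuning width, shift fraction, and angles are invented, and the real LIP data need not follow this exact form:

```python
import numpy as np

def tuned_response(direction, preferred, width=40.0, gain=1.0):
    """Gaussian tuning curve over motion direction, in degrees."""
    d = (direction - preferred + 180) % 360 - 180  # wrapped angular difference
    return gain * np.exp(-0.5 * (d / width) ** 2)

def shifted_preference(baseline_pref, attended_dir, shift_fraction=0.3):
    """Toy attention rule: the preferred direction moves a fixed fraction
    of the way toward the attended direction (fraction is illustrative)."""
    d = (attended_dir - baseline_pref + 180) % 360 - 180
    return (baseline_pref + shift_fraction * d) % 360

# A neuron preferring 45 deg shifts toward an attended upward (90 deg) motion
pref = shifted_preference(45.0, 90.0)
print(pref)  # 58.5: tuning pulled toward the attended direction
print(tuned_response(90.0, pref) > tuned_response(90.0, 45.0))  # True
```

After the shift, the neuron responds more strongly to the attended direction than it would with its baseline tuning, which is the effect reported for LIP.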

“Shifts in feature tuning had been postulated a long time ago by theoretical studies,” Ibos said. “This is the first time that neurons in the brain have been shown to shift their selectivity depending on which features are relevant to solve a task.”

Freedman and Ibos developed a model for how the LIP brings together both basic color and motion information. Attention likely affects that process through signals from higher-order areas of the brain that affect LIP neuron selectivity. The team believes that this region plays an important role in making sense of basic sensory information, and they are trying to better understand the brain-wide neuronal circuitry involved in this process.

“Our study suggests that this area of the brain brings together information from multiple areas throughout the brain,” Freedman said. “It integrates inputs – visual, motor, cognitive inputs related to memory and decision making – and represents them in a way that helps solve the task at hand.”

(Source: newswise.com)

Filed under visual system visual cortex parietal cortex neurons neuroscience science

106 notes

(Image caption: A thalamocortical, or TC neuron labeled with fluorescent dye, as used in Dr. Augustinaite’s study. The image shows a voltage recording device, at bottom left, entering the yellow cell body, and a stimulation device, at top, reaching the dendrites. Color in this image shows the depth in the slice.)

To See or Not to See

The brain is a complicated network of small units called neurons, all working to carry information from the outside world, create an internal model, and generate a response. Neurons sense a signal through branching dendrites, carry this signal to the cell body, and send it onwards through a long axon to signal the next neuron. However, neurons can function in many different ways, some of which researchers are still exploring. Some signals that the dendrites receive do not continue to the next neuron; instead they seem to change the way that the neuron handles subsequent signals. This could help neurons function as part of a large network, but researchers still have many questions. Dr. Sigita Augustinaite, a researcher in the Optical Neuroimaging Unit at the Okinawa Institute of Science and Technology Graduate University, suggested one mechanism explaining how neurons help the network function. Her findings, part of a collaboration between the University of Oslo and OIST, were published August 13, 2014 as the cover article in The Journal of Neuroscience.

Dr. Augustinaite studies the visual pathway, in which signals from the retina are sent to the visual cortex, where the brain interprets what the eye sees. Between the eye and the visual cortex, the signals must pass through the visual thalamus, that is, through thalamocortical (TC) neurons. These neurons can switch between a “sleeping” state and a “waking” state depending on input they receive from neurons and other brain areas. When an animal is awake, TC neurons transmit the incoming retinal signals on to the cortex, but when the animal is asleep, the neurons block retinal signals.

The visual cortex also sends a massive input back to TC neurons to control retinal signals traveling through the thalamus. But Dr. Augustinaite says that the suggested mechanisms of this control bring more questions than answers. To understand more, she conducted experiments in acute brain slices, small pieces of brain tissue where neurons stay alive and maintain their physiological properties. She added glutamate to dendrites far from the cell body to emulate a feedback signal from the visual cortex. Then she measured the neuron’s response, shown as a voltage difference between inside and outside of the membrane.

Dr. Augustinaite found that stimulating the neurons in this way depolarizes their membranes, creating something called NMDA spike/plateau potentials. If strong enough, depolarization can cause a neuron to fire an action potential, which travels through the axon to activate other neurons. Action potentials look like a sharp, one-millisecond increase in membrane voltage, and they transmit signals from retina to cortex. But if NMDA spike/plateau potentials induced action potentials, signals from the cortex and signals from the retina would be indistinguishable. With her experiments, Dr. Augustinaite showed that the NMDA spike/plateau potentials in TC neurons do not trigger action potentials. Instead, they lift the voltage of the membrane, changing the neuron’s properties for a few hundred milliseconds, creating conditions for reliable signal transmission from retina to cortex.
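The gating idea can be captured in a back-of-the-envelope calculation: a plateau depolarization alone stays below spike threshold, and so does a retinal input alone, but a retinal input arriving on top of the plateau crosses it. All numbers here are invented for illustration, not measurements from the study:

```python
# Toy membrane arithmetic (values are illustrative, not from the paper)
REST = -70.0       # resting membrane potential, mV
PLATEAU = 15.0     # sustained depolarization from an NMDA spike/plateau, mV
EPSP = 12.0        # peak depolarization from a retinal EPSP, mV
THRESHOLD = -50.0  # action-potential threshold, mV

def reaches_threshold(*depolarizations):
    """True if the summed depolarizations lift the membrane to threshold."""
    return REST + sum(depolarizations) >= THRESHOLD

print(reaches_threshold(PLATEAU))        # False: plateau alone is subthreshold
print(reaches_threshold(EPSP))           # False: retinal EPSP alone is subthreshold
print(reaches_threshold(PLATEAU, EPSP))  # True: together they cross threshold
```

In this picture the cortical feedback never fires the cell by itself; it only sets the conditions under which retinal signals get through, keeping the two input streams distinguishable.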

“The research gives, for the first time, a clear view on what dendritic potentials are good for,” explained Prof. Bernd Kuhn, who leads the lab where Dr. Augustinaite works. “It points directly to the mechanism,” he concluded. Showing how dendritic plateaus function is just one important step toward understanding how neurons function as a network. “This mechanism could also be used in many other neuronal circuits, where one input regulates how another input moves through the network,” Dr. Augustinaite said. “This mechanism is an exciting logical element in the neuronal network, but just the start of putting the puzzle together.”

Filed under neurons action potentials neural circuits dendritic integration visual cortex neuroscience science

167 notes

Neural Anatomy of Primary Visual Cortex Limits Visual Working Memory

Despite the immense processing power of the human brain, working memory storage is severely limited, and the neuroanatomical basis of these limitations has remained elusive. Here, we show that the stable storage limits of visual working memory for over 9 s are bound by the precise gray matter volume of primary visual cortex (V1), defined by fMRI retinotopic mapping. Individuals with a bigger V1 tended to have greater visual working memory storage. This relationship was present independently for both surface size and thickness of V1 but absent in V2, V3 and for non-visual working memory measures. Additional whole-brain analyses confirmed the specificity of the relationship to V1. Our findings indicate that the size of primary visual cortex plays a critical role in limiting what we can hold in mind, acting like a gatekeeper in constraining the richness of working mental function.
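The abstract’s central claim is a correlation between an anatomical measure (V1 gray matter volume) and a behavioral one (visual working memory capacity). A minimal sketch of that analysis with hypothetical data — these numbers are invented for illustration and are not the paper’s measurements:

```python
import numpy as np

# Hypothetical values for ten participants: V1 gray matter volume (cm^3)
# and an estimate of visual working memory capacity (items).
v1_volume = np.array([8.1, 9.4, 7.6, 10.2, 8.8, 9.9, 7.2, 10.8, 8.5, 9.1])
wm_capacity = np.array([2.1, 2.9, 1.8, 3.4, 2.5, 3.1, 1.6, 3.6, 2.3, 2.7])

# Pearson correlation between anatomy and behavior
r = np.corrcoef(v1_volume, wm_capacity)[0, 1]
print(round(r, 3))  # close to 1 here, since the toy data are nearly linear
```

The study’s specificity claim corresponds to this correlation being present for V1 but absent when the same test is run on V2, V3, or non-visual memory measures.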

Full Article

(Image: Shutterstock)

Filed under working memory visual cortex gray matter cortical thickness neuroscience science

91 notes

Are Three Brain Imaging Techniques Better than One?
Many recent imaging studies have shown that in children with autism, different parts of the brain do not connect with each other in typical ways. Initially, most researchers thought that the autistic brain has fewer connections between key regions. The most recent studies, however, point to an opposite conclusion: The brains of people with autism exhibit overconnectivity. 
To date, almost all studies of autism in children have used a single imaging technique to explore connectivity. None has been able to capture a robust picture of the brain abnormalities associated with autism—until now. 
Two new grants from the National Institute of Mental Health (NIMH) will allow San Diego State University Psychology Professor Ralph-Axel Müller to combine three imaging techniques and harness the best of each one in his study of autism.
Techniques in tandem
Although the term “brain imaging” gets thrown around a lot when describing the latest advances in neuroscience and psychology, there are dozens of different brain imaging techniques. Each gives scientists a different view of the inner workings of the brain, and each comes with its own strengths and limitations. 
For example, the frequently cited technique of fMRI, or functional magnetic resonance imaging, measures blood flow in different areas of the brain at specific snapshots in time, based on the knowledge that increased blood flow indicates increased activity of nerve cells in that area of the brain. The technique is powerful, but has limitations when it comes to detecting dynamic changes in brain activity that occur very fast, within milliseconds. 
EEG (electroencephalography), a much older technique, is actually better at detecting such dynamic changes, although it cannot pinpoint exactly where in the brain the activity occurs. A powerful and more recent technique is MEG, or magnetoencephalography, which can detect dynamic changes in brain activity that happen within a few milliseconds.
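The trade-off between modalities can be made concrete with typical acquisition numbers (the figures below are common ballpark values, not the study's actual scanning parameters):

```python
# Back-of-envelope comparison of temporal resolution. Illustrative
# numbers only; real parameters vary by scanner and protocol.
fmri_tr_s = 2.0      # one fMRI volume acquired roughly every 2 seconds
eeg_rate_hz = 1000   # EEG/MEG commonly sampled at about 1000 Hz

# How many electrophysiological samples fit inside one fMRI snapshot
samples_per_volume = eeg_rate_hz * fmri_tr_s
print(f"EEG/MEG samples per fMRI volume: {samples_per_volume:.0f}")
```

Thousands of EEG/MEG samples per single fMRI volume is why combining the methods pays off: fMRI says where, EEG/MEG say when.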
Müller looks for disorganized patterns of brain activity that could be responsible for some of the telltale characteristics of autism spectrum disorder, such as inattention to social cues and repetitive and obsessive behaviors. For example, last year, Müller and his colleagues discovered that in children with autism, connectivity was impaired between the cerebral cortex and the thalamus, a deep brain structure that is important for sensorimotor functions and attention.
With $4.2 million in new funding from NIH, Müller—together with collaborators Ksenija Marinkovic at SDSU and Thomas Liu at the University of California, San Diego—will apply fMRI, EEG, and MEG to study both autistic and non-autistic, or typically-developing, children and adolescents during a variety of tests, including language tests designed to tease out activity in various parts of the brain. 
Defining the differences
One component of the project will concern the visual system. Previous research has shown that people with autism rely on their visual cortex more than typically developing people during thought processes, for example, when making a semantic distinction, such as deciding whether a truck is a vehicle. Using the one-two punch of fMRI and MEG together, Müller and his team will be able to trace the dynamic processes by which brain regions work together to come up with a response, and how these processes differ in autism. 
The study will also examine brain function during its resting state in order to identify abnormalities in brain network organization. The combined use of EEG and MEG, together with fMRI techniques that reveal brain anatomy, will produce a much more complete picture of abnormal brain organization in autism.
Ultimately, Müller and his colleagues hope to identify biomarkers in the brain that can reliably indicate whether the participant falls on the autism spectrum.
“Autism is a brain-based disorder, but its diagnosis is still based entirely on behavioral observation,” Müller said. “This is inadequate. We need to find brain biomarkers for autism.”
Another goal of the researchers is to find brain biomarkers that can distinguish different subtypes of autism. It is generally suspected that the term “autism” actually covers several different disorders, each of which may be caused by different genetic and environmental risk factors. Eventually, brain biomarkers might be tied to genetic data, giving scientists a better understanding of the origins of autism, as well as new leads for treatment.
“For decades, research teams studying autism have specialized in one or another scientific technique, often without understanding well what other techniques can reveal. Our study combining several of the major imaging techniques will be one step toward a more comprehensive account of how the autistic brain differs from the typically developing one – and what may be done about it,” Müller said.


Filed under autism brain imaging brain activity ASD visual cortex neuroscience science

460 notes

Important advance in brain mapping and memory
“When a tiger starts to move towards you, you need to know whether it is something you are actually seeing or whether it’s just something that you remember or have imagined,” says Prof. Julio Martinez-Trujillo of McGill’s Department of Physiology. The researcher and his team have discovered that there is a clear frontier in the brain between the area that encodes information about what is immediately before the eyes and the area that encodes the abstract representations that are the product of our short-term memory or imagination. It is an important advance in brain mapping and opens the doors to further research in the area of short-term memory.
These findings, which are described in an article just published in Nature Neuroscience, resolve a question that has occupied neuroscientists for years: namely, how and where exactly in the brain the visual information coming from our eyes is first transformed into short-term memories. “We found that while one area in the brain processes information about what we are currently seeing, an area right beside it stores the information in short-term memory,” says McGill PhD student Diego Mendoza-Halliday, first author of the article. “What is so exciting about this finding is that until now, no one knew the place where visual information first gets transformed into short-term memory.”
The researchers arrived at this conclusion by measuring the neuronal activity in these two areas in the brains of macaques as they first looked at, and then after a short time (1.2 to 2 seconds) remembered, a random sequence of dots moving across a computer screen like rainfall. What surprised Martinez-Trujillo and his team was how clearly demarcated the divide was between the activities and functions of the two brain areas, despite the fact that they lie side by side.
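The contrast that separates a "sensory" area from a "memory" area can be sketched schematically: a sensory unit fires while the dots are visible, while a memory unit keeps firing through the delay. The firing rates, thresholds, and the `classify_unit` helper below are all hypothetical, for illustration only.

```python
# Hypothetical sketch of a sensory-vs-memory classification based on
# when a unit's firing exceeds baseline. All numbers are invented.
def classify_unit(rate_stimulus, rate_delay, baseline=5.0):
    """Label a unit by when its activity exceeds 1.5x baseline (Hz)."""
    active_stim = rate_stimulus > baseline * 1.5
    active_delay = rate_delay > baseline * 1.5
    if active_stim and not active_delay:
        return "sensory"
    if active_delay:
        return "memory"
    return "unclassified"

print(classify_unit(rate_stimulus=20.0, rate_delay=4.0))   # sensory-like
print(classify_unit(rate_stimulus=12.0, rate_delay=15.0))  # memory-like
```

The study's striking result was that units on either side of the anatomical border fell almost entirely into one category or the other.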
“It is rare to find this kind of sharp boundary in biological systems of any kind,” says Martinez-Trujillo. “Most of the time, when you look at the function of different brain areas, there is more of a transitional zone, more grey and not such a clear border between black and white. I think the evolutionary reason for this clear frontier is that it helped us to survive in dangerous situations.”
The discovery comes after five years of research in the area by Martinez-Trujillo and his team. Even so, he acknowledges that a certain amount of serendipity, and a lot of technological help, was involved in being able to capture a signal that travels for just 3 milliseconds and fires synapses in neurons that lie right beside one another.
Martinez-Trujillo and his team continue to work on mapping the receptors and connectivity between these two areas of the brain. But what is most important for him is to try to relate this discovery to schizophrenia and other diseases that involve hallucinations, and in order to do so he is working with a psychiatrist at Montreal’s Douglas Hospital.
(Image: Bigstock)


Filed under STM visual cortex brain activity visual memory working memory neuroscience science

262 notes

A weighty discovery
Humans have developed sophisticated concepts like mass and gravity to explain a wide range of everyday phenomena, but scientists have remarkably little understanding of how such concepts are represented by the brain.

Using advanced neuroimaging techniques, Queen’s University researchers have revealed how the brain stores knowledge about an object’s weight – information critical to our ability to successfully grasp and interact with objects in our environment.
Jason Gallivan, a Banting postdoctoral fellow in the Department of Psychology, and Randy Flanagan, a professor in the Department of Psychology, used functional magnetic resonance imaging (fMRI) to uncover which regions of the human brain represent an object’s weight prior to lifting that object. They found that knowledge of object weight is stored in ventral visual cortex, a brain region previously thought to represent only those properties of an object that can be directly viewed, such as its size, shape, location and texture.

“We are working on various projects to determine how the brain produces actions on the world,” explains Dr. Gallivan about the work he is undertaking at the Centre for Neuroscience Studies at Queen’s. “Simply looking at an object doesn’t provide the brain with information about how much that object weighs. Take for example a suitcase. There is often nothing about its visual appearance that informs you of whether it is packed with clothes or empty. Rather, this is information that must be derived through recent interactions with that object and stored in the brain so as to guide our movements the next time we must lift and interact with that object.”
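The idea of weight knowledge built up from recent interactions can be sketched as a stored estimate updated after each lift. The running-average update rule and all numbers below are illustrative assumptions, not the authors' model.

```python
# Toy sketch: refine a stored weight estimate from repeated lifts.
# The update rule and values are assumptions for illustration.
def update_estimate(stored, felt, learning_rate=0.5):
    """Move the stored weight estimate toward the weight just felt."""
    return stored + learning_rate * (felt - stored)

estimate = 1.0  # prior guess for the suitcase, in kg
for felt_weight in [8.0, 8.0, 8.0]:  # three lifts of a packed suitcase
    estimate = update_estimate(estimate, felt_weight)
print(f"stored estimate after lifts: {estimate:.2f} kg")
```

Each lift pulls the estimate toward the felt weight, so after a few interactions the stored value is close to 8 kg and can guide the fingertip forces for the next lift.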

According to previous research, the ventral visual cortex supports visual processing for perception and object recognition whereas the dorsal visual cortex supports visual processing for the control of action. However, this division of labour had only been tested for visually guided actions like reaching, which are directed towards objects, and not for actions involving the manipulation of objects, which requires access to stored knowledge about object properties.

“Because information about object weight is primarily important for the control of action, we thought that this information might only be stored in motor-related areas of the brain,” says Dr. Gallivan. “Surprisingly, however, we found that this non-visual information was also stored in ventral visual cortex. Presumably this allows for the weight of an object to become easily associated with its visual properties.”

In ongoing research, Drs. Gallivan and Flanagan are using transcranial magnetic stimulation (TMS) to temporarily disrupt targeted brain areas in order to assess their contribution to skilled object manipulation. By identifying which areas of the brain control certain motor skills, Drs. Gallivan and Flanagan’s research will be helpful in assessing patients with neurological impairments including stroke.
The work was funded by the Canadian Institutes of Health Research (CIHR). The research was recently published in Current Biology.


Filed under visual cortex transcranial magnetic stimulation object weight occipitotemporal cortex neuroscience science

371 notes

Brain fills gaps to produce a likely picture
Researchers at Radboud University use visual illusions to demonstrate to what extent the brain interprets visual signals. They were surprised to discover that active interpretation occurs early on in signal processing. In other words, we see not only with our eyes, but with our brain, too. Current Biology is publishing these results in the July issue.
The results obtained by the Radboud University researchers are illustrated, for example, by the visual illusion on the left: we see a triangle that in fact is not there. The triangle is only suggested because of the way the ‘Pac-Man’ shapes are positioned; there appears to be a light-grey triangle on top of three black circles.
Seen in the fMRI
How does the brain do that? That was the question Peter Kok and Floris de Lange, from the Donders Institute at Radboud University in Nijmegen, asked themselves. Using fMRI, they discovered that the triangle – although non-existent – activates the primary visual cortex. This is the first area in the cortex to deal with a signal from the eyes.
The primary visual cortex is normally regarded as an area where signals from the eyes are merely processed, but the results Kok and De Lange obtained have now refuted that view.
Active interpretation
Recent theories assume that the brain does not simply process or filter external information, but actively interprets it. In the example described above, the brain decides it is more likely that a triangle would be on top of black circles than that three such circles, each with a bite taken out, would by coincidence point in a particular direction. After all, when we look around, we see triangles and circles more often than Pac-Man shapes.
Furthermore, objects very often lie on top of other things; just think of the books and piles of paper on your desk. The imaginary triangle is a feasible explanation for the bites taken out of the circles; the brain ‘understands’ they are ‘merely’ partly covered black circles.
The unexpected requires more processing
Kok and De Lange also noticed that whenever the Pac-Man shapes do not form a triangle, more brain activity is required. In the above image on the right, we see that the three Pac-Man shapes ‘underneath’ the triangle cause little brain activity (coloured blue), but the separate Pac-Man on the right causes more activity. This also fits in with the theory that perception is a question of interpretation: if something is easy to explain, less brain activity is needed to process that information, compared to when something is unexpected or difficult to account for – as in the adjacent diagram.
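The "easy to explain, less activity" idea is the core of predictive-coding accounts, and can be caricatured in a few lines. This is a toy illustration with invented values, not the study's actual model.

```python
# Toy predictive-coding illustration: measured activity is treated as
# proportional to prediction error. All values are invented.
def prediction_error(observed, predicted):
    return abs(observed - predicted)

# The brain predicts an occluding triangle, so aligned Pac-Man shapes
# match that prediction; a stray, misaligned Pac-Man does not.
predicted_contour = 1.0
aligned_pacman = prediction_error(observed=1.0, predicted=predicted_contour)
stray_pacman = prediction_error(observed=0.0, predicted=predicted_contour)
print(aligned_pacman, stray_pacman)  # expected input leaves less to signal
```

The aligned shapes produce zero error and thus little activity, while the unexpected shape produces a large error, matching the pattern Kok and De Lange observed.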


Filed under visual illusions visual cortex brain activity neuroimaging shape perception neuroscience science
