Neuroscience

Articles and news from the latest research reports.

Posts tagged visual processing

110 notes

Patients with autism spectrum disorder are not sensitive to ‘being imitated’

A Japanese research group led by Prof. Norihiro Sadato of the National Institute for Physiological Sciences (NIPS), National Institutes of Natural Sciences (NINS), has found that people with autism spectrum disorder (ASD) show decreased activity in a brain area critical for recognizing whether one’s own movements are being imitated by others. These results will be published in Neuroscience Research.

The research group of Norihiro Sadato (NIPS), Hirotaka Kosaka, a specially appointed associate professor at the University of Fukui, and Toshio Munesue, a professor at Kanazawa University, used functional magnetic resonance imaging (fMRI) to measure brain activity while a subject watched his or her finger movements being imitated, or not imitated, by another person. Typically developing subjects showed increased activity in the extrastriate body area (EBA) when they were being imitated compared with when they were not. The EBA is a region of the visual cortex that responds strongly during the perception of human body parts. In subjects with ASD, by contrast, this imitation-related increase in EBA activity was absent, suggesting that the EBA does not function properly in ASD when one’s movements are imitated.

Persons with ASD are known to have difficulty with interpersonal communication and often fail to notice when their movements are being imitated. Behavioral intervention research to alleviate ASD is progressing and indicates that training based on imitation is useful. These findings not only provide clues to understanding ASD but could also be used to evaluate behavioral interventions for the disorder.

(Source: eurekalert.org)

Filed under autism extrastriate body area brain activity neuroimaging visual processing neuroscience science

171 notes

How the brain processes visual information

MSU’s Behrad Noudoost was a co-author, with Marc Zirnsak and other neuroscientists from the Tirin Moore Lab at Stanford University, of a recent paper on the research in Nature, an international weekly journal of the natural sciences.

Noudoost and the team studied saccadic eye movements—the rapid jumps of the eye from one point of focus to another—in an effort to determine exactly how this happens without our brains being overwhelmed by too much visual information.

To introduce the study, Noudoost first gets his audience to think about eye movements at the most basic level. “Look in the mirror and stare at one eye,” Noudoost said. “Then look at the other eye. We are essentially blind during eye movement as we cannot see our eyes move, even though we know they did.”

According to Noudoost, scientists have long been trying to learn exactly how the brain processes visual stimuli during saccadic eye movements, and this research offers new evidence that the prefrontal cortex of the brain is responsible for visual stability.

"Visual stability is what keeps our vision stable in spite of changing input. It is similar to the stabilizer button on a video camera," Noudoost said.

"We wanted to know what causes the brain to filter out unnecessary information when we shift our vision from one focal target to another," Noudoost said. "Without that filter the visual information would overwhelm us."

According to the scientists, the study offers evidence that neurons in the prefrontal cortex of the brain start processing information in anticipation of where we are going to look before we ever move our eyes, suggesting that selective processing may be the mechanism behind visual stability.
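
This anticipatory shift can be caricatured in a few lines of code. The sketch below is illustrative only, not the model from the Nature paper: the function name, the 100 ms window, and the linear shift are all assumptions made for the example.

```python
# Toy sketch (illustrative numbers, not the paper's model): before a saccade,
# a neuron's receptive field (RF) center is modelled as shifting toward the
# upcoming saccade target, so the neuron begins processing the future gaze
# location before the eyes actually move.

def receptive_field_center(rf, saccade_target, time_to_saccade_ms):
    """Linearly remap the RF toward the saccade target over the final 100 ms."""
    if time_to_saccade_ms >= 100:            # long before the saccade: no remapping
        return rf
    w = 1.0 - time_to_saccade_ms / 100.0     # 0 = no shift, 1 = fully remapped
    return tuple(r + w * (s - r) for r, s in zip(rf, saccade_target))

rf, target = (0.0, 0.0), (10.0, 0.0)
for t_ms in (150, 100, 50, 0):
    print(t_ms, "ms before saccade ->", receptive_field_center(rf, target, t_ms))
```

The point of the toy is only the ordering: the "processing location" reaches the saccade target before the movement time reaches zero.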

Noudoost said this new information can help scientists better understand the underlying causes of problems such as dyslexia and attention deficit disorders.

According to Frances Lefcort, the head of the Department of Cell Biology and Neuroscience, the team’s basic research may have implications for understanding a myriad of mental health issues.

"Schizophrenia and attention deficit disorders have been linked to visual stability, so the work Behrad is doing offers valuable knowledge to other scientists working in cognitive neuroscience," Lefcort said.

"Understanding how a healthy brain works is important in terms of knowing its impact on cognitive functions such as memory, learning and in this case attention," Noudoost said. "By exploring normal brain function, we can better understand what happens in someone with a mental illness."

According to Lefcort, Noudoost and neuroscience professor Charles Gray are strengthening MSU’s contribution to the field of cognitive neuroscience.

"Behrad is an exquisitely trained neuroscientist. He offers students a viewpoint as both scientist and a physician," Lefcort said. "We are thrilled to have him and he has already brought new energy and is bolstering our impact on the growing field of brain research."

Noudoost joined MSU’s Department of Cell Biology and Neuroscience last summer from Stanford University and has already been awarded a $225,000 Whitehall Foundation grant for neuroscience. Whitehall Foundation grants are awarded to established scientists working in neurobiology.

"I am colorblind and I wanted to see the world as others could see it," Noudoost said, explaining why he was first drawn to this type of research. "Although I still don’t see the world in the same colors as everyone else, I am more amazed every day by the brain."

Filed under eye movements prefrontal cortex visual processing visual system mental illness neuroscience science

156 notes

Sound and vision: visual cortex processes auditory information too

‘Seeing is believing’, so the idiom goes, but new research suggests vision also involves a bit of hearing.

Scientists studying the brain processes involved in sight have found that the visual cortex uses information gleaned from the ears as well as the eyes when viewing the world.

They suggest this auditory input enables the visual system to predict incoming information and could confer a survival advantage.

Professor Lars Muckli, of the Institute of Neuroscience and Psychology at the University of Glasgow, who led the research, said: “Sounds create visual imagery, mental images, and automatic projections.

“So, for example, if you are in a street and you hear the sound of an approaching motorbike, you expect to see a motorbike coming around the corner. If it turned out to be a horse, you’d be very surprised.”

The study, published in the journal Current Biology, involved conducting five different experiments using functional Magnetic Resonance Imaging (fMRI) to examine the activity in the early visual cortex in 10 volunteer subjects.

In one experiment they asked the blindfolded volunteers to listen to three different sounds – birdsong, traffic noise and a talking crowd.

Using a special algorithm that can identify unique patterns in brain activity, the researchers were able to discriminate between the different sounds from activity in the early visual cortex alone.
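
The article does not specify the algorithm, but the general idea of decoding stimulus identity from distributed activity patterns can be sketched with a simple nearest-centroid classifier on synthetic "voxel" data. Everything below (voxel counts, noise levels, and the classifier choice) is an illustrative assumption, not the study's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the experiment: 3 sound categories (birdsong,
# traffic, crowd), each evoking its own noisy activity pattern across
# 50 "voxels" of early visual cortex.
n_voxels, n_trials = 50, 20
templates = rng.normal(size=(3, n_voxels))           # latent pattern per sound
trials = np.repeat(templates, n_trials, axis=0)      # 20 trials per category
trials += rng.normal(scale=0.5, size=trials.shape)   # measurement noise
labels = np.repeat(np.arange(3), n_trials)

# Nearest-centroid pattern classifier: average the training trials for each
# category, then assign a new trial to the closest centroid.
centroids = np.stack([trials[labels == k].mean(axis=0) for k in range(3)])
new_trials = templates + rng.normal(scale=0.5, size=templates.shape)
dists = np.linalg.norm(new_trials[:, None, :] - centroids[None, :, :], axis=2)
decoded = np.argmin(dists, axis=1)
print(decoded)  # with patterns this distinct, decoding should recover 0, 1, 2
```

Real multivariate pattern analyses work on the same principle: if the three sounds evoke reliably different spatial patterns in early visual cortex, a classifier can tell them apart from that activity alone.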

A second experiment revealed even imagined images, in the absence of both sight and sound, evoked activity in the early visual cortex.

Lars Muckli said: “This research enhances our basic understanding of how interconnected different regions of the brain are. The early visual cortex hasn’t previously been known to process auditory information, and while there is some anatomical evidence of interconnectedness in monkeys, our study is the first to clearly show a relationship in humans.

“In future we will test how this auditory information supports visual processing, but the assumption is it provides predictions to help the visual system to focus on surprising events which would confer a survival advantage.

“This might provide insights into mental health conditions such as schizophrenia or autism and help us understand how sensory perceptions differ in these individuals.”

(Source: gla.ac.uk)

Filed under visual cortex hearing vision auditory perception visual processing neuroscience science

122 notes

Alpha waves organize a to-do list for the brain

In his search to understand the role and function of brain waves, neuroscientist Ole Jensen (Radboud University) postulates a new theory of how the alpha wave controls attention to visual signals. His theory is published in Trends in Neurosciences on May 20. Alpha waves appear to play an even more active and important role than Jensen had thought.

Our brain cells ‘spark’ all the time. From this electrical activity brain waves emerge: oscillations at different frequencies. And just as a radio station uses a particular frequency to carry specific information far from the emitting source, so does the brain. And just as radio listeners with a certain musical preference tune in to the frequency that carries the music they prefer, brain areas tune in to the wavelength relevant to their functioning.

Alpha waves aren’t boring
Ole Jensen, professor of Neuronal Oscillations at Radboud University’s Donders Institute for Brain, Cognition and Behaviour, is trying to figure out in detail how this network of sending and receiving information through oscillations works. Earlier, he discovered a novel role for the alpha wave, long thought to be a boring rhythm that emerges when the brain runs idle and a person is dozing off. Jensen shifted this interpretation by showing the importance of the alpha frequency: it helps shut down brain areas that are irrelevant for a given task, allowing us to concentrate on what is really important at that moment.

To do list
In the Trends in Neurosciences paper that appeared today, Jensen postulates a new theory of how this actually works during a visual task. ‘We think that different phases of the alpha wave encode different parts of a visual scene. It helps break the visual information down into small jobs and then perform those tasks in a specific order. A to-do list for your visual attention system: focus on the face, focus on the hand, focus on the glass, look around. And then all over again.’
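
Jensen's proposed phase code can be caricatured in a few lines. The 10 Hz rate, the four-item list, and the equal phase bins below are illustrative assumptions made for the sketch, not parameters from the paper.

```python
# Toy sketch of the proposed alpha-phase "to-do list" (all numbers and item
# names are illustrative): one cycle of a 10 Hz alpha wave is split into equal
# phase bins, and each bin is a slot for attending one part of the scene.
ALPHA_HZ = 10.0
todo = ["face", "hand", "glass", "surroundings"]

def attended_item(t_seconds):
    """Return the scene part whose phase bin contains time t."""
    phase = (t_seconds * ALPHA_HZ) % 1.0      # position within one 100 ms cycle
    return todo[int(phase * len(todo))]

# One full 100 ms cycle steps through the whole list, then starts over.
for t in (0.01, 0.03, 0.06, 0.08, 0.11):
    print(f"{t * 1000:.0f} ms: attend to the {attended_item(t)}")
```

The essential idea is that one oscillatory cycle serially visits every item, so the whole list is refreshed roughly ten times per second.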

Jensen is now planning to test this new interpretation of the alpha wave in both animals and humans.

(Source: ru.nl)

Filed under brainwaves alpha oscillations visual attention visual processing neuroscience science

108 notes

Revealing Rembrandt

The power and significance of artwork in shaping human cognition is self-evident. The starting point for our empirical investigations is the view that the task of neuroscience is to integrate itself with other forms of knowledge, rather than to seek to supplant them. In our recent work, we examined a particular aspect of the appreciation of artwork using present-day functional magnetic resonance imaging (fMRI). Our results emphasized the continuity between viewing artwork and other human cognitive activities. We also showed that appreciation of a particular aspect of artwork, namely authenticity, depends upon the co-ordinated activity between the brain regions involved in multiple decision making and those responsible for processing visual information. The findings about brain function probably have no specific consequences for understanding how people respond to the art of Rembrandt in comparison with their response to other artworks. However, the use of images of Rembrandt’s portraits, his most intimate and personal works, clearly had a significant impact upon our viewers, even though they were spatially confined to the interior of an MRI scanner at the time of viewing. Neuroscientific studies of humans viewing artwork have the capacity to reveal the diversity of human cognitive responses that may be induced by external advice or context as people view artwork in a variety of frameworks and settings.

Full Article

Filed under brain activity neuroimaging art occipital cortex visual processing psychology neuroscience science

127 notes

New study reveals insight into how the brain processes shape and color

A new study by Wellesley College neuroscientists is the first to directly compare brain responses to faces and objects with responses to colors. The paper, by Bevil Conway, Wellesley Associate Professor of Neuroscience, and Rosa Lafer-Sousa, a 2009 Wellesley graduate currently studying in the Brain and Cognitive Sciences program at MIT, reveals new information about how the brain’s inferior temporal cortex processes information.

Located at the base of the brain, the inferior temporal cortex (IT) is a large expanse of tissue that has been shown to be critical for object perception. This region of the brain is commonly divided into posterior, central, and anterior parts, but it remains unclear whether these partitions constitute distinct areas. An existing, popular theory is that the parts represent a hierarchical organization of information processing, a notion that has previously been supported by functional magnetic resonance imaging (fMRI) in monkeys. For their study, Conway and Lafer-Sousa used non-invasive fMRI to measure responses across the brains of rhesus monkeys to a range of different stimuli and obtained responses to images of objects, faces, places and colored stripes. “The technique enabled us to determine the spatial distribution of responses across the brain, and has been useful in figuring out how the visual brain is organized,” Conway said.

Conway, a visual neuroscientist and artist, examines the way the nervous system processes color using physiological, behavioral, and modeling techniques. Conway and Lafer-Sousa assert that color provides a useful tool for tackling questions about processing in the IT region, as it has little “low-level” feature similarity with shapes (psychological work shows that color can be perceived independent of shape)—therefore any relationship between color-responsive and shape-responsive regions should reflect fundamental organizational principles.

"Shape and color are both properties of objects and are processed by the parts of the brain known to be important for detecting and discriminating objects. However, the way this part of the brain is organized has not been clear: for example, is color computed by different parts of this region than those that compute shape?" The answer to this question, Conway said, has deep implications for understanding the general computational principles used by the brain and how the brain evolved.

"Our work showed that, to a large extent, color and faces are handled by separate, parallel streams, and that these pieces of information are processed by connected, serial stages," Conway said. "One can imagine the processing as an assembly line, where some aspect of faces – and some aspect of color – is computed first. The output is then sent to another region downstream that makes a subsequent computation."

They hypothesized that the earliest stages in color processing involve detecting and discriminating hue, while the later stages compute color-memory association. For example, the brain may first compute that yellow is diagnostic of banana, then later, color categories are recognized; for example, limes, grass, and fern leaves are all “green.”
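
The hypothesized two-stage scheme, hue discrimination first and category association later, can be sketched as a simple pipeline. The stage boundaries and hue thresholds below are illustrative assumptions for the example, not the study's model.

```python
import colorsys

def stage1_hue(rgb):
    """Earlier stage: detect and discriminate hue from the raw (r, g, b) signal."""
    hue, _, _ = colorsys.rgb_to_hls(*rgb)
    return hue  # position on the color wheel, in [0, 1)

def stage2_category(hue):
    """Later stage: map the hue onto a learned color category."""
    if hue < 1 / 12 or hue >= 11 / 12:
        return "red"
    if hue < 1 / 4:
        return "yellow"   # e.g. diagnostic of a banana
    if hue < 5 / 12:
        return "green"    # limes, grass, and fern leaves all land here
    return "blue/other"

banana, lime = (1.0, 0.9, 0.1), (0.2, 0.8, 0.2)
print(stage2_category(stage1_hue(banana)))  # yellow
print(stage2_category(stage1_hue(lime)))    # green
```

The serial structure is the point: a later stage receives only the earlier stage's output (here, a hue), mirroring the assembly-line picture Conway describes.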

"The most striking aspect of the study is what it reveals about the precision of the organization of the brain. We often think that because the brain consists of billions of neurons, that at some level it must be quite variable how the neurons are organized," Conway said. "The study shows that there is a remarkable precision in organization of the neural circuits for high-level vision, which will make tractable many questions bridging cognitive science and systems neuroscience."

As a visual artist, Conway said the aspect of the research he finds most satisfying is the beauty of the organizational patterns, which, he said, are “clearly the result of a set of underlying organizational principles.” He continued, “It is interesting to think that the brain reflects what artists have long recognized: that color and shape can be decoupled, each represented somewhat independently—think of color monochromes versus black-and-white line drawings. The neural architecture provides a reason why this is effective or possible.”

The researchers note that it remains unclear whether the organizational principles found in monkeys apply to humans, an important issue that bears on cortical evolution. However, their results suggest that the IT comprises parallel, multi-stage processing networks subject to one organizing principle.

Filed under inferior temporal cortex visual processing object recognition neuroimaging neuroscience science

131 notes

Dragonflies can see by switching “on” and “off”

Researchers at the University of Adelaide have discovered a novel and complex visual circuit in a dragonfly’s brain that could one day help to improve vision systems for robots.

Dr Steven Wiederman and Associate Professor David O’Carroll from the University’s Centre for Neuroscience Research have been studying the underlying processes of insect vision and applying that knowledge in robotics and artificial vision systems.

Their latest discovery, published this month in The Journal of Neuroscience, is that the brains of dragonflies combine opposite pathways - both an ON and OFF switch - when processing information about simple dark objects.

"To perceive the edges of objects and changes in light or darkness, the brains of many animals, including insects, frogs, and even humans, use two independent pathways, known as ON and OFF channels," says lead author Dr Steven Wiederman.

"Most animals will use a combination of ON switches with other ON switches in the brain, or OFF and OFF, depending on the circumstances. But what we show occurring in the dragonfly’s brain is the combination of both OFF and ON switches. This happens in response to simple dark objects, likely to represent potential prey to this aerial predator.

"Although we’ve found this new visual circuit in the dragonfly, it’s possible that many other animals could also have this circuit for perceiving various objects," Dr Wiederman says.

The researchers were able to record their results directly from ‘target-selective’ neurons in dragonflies’ brains. They presented the dragonflies with moving lights that changed in intensity, as well as both light and dark targets.

"We discovered that the responses to the dark targets were much greater than we expected, and that the dragonfly’s ability to respond to a dark moving target is from the correlation of opposite contrast pathways: OFF with ON," Dr Wiederman says.
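
The OFF-then-ON signature of a small dark target can be sketched with a toy correlator. Everything here (the 1-D luminance trace, the rectified-difference channels, and the fixed delay) is an illustrative assumption, not the circuit the researchers actually recorded from.

```python
import numpy as np

def dark_target_response(luminance, delay=5):
    """Correlate a delayed OFF signal with the ON signal.

    A small dark target sweeping past a receptor first darkens it (OFF event)
    and then, as it passes, brightens it again (ON event); multiplying the
    delayed OFF channel by the ON channel responds to exactly that sequence.
    """
    d = np.diff(luminance)
    on = np.maximum(d, 0.0)     # ON channel: rectified brightening
    off = np.maximum(-d, 0.0)   # OFF channel: rectified darkening
    off_delayed = np.concatenate([np.zeros(delay), off[:-delay]])
    return float(np.sum(off_delayed * on))

background = np.ones(100)
dark_target = background.copy()
dark_target[40:45] = 0.2        # 5-sample dip: a dark object passes (matches delay)
dark_edge = background.copy()
dark_edge[40:] = 0.2            # darkening with no recovery: OFF only

print(dark_target_response(dark_target))  # large: OFF followed by ON
print(dark_target_response(dark_edge))    # 0.0: only one channel is active
```

A detector built this way stays silent for a plain luminance step, which drives only one channel, but fires for the brief dip left by a small dark object, which is what makes combining opposite-contrast pathways useful for prey detection.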

"The exact mechanisms that occur in the brain for this to happen are of great interest in visual neurosciences generally, as well as for solving engineering applications in target detection and tracking. Understanding how visual systems work can have a range of outcomes, such as in the development of neural prosthetics and improvements in robot vision.

"A project is now underway at the University of Adelaide to translate much of the research we’ve conducted into a robot, to see if it can emulate the dragonfly’s vision and movement. This project is well underway and once complete, watching our autonomous dragonfly robot will be very exciting," he says.

Filed under visual processing vision neural circuitry robotics neuroscience science

71 notes

A little brain training goes a long way
People who use a ‘brain-workout’ program for just 10 hours have a mental edge over their peers even a year later, researchers report today in PLoS ONE.
The search for a regimen of mental callisthenics to stave off age-related cognitive decline is a booming area of research — and a multimillion-dollar business. But critics argue that even though such computer programs can improve performance on specific mental tasks, there is scant proof that they have broader cognitive benefits.
For the study, adults aged 50 and older played a computer game designed to boost the speed at which players process visual stimuli. Processing speed is thought to be “the first domino that falls in cognitive decline”, says Fredric Wolinsky, a public-health researcher at the University of Iowa in Iowa City, who led the research.
The game was developed by academic researchers but is now sold under the name Double Decision by Posit Science, based in San Francisco, California. (Posit did not fund the study.) Players are timed on how fast they click on an image in the centre of the screen and on others that appear around the periphery. The program ratchets up the difficulty as a player’s performance improves.
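Ratcheting difficulty to performance is typically done with a simple up/down staircase. The sketch below is a generic version of that idea (the actual Double Decision algorithm is proprietary; the function name, step sizes, and bounds here are assumptions for illustration): stimulus duration shrinks slightly after a correct response and grows by a larger step after a miss, which keeps the player near a fixed accuracy level:

```python
# Hypothetical adaptive-difficulty rule of the kind used in
# speed-of-processing training games: a weighted up/down staircase
# on stimulus display time.

def update_display_ms(display_ms, correct, step=10, floor=20, ceiling=500):
    """Return the next stimulus duration given the last response."""
    if correct:
        display_ms -= step       # harder: flash the images more briefly
    else:
        display_ms += step * 3   # easier: recover faster after an error
    return max(floor, min(ceiling, display_ms))
```

With a 1-down/3-up weighting like this, durations settle where the player is correct about three times out of four, so the task stays challenging without becoming discouraging.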
Participants played the training game for 10 hours on site, some with an extra 4-hour ‘booster’ session later, or for 10 hours at home. A control group worked on computerized crossword puzzles for 10 hours on site. Researchers measured the mental agility of all 621 subjects before the brain training began, and again one year later, using eight well-established tests of cognitive performance.
The control group’s scores did not increase over the course of that year, but all the brain-training groups significantly upped their scores in the Useful Field of View test — which requires a subject to identify items in a scene with just a quick glance — and four others. When they compared the study participants’ scores to those expected for people their ages, the researchers found improvements that translated to 3 to 4.1 years of protection against age-related decline for the field-of-view test and 1.5 to 6.6 years for the other tasks.
“It was interesting that it didn’t matter whether you were on site at the clinic or just did this at home — you got basically the same bang for your buck,” says Frederick Unverzagt, a neuropsychologist at the Indiana University School of Medicine in Indianapolis, who was not involved with the study.
But Peter Snyder, a neuropsychologist at Brown University in Providence, Rhode Island, points out that players’ performance could have improved simply because they were familiar with the game — not because their cognitive skills improved. “To me, that makes it hard to interpret the results with the same degree of certainty” that the authors have, he says.
Snyder also doubts that 10 hours of training could affect brain wiring enough to provide long-lasting general benefits, but Henry Mahncke, chief executive of Posit Science, disagrees. “If you’ve never played piano before and spend 10 hours practising, a year later you will be better than when you started,” he says. “The new study shows that there’s science to be done here. Some things you can do with your brain are highly productive and others are not.”

Filed under cognitive training aging cognitive decline visual processing performance psychology neuroscience science
