Posts tagged visual attention
Dyslexia, the most commonly diagnosed learning disability in the United States, is a neurological reading disability that occurs when the regions of the brain that process written language don’t function normally.

The use of non-invasive functional neuroimaging tools has helped characterize how brain activity is disrupted in dyslexia. However, most prior work has focused on only a small number of brain regions, leaving a gap in our understanding of how multiple brain regions communicate with one another through networks, called functional connectivity, in persons with dyslexia.
This led neuroscience PhD student Emily Finn and her colleagues at the Yale University School of Medicine to conduct a whole-brain functional connectivity analysis of dyslexia using functional magnetic resonance imaging (fMRI). They report their findings in the current issue of Biological Psychiatry.
"In this study, we compared fMRI scans from a large number of both children and young adults with dyslexia to scans of typical readers in the same age groups. Rather than activity in isolated brain regions, we looked at functional connectivity, or coordinated fluctuations between pairs of brain regions over time," explained Finn.
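The functional connectivity Finn describes, coordinated fluctuations between pairs of regions, is commonly computed as the correlation between regional fMRI time series. The following is a minimal sketch with simulated data, not the authors' actual analysis pipeline; the region count, time series, and coupling are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated BOLD time series for 4 brain regions over 200 time points.
# Regions 0 and 1 share a common driving signal, so they should show
# high functional connectivity; the others fluctuate independently.
n_regions, n_timepoints = 4, 200
shared = rng.standard_normal(n_timepoints)
timeseries = rng.standard_normal((n_regions, n_timepoints))
timeseries[0] += 2 * shared
timeseries[1] += 2 * shared

# Functional connectivity: Pearson correlation between every pair of
# regional time series (each row of `timeseries` is one region).
connectivity = np.corrcoef(timeseries)

print(connectivity[0, 1])  # strong for the coupled pair
print(connectivity[2, 3])  # near zero for independent regions
```

A whole-brain analysis like Finn's simply scales this idea up to every pair of regions and then compares the resulting connectivity matrices between groups.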
In total, they recruited and scanned 75 children and 104 adults. Finn and her colleagues then compared the whole-brain connectivity profiles of the dyslexic readers to the non-impaired readers, which revealed widespread differences.
Dyslexic readers showed decreased connectivity within the visual pathway as well as between visual and prefrontal regions, increased right-hemisphere connectivity, reduced connectivity in the visual word-form area, and persistent connectivity to anterior language regions around the inferior frontal gyrus. This altered connectivity profile is consistent with dyslexia-related reading difficulties.
Dr. John Krystal, Editor of Biological Psychiatry, said, “This study elegantly illustrates the value of functional imaging to map circuits underlying problems with cognition and perception, in this case, dyslexia.”
"As far as we know, this is one of the first studies of dyslexia to examine differences in functional connectivity across the whole brain, shedding light on the brain networks that crucially support the complex task of reading," added Finn. "Compared to typical readers, dyslexic readers had weaker connections between areas that process visual information and areas that control attention, suggesting that individuals with dyslexia are less able to focus on printed words."
Additionally, young-adult dyslexic readers maintained high connectivity to brain regions involved in phonology, suggesting that they continue to rely on effortful “sounding out” strategies into adulthood rather than transitioning to more automatic, visual-based strategies for word recognition.
A better understanding of brain organization in dyslexia could potentially lead to better interventions to help struggling readers.
(Source: elsevier.com)
Research to be presented at the Annual Meeting of the Society for the Study of Ingestive Behavior (SSIB), the foremost society for research into all aspects of eating and drinking behavior, describes a way that brain chemistry may make some people notice food more easily, which can tempt overeating even in people who are not overweight. Dopamine activity in the striatum, an area of the brain sensitive to food reward, was linked to how quickly men noticed a food picture hidden among neutral pictures. In turn, the men who quickly noticed food pictures also ate more.
From rodent research it is clear that dopamine action in the striatum motivates eating, and this goes awry in obesity. “We do know that in human obesity the striatal dopamine system is affected, but interestingly enough we know little about the striatal dopamine system of young, healthy individuals and how it relates to the motivation to eat,” says Susanne la Fleur from the Academic Medical Center in Amsterdam, who directed the study linking dopamine, attention to food, and eating.
Ordinarily the burst of dopamine during a rewarding activity is eventually stopped when it is re-absorbed into the cells it came from. That re-uptake process requires a brain chemical called “dopamine transporter” (DAT). Lower DAT means dopamine is reabsorbed more slowly, causing it to keep acting on the brain. The researchers scanned brains of healthy, non-obese young men to determine available DAT. The men completed a computerized visual attention task to see how quickly they could detect food pictures among neutral pictures. Subjects were also asked to report food intake during 7 days.
The researchers found that the men with lower DAT, which means higher dopamine activity, showed a stronger visual attention bias towards food, detecting food pictures more quickly. “We could speculate that in healthy humans dopamine does motivate eating. However, although we did observe a correlation between striatal dopamine transporter binding and the visual attention bias for food, and between visual attention bias for food and actual food intake, we did not observe a correlation between striatal dopamine transporter binding and actual food intake. Thus, a factor in addition to dopamine must be involved in going from being motivated to actually eating,” la Fleur concluded.
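The pattern of correlations described here can be illustrated with a small sketch on synthetic data. Every number below is invented; the point is only the shape of the analysis (pairwise correlations between DAT binding, attention bias, and intake), not the study's actual values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40  # hypothetical sample of healthy young men

# Synthetic illustration of the reported pattern: lower DAT goes with
# a stronger attention bias for food, and stronger bias goes with
# higher intake, while added noise dilutes any direct DAT-intake link.
dat = rng.normal(1.0, 0.2, n)                  # striatal DAT binding
bias = -0.8 * dat + rng.normal(0, 0.2, n)      # attention bias to food
intake = 0.5 * bias + rng.normal(0, 0.2, n)    # reported food intake

def r(x, y):
    """Pearson correlation coefficient between two samples."""
    return np.corrcoef(x, y)[0, 1]

print(r(dat, bias))     # negative: lower DAT, stronger bias
print(r(bias, intake))  # positive: stronger bias, more eating
print(r(dat, intake))   # attenuated by the noise at each step
```

In the actual study the third correlation was absent, which is what led la Fleur to conclude that something beyond dopamine bridges motivation and eating.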
(Source: eurekalert.org)
In his search to understand the role and function of brain waves, neuroscientist Ole Jensen (Radboud University) postulates a new theory on how the alpha wave controls attention to visual signals. His theory is published in Trends in Neurosciences on May 20. Alpha waves appear to be even more active and important than Jensen already thought.

Our brain cells ‘spark’ all the time. From this electrical activity, brain waves emerge: oscillations at different frequencies. And just as a radio station uses a particular frequency to carry specific information far from the emitting source, so does the brain. And just as radio listeners with a certain musical preference tune in to the frequency that carries the music they prefer, brain areas tune in to the wavelength relevant to their functioning.
Alpha waves aren’t boring
Ole Jensen, professor of Neuronal Oscillations at Radboud University’s Donders Institute for Brain, Cognition and Behaviour, tries to figure out in detail how this network of sending and receiving information through oscillations works. Earlier he discovered a novel role for the alpha wave, which was long thought to be a boring wave that emerges when the brain runs idle and a person is dozing off. Jensen shifted this interpretation by showing the importance of the alpha frequency: it helps to shut down brain areas that are irrelevant to the task at hand. It helps us concentrate on what is really important at that moment.
To do list
In the Trends in Neurosciences paper that appeared today, Jensen postulates a new theory for how this actually works in a visual task. ‘We think that different phases of the alpha wave encode different parts of a visual scene. It helps break down the visual information into small jobs and then perform those tasks in a specific order. A to-do list for your visual attention system: focus on the face, focus on the hand, focus on the glass, look around. And then all over again.’
Jensen is now planning to test this new interpretation of the alpha wave in both animals and humans.
(Source: ru.nl)
Cognitive scientists identify new mechanism at heart of early childhood learning and social behavior
Shifting the emphasis from gaze to hand, a study by Indiana University cognitive scientists provides compelling evidence for a new and possibly dominant way for social partners — in this case, 1-year-olds and their parents — to coordinate the process of joint attention, a key component of parent-child communication and early language learning.
Previous research involving joint visual attention between parents and toddlers has focused exclusively on the ability of each partner to follow the gaze of the other. In “Joint Attention Without Gaze Following: Human Infants and Their Parents Coordinate Visual Attention to Objects Through Eye-Hand Coordination,” published in the online journal PLOS ONE, the researchers demonstrate how hand-eye coordination is much more common, and the parent and toddler interact as equals, rather than one or the other taking the lead.
The findings open up new questions about language learning and the teaching of language. They could also have major implications for the treatment of children with early social-communication impairment, such as autism, where joint caregiver-child attention with respect to objects and events is a key issue.
"Currently, interventions consist of training children to look at the other’s face and gaze," said Chen Yu, associate professor in the Department of Psychological and Brain Sciences at IU Bloomington. "Now we know that typically developing children achieve joint attention with caregivers less through gaze following and more often through following the other’s hands. The daily lives of toddlers are filled with social contexts in which objects are handled, such as mealtime, toy play and getting dressed. In those contexts, it appears we need to look more at another’s hands to follow the other’s lead, not just gaze."
The new explanation solves some of the problems and inadequacies of the gaze-following theory. Gaze-following can be imprecise in the natural, cluttered environment outside the laboratory. It can be hard to tell precisely what someone is looking at when there are several objects together. It is easier and more precise to follow someone’s hands. In other situations, it may be more useful to follow the other’s gaze.
"Each of these pathways can be useful," Yu said. "A multi-pathway solution creates more options and gives us more robust solutions."
Researchers used innovative head-mounted eye-tracking technology that records the view of the person wearing it, much like Google Glass, and that had never before been used with young children. Recording moment-to-moment, high-density data on what both parent and child visually attend to as they play together in the lab, the researchers also applied advanced data-mining techniques to discover fine-grained eye, head and hand movement patterns in the rich multimodal dataset. The results reported are based on 17 parent-infant pairs; however, over the course of a few years, Yu and Smith have observed more than 100 children, and those data confirm the results.
"This really offers a new way to understand and teach joint attention skills," said co-author Linda Smith, Distinguished Professor in the Department of Psychological and Brain Sciences. Smith is well known for her pioneering research and theoretical work in the development of human cognition, particularly as it relates to children ages 1 to 3 acquiring their first language. "We know that although young children can follow eye gaze, it is not precise, cueing attention only generally to the left or right. Hand actions are spatially precise, so hand-following might actually teach more precise gaze-following."
New work at the University of California, Davis, shows for the first time how visual attention affects activity in specific brain cells. The paper, published June 26 in the journal Nature, shows that attention increases the efficiency of signaling into the brain’s cerebral cortex and boosts the ratio of signal over noise.

It’s the first time neuroscientists have been able to look at the behavior of synaptic circuits at such a fine-grained level of resolution while measuring the effects of attention, said Professor Ron Mangun, dean of social sciences at UC Davis and a researcher at the UC Davis Center for Mind and Brain.
Our brains recreate an internal map of the world we see through our eyes, mapping our visual field onto specific brain cells. Humans and our primate relatives have the ability to pay attention to objects in the visual scene without looking at them directly, Mangun said.
"Essentially, we ‘see out of the corner of our eyes,’ as the old saying goes. This ability helps us detect threats, and react quickly to avoid them, as when a car running a red light at high speed is approaching from our side," he said.
Postdoctoral scholar Farran Briggs worked with Mangun and Professor Martin Usrey at the UC Davis Center for Neuroscience to measure signaling through single nerve connections, or synapses, in monkeys while they performed a standard cognitive test for attention: pressing a joystick in response to seeing a stimulus appear in their field of view.
By taking measurements on each side of a synapse leading into the cerebral cortex, the team could measure when neurons were firing, the strength of the signal and the signal-to-noise ratio.
The researchers found that when the animals were paying attention to an area within their field of view, the signal strength through corresponding synapses leading into the cortex became more effective, and the signal was boosted relative to background noise.
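Signal-to-noise in this sense is the size of the stimulus-evoked response relative to its trial-to-trial variability. The sketch below illustrates the idea with hypothetical spike counts; the numbers and the Poisson firing model are assumptions for illustration, not the study's recordings or analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 200

# Hypothetical spike counts per trial for one cortical neuron.
# Attention is modeled as raising the mean evoked response; for
# Poisson-like firing this raises signal-to-noise as well.
unattended = rng.poisson(10, n_trials)
attended = rng.poisson(16, n_trials)

def snr(counts):
    """Mean evoked response divided by its trial-to-trial std."""
    return counts.mean() / counts.std()

print(snr(unattended))
print(snr(attended))  # higher: the signal is boosted relative to noise
```

The UC Davis team's contribution was to take such measurements on both sides of a single synapse, so attention's effect could be localized to signaling into the cortex rather than inferred from population activity.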
Combining established cognitive psychology with advanced neuroscience, the technique opens up new possibilities for research.
"There are a lot of questions about attention that we can now investigate, such as which brain mechanisms are disordered in diseases that affect attention," Usrey said.
The method could be used, for example, to probe the cholinergic nervous system, which is impacted by Alzheimer’s disease. It could also help to better understand developmental disorders that involve defects in attention, such as attention deficit hyperactivity disorder and autism.
"It’s going to turn out to be important for understanding and treating all kinds of diseases," Mangun predicted.
(Source: news.ucdavis.edu)

Action video games boost reading skills
Much to the chagrin of parents who think their kids should spend less time playing video games and more time studying, time spent playing action video games can actually make dyslexic children read better. In fact, 12 hours of video game play did more for reading skills than is normally achieved with a year of spontaneous reading development or demanding traditional reading treatments.
The evidence, appearing in the Cell Press journal Current Biology on February 28, follows from earlier work by the same team linking dyslexia to early problems with visual attention rather than language skills.
"Action video games enhance many aspects of visual attention, mainly improving the extraction of information from the environment," said Andrea Facoetti of the University of Padua and the Scientific Institute Medea of Bosisio Parini in Italy. "Dyslexic children learned to orient and focus their attention more efficiently to extract the relevant information of a written word more rapidly."
The findings come as further support for the notion that visual attention deficits are at the root of dyslexia, a condition that makes reading extremely difficult for one out of every ten children, Facoetti added. He emphasized that there is, as of now, no approved treatment for dyslexia that includes video games.
Facoetti’s team, including Sandro Franceschini, Simone Gori, Milena Ruffino, Simona Viola, and Massimo Molteni, tested the reading, phonological, and attentional skills of two groups of children with dyslexia before and after they played action or non-action video games for nine 80-minute sessions. The action video gamers were able to read faster without losing accuracy. They also showed gains in other tests of attention.
"These results are very important in order to understand the brain mechanisms underlying dyslexia, but they don’t put us in a position to recommend playing video games without any control or supervision," Facoetti said.
Still, there is great hope for early interventions that could be applied in low-resource settings. “Our study paves the way for new remediation programs, based on scientific results, that can reduce the dyslexia symptoms and even prevent dyslexia when applied to children at risk for dyslexia before they learn to read.”
And, guess what? Those kids will also be having fun.

Eye movements reveal impaired reading in schizophrenia
A study of eye movements in schizophrenia patients provides new evidence of impaired reading fluency in individuals with the mental illness.
The findings, by researchers at McGill University in Montreal, could open avenues to earlier detection and intervention for people with the illness.
While schizophrenia patients are known to have abnormalities in language and in eye movements, until recently reading ability was believed to be unaffected. That is because most previous studies examined reading in schizophrenia using single-word reading tests, the McGill researchers conclude. Such tests are not sensitive to problems in reading fluency, which depends on the context in which words appear and on the eye movements that shift attention from one word to the next.
The McGill study, led by Ph.D. candidate Veronica Whitford and psychology professors Debra Titone and Gillian A. O’Driscoll, monitored how people move their eyes as they read simple sentences. The results, which were first published online last year, appear in the February issue of the Journal of Experimental Psychology: General.
Eye movement measures provide clear and objective indicators of how hard people are working as they read. For example, when struggling with a difficult sentence, people generally make smaller eye movements, spend more time looking at each word, and spend more time re-reading words. They also have more difficulty attending to upcoming words, so they plan their eye movements less efficiently.
The McGill study, which involved 20 schizophrenia outpatients and 16 non-psychiatric participants, showed that reading patterns in people with schizophrenia differed in several important ways from healthy participants matched for gender, age, and family social status. People with schizophrenia read more slowly, generated smaller eye movements, spent more time processing individual words, and spent more time re-reading. In addition, people with schizophrenia were less efficient at processing upcoming words to facilitate reading.
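The eye-movement measures described above (time per word, re-reading) can be computed from a simple fixation log. This is a minimal sketch on an invented log for one sentence, not the McGill group's analysis code; the word indices and durations are hypothetical.

```python
# Toy fixation log for one sentence: (word_index, duration_ms).
# A regression shows up as the eye jumping back to an earlier word
# than the furthest word read so far. All numbers are invented.
fixations = [(0, 210), (1, 180), (2, 250), (1, 160), (2, 140),
             (3, 220), (4, 230)]

# Mean fixation duration: longer in readers who are working harder.
durations = [d for _, d in fixations]
mean_fixation = sum(durations) / len(durations)

# Count regressive saccades (re-reading earlier words).
regressions = 0
furthest = -1
for word, _ in fixations:
    if word < furthest:
        regressions += 1
    furthest = max(furthest, word)

print(mean_fixation)
print(regressions)
```

Comparing such summaries between groups, as the McGill team did, turns raw gaze recordings into objective indicators of reading effort.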
The researchers evaluated factors that could contribute to the problems in reading fluency among the schizophrenia outpatients – specifically, their ability to parse words into sound components and their ability to skillfully control eye movements in non-reading contexts. Both factors were found to contribute to the reading deficits.

Yale researchers spot attention deficits in babies who later develop autism
Researchers at Yale School of Medicine are able to detect deficits in social attention in infants as young as six months of age who later develop Autism Spectrum Disorders (ASD). Published in the current issue of Biological Psychiatry, the results showed that these infants paid less attention to people and their activities than typically developing babies.
Katarzyna Chawarska, associate professor at the Yale Child Study Center, and her colleagues investigated whether six-month-old infants later diagnosed with ASD showed prodromal symptoms — early signs of ASD such as an impaired ability to attend to social overtures and activities of others. Before this study, it had not been clear whether these prodromal symptoms were present in the first year of life.
“This study highlights the possibility of identifying certain features linked to visual attention that can be used for pinpointing infants at greatest risk for ASD in the first year of life,” said Chawarska. “This could make earlier interventions and treatments possible.”
How we manage to attend to multiple objects without being distracted by irrelevant information

The “tiki-taka” style of the Spanish national football team is amazing to watch: Xavi passes to Andrés Iniesta, who takes just one touch and the ball is right at Xabi Alonso’s foot. The Spanish midfielders cross the field as if they run on rails, always maintaining attention on the ball and their teammates, the opponents chasing after them without a chance. An international team of scientists from the German Primate Center and McGill University in Canada, including Stefan Treue, head of the Cognitive Neuroscience Laboratory, has now uncovered how the brain makes such excellence possible by dividing visual attention: the brain is capable of splitting its ‘attentional spotlight’ to enhance the processing of multiple visual objects. (Neuron, doi: 10.1016/j.neuron.2011.10.013)
When we pay attention to an object, the neurons responsible for that location in our field of view are more active than when they process unattended objects. But quite often we want to pay attention to multiple objects in different spatial positions, with irrelevant objects interspersed among them. Different theories have been proposed to account for this ability. One is that the attentional focus is split spatially, excluding the objects that lie between the attentional spotlights. Another possibility is that the attentional focus is zoomed out to cover all relevant objects, but then it also includes the interspersed irrelevant ones. A third possibility is a single focus that rapidly switches between the attended objects.
Studying rhesus macaques
In order to explain how such a complex ability is achieved, the neuroscientists measured the activity of individual neurons in brain areas involved in vision. They studied two rhesus macaques trained in a visual attention task. The monkeys had learned to pay attention to two relevant objects on a screen, with an irrelevant object between them. The experiment showed that the macaques’ neurons responded strongly to the two attended objects, with only a weak response to the irrelevant stimulus in the middle. So the brain is able to spatially split visual attention and ignore the areas in between. “Our results show the enormous adaptiveness of the brain, which enables us to deal effectively with many different situations. This multi-tasking allows us to simultaneously attend to multiple objects,” Stefan Treue says. Such a powerful attentional system is one precondition for humans to become perfect football artists, but also to navigate safely in everyday traffic.
(Source: alphagalileo.org)