Posts tagged visual perception

Your Brain Sees Things You Don’t
University of Arizona doctoral candidate Jay Sanguinetti has authored a new study, published online in the journal Psychological Science, indicating that the brain processes and understands visual input that we may never consciously perceive.
The finding challenges currently accepted models about how the brain processes visual information.
A doctoral candidate in the UA’s Department of Psychology in the College of Science, Sanguinetti showed study participants a series of black silhouettes, some of which contained meaningful, real-world objects hidden in the white spaces on the outsides.
Sanguinetti worked with his adviser Mary Peterson, a professor of psychology and director of the UA’s Cognitive Science Program, and with John Allen, a UA Distinguished Professor of psychology, cognitive science and neuroscience, to monitor subjects’ brainwaves with an electroencephalogram, or EEG, while they viewed the objects.
"We were asking the question of whether the brain was processing the meaning of the objects that are on the outside of these silhouettes," Sanguinetti said. "The specific question was, ‘Does the brain process those hidden shapes to the level of meaning, even when the subject doesn’t consciously see them?’"
The answer, Sanguinetti’s data indicates, is yes.
Study participants’ brainwaves indicated that even if a person never consciously recognized the shapes on the outside of the image, their brains still processed those shapes to the level of understanding their meaning.
"There’s a brain signature for meaningful processing," Sanguinetti said. A peak in the averaged brainwaves called N400 indicates that the brain has recognized an object and associated it with a particular meaning.
"It happens about 400 milliseconds after the image is shown, less than half a second," said Peterson. "As one looks at brainwaves, they’re undulating above a baseline axis and below that axis. The negative ones below the axis are called N and positive ones above the axis are called P, so N400 means it’s a negative waveform that happens approximately 400 milliseconds after the image is shown."
The presence of the N400 peak indicates that subjects’ brains recognize the meaning of the shapes on the outside of the figure.
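The averaging logic Peterson describes can be sketched in a few lines. The sketch below is purely illustrative, not the study's actual analysis pipeline: the sampling rate, measurement window, and simulated "brainwaves" are all assumptions made for the example.

```python
import numpy as np

FS = 500        # samples per second (assumed)
EPOCH_MS = 800  # epoch length after stimulus onset, in ms (assumed)

def n400_amplitude(epochs):
    """Average single-trial EEG epochs into an ERP and return the mean
    amplitude in a 350-450 ms window. A reliably negative value there
    is the kind of deflection labelled 'N400'."""
    erp = epochs.mean(axis=0)             # average across trials
    t = np.arange(erp.size) / FS * 1000   # time axis in milliseconds
    window = (t >= 350) & (t <= 450)
    return erp[window].mean()

# Toy demonstration: noisy trials containing a negative bump near 400 ms.
rng = np.random.default_rng(0)
t = np.arange(int(FS * EPOCH_MS / 1000)) / FS * 1000
bump = -4.0 * np.exp(-((t - 400) ** 2) / (2 * 40 ** 2))   # microvolts
trials = bump + rng.normal(0, 5, size=(200, t.size))
amplitude = n400_amplitude(trials)   # clearly negative for these trials
```

Averaging many trials is what makes the component visible at all: the random noise cancels while the stimulus-locked deflection survives.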
"The participants in our experiments don’t see those shapes on the outside; nonetheless, the brain signature tells us that they have processed the meaning of those shapes," said Peterson. "But the brain rejects them as interpretations, and if it rejects the shapes from conscious perception, then you won’t have any awareness of them."
"We also have novel silhouettes as experimental controls," Sanguinetti said. "These are novel black shapes in the middle and nothing meaningful on the outside."
The N400 waveform does not appear on the EEG of subjects when they are seeing truly novel silhouettes, without images of any real-world objects, indicating that the brain does not recognize a meaningful object in the image.
"This is huge," Peterson said. "We have neural evidence that the brain is processing the shape and meaning of the hidden images in the silhouettes we showed to participants in our study."
The finding leads to the question of why the brain would process the meaning of a shape when a person is ultimately not going to perceive it, Sanguinetti said.
"The traditional opinion in vision research is that this would be wasteful in terms of resources," he explained. "If you’re not going to ultimately see the object on the outside, why would the brain waste all these processing resources and process that image up to the level of meaning?"
"Many, many theorists assume that because it takes a lot of energy for brain processing, the brain is only going to spend time processing what you’re ultimately going to perceive," added Peterson. "But in fact the brain is deciding what you’re going to perceive, and it’s processing all of the information and then it’s determining what’s the best interpretation."
"This is a window into what the brain is doing all the time," Peterson said. "It’s always sifting through a variety of possibilities and finding the best interpretation for what’s out there. And the best interpretation may vary with the situation."
Our brains may have evolved to sift through the barrage of visual input in our eyes and identify those things that are most important for us to consciously perceive, such as a threat or resources such as food, Peterson suggested.
In the future, Peterson and Sanguinetti plan to look for the specific regions in the brain where the processing of meaning occurs.
"We’re trying to look at exactly what brain regions are involved," said Peterson. "The EEG tells us this processing is happening and it tells us when it’s happening, but it doesn’t tell us where it’s occurring in the brain."
"We want to look inside the brain to understand where and how this meaning is processed," said Peterson.
Images were shown to Sanguinetti’s study participants for only 170 milliseconds, yet their brains were able to complete the complex processes necessary to interpret the meaning of the hidden objects.
"There are a lot of processes that happen in the brain to help us interpret all the complexity that hits our eyeballs," Sanguinetti said. "The brain is able to process and interpret this information very quickly."
Sanguinetti’s study indicates that in our everyday life, as we walk down the street, for example, our brains may recognize many meaningful objects in the visual scene, but ultimately we are aware of only a handful of those objects.
The brain is working to provide us with the best, most useful possible interpretation of the visual world, Sanguinetti said, an interpretation that does not necessarily include all the information in the visual input.

The Visual Brain Colors Black and White Images
The perception and processing of color has fascinated neuroscientists for a long time, as our brain influences our perception of it to such a degree that colors could be called an illusion. One mystery was: What happens in the brain when we look at black-and-white photographs? Do our brains fill in the colors?
Neuroscientists Michael Bannert and Andreas Bartels of the Bernstein Center and the Werner Reichardt Centre for Integrative Neuroscience in Tübingen addressed these questions. In their work, published in the leading scientific journal Current Biology, they showed study participants black-and-white photos of bananas, broccoli, strawberries, and of other objects associated with a typical color (yellow, green and red, respectively, in these examples). While doing so, they recorded their subjects’ brain activity using functional imaging. The true purpose of the study was unknown to the subjects, and to distract their attention they were shown slowly rotating objects and told to report the direction in which they were moving.
After recording brain responses to the black and white objects, the scientists presented real colors to their subjects, in the shape of yellow, green, red and blue rings. This allowed them to record the activity of the brain as it responded to different, real colors.
It turned out that the mere sight of black-and-white photos automatically elicited brain activity patterns that specifically encoded colors. These activity patterns corresponded to those that were elicited when the observers viewed real color stimuli. These patterns encoded the typical color of the respective object seen, even though it was presented in black and white. The typical colors of the presented objects could therefore be determined from the brain’s activity, even though they were shown without color.
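The decoding step described above can be illustrated schematically. The sketch below is a stand-in for the study's actual multivariate analysis, using a simple nearest-centroid correlation decoder on simulated voxel patterns: learn each color's pattern from the real-color rings, then classify the pattern evoked by a grey-scale object photo. All data, pattern sizes, and the decoder itself are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
colors = ["yellow", "red", "green", "blue"]
n_voxels = 50  # assumed size of the V1 region of interest

# Assumed: each real color ring evokes a distinct voxel pattern,
# learned during the "real color" phase of the experiment.
templates = {c: rng.normal(0, 1, n_voxels) for c in colors}

def decode(pattern, templates):
    """Return the color whose learned template correlates best with
    the observed activity pattern."""
    return max(templates,
               key=lambda c: np.corrcoef(pattern, templates[c])[0, 1])

# Simulate the key finding: a grey-scale banana photo evokes a noisy
# version of the 'yellow' pattern, so the decoder recovers 'yellow'
# even though no color was on the screen.
banana_pattern = templates["yellow"] + rng.normal(0, 0.5, n_voxels)
predicted = decode(banana_pattern, templates)
```

The logic of the study is exactly this cross-decoding: train on responses to real colors, test on responses to colorless objects, and see whether the typical color can be read out.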
“It was particularly interesting that the colors of the objects were only encoded in the primary visual cortex,” says Michael Bannert. The primary visual cortex is one of the first places a visual signal arrives in the brain. Scientists had assumed it simply passed on information about the physical properties of things seen, but was not able to recognize objects or to store color knowledge associated with objects. “This result shows that higher-level prior knowledge – in this case of object-colors – is projected onto the earliest stages of visual processing,” according to Andreas Bartels.
This study represents a significant contribution to answering the question of how prior knowledge contributes to perception on a neuronal basis. The projection of prior knowledge onto the earliest processing stages of the visual brain may facilitate the recognition of objects in difficult and noisy environments, such as in fog, and be relevant for colors in changing light conditions over the course of the day, when the weather is overcast, when we are indoors and so on. On the other hand, if prior knowledge or expectations have too much influence on early visual processing stages, this may account for hallucinations and the pathological perception of illusions.
An innovative series of experiments could help to unlock the mysteries of how the brain makes sense of the hustle and bustle of human activity we see around us every day.

Very little is known about the psychological processes which enable us to pick out a potential mugger from a busy street or to spot an old friend approaching us across a crowded room. Such judgements of social intention, which we make countless times each day, enable us to respond in appropriate ways to the dynamic and complex world around us.
George Mather, Professor of Vision Science at the University of Lincoln, UK, and one of the world’s foremost experts on human visual perception, will lead a new research project investigating the mechanisms behind this crucial ability to perceive and interpret the intentions of other people from the way they move.
Numerous experiments have explored the way we use visual signals to extract meaning from our environment, but most have been based on static images, such as photos of different facial expressions.
Other studies into the perception of moving images have relied on very simple animated scenes, like moving patterns of regularly spaced lines or random dots, devoid of the richness and nuances of scenes from the ‘real world’.
There remains limited scientific understanding of how the human visual system makes sense of the flurry of movement we see around us in modern societies: for example, whether a person approaching us is sprinting or strolling, whether that means they are angry or calm, and how we should react in response.
Professor Mather aims to bridge this gap in the academic literature through a series of world-first experiments. He has been awarded a grant of £287,000 by the UK’s Economic & Social Research Council (ESRC) for a three-year study. The aim is to shed new light on the process by which the human visual system identifies and decodes ‘dynamic cues of social intention’.
Professor Mather said: “It’s true that actions speak louder than words. Perception of movement is fundamental to many of our everyday social interactions. But simply judging speed is in itself a very complex task. When you see somebody walking across your field of view, how do you know how fast they are going? That information can be very useful because it might tell you something about their intentions but it’s surprisingly difficult to make an accurate judgement. A basic problem is that the further away a moving object is, the slower it moves in the image received by the eye. We don’t really understand at the moment how the human visual system is able to compensate for different viewing conditions.”
Motion perception has been a consistent theme of Professor Mather’s research career. In previous studies he has shown that the brain can deduce socially meaningful information from very simple depictions of human movement, such as collections of dots denoting the major joints of the body.
The research in this latest project will answer fundamental questions about how the brain combines ‘low-level’ information about image motion with ‘high level’ knowledge of the social world to make meaningful assessments of the speed and nature of human movements.
(Source: lincoln.ac.uk)
Vision and Hearing Work Together in the Brain to Help Us Catch a Moving Target
A new study has found that chasing down a moving object is not only a matter of sight or of sound, but of mind.
The study found that people who are blindfolded employ the same strategy to intercept a running ball carrier as people who can see, which suggests that multiple areas of the brain cooperate to accomplish the task.
Regardless of whether they could see or not, the study participants seemed to aim ahead of the ball carrier’s trajectory and then run to the spot where they expected him or her to be in the near future. Researchers call this a “constant target-heading angle” strategy, similar to strategies used by dogs catching Frisbees and baseball players catching fly balls.
It’s also the best way to catch an object that is trying to evade capture, explained Dennis Shaffer, assistant professor of psychology at The Ohio State University at Mansfield.
“The constant-angle strategy geometrically guarantees that you’ll reach your target, if your speed and the target’s speed stay constant, and you’re both moving in a straight line. It also gives you leeway to adjust if the target abruptly changes direction to evade you,” Shaffer said.
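The geometry Shaffer describes can be simulated in a few lines. The toy sketch below is not the study's model; the speeds, positions, and time step are illustrative assumptions. The chaser picks a velocity that cancels the target's motion perpendicular to the line of sight, so the bearing to the target never rotates, and spends its remaining speed closing the range. Interception follows whenever both move in straight lines and the chaser's speed can cover the target's cross-line-of-sight component.

```python
import numpy as np

def constant_angle_velocity(chaser, target, target_vel, chaser_speed):
    """Chaser velocity that keeps the target at a constant heading
    angle: match the target's cross-line-of-sight speed, then put the
    remaining speed into closing the distance."""
    los = target - chaser
    u = los / np.linalg.norm(los)        # unit line-of-sight vector
    perp = np.array([-u[1], u[0]])       # perpendicular to the LOS
    v_perp = target_vel @ perp           # target's cross-LOS speed
    v_along = np.sqrt(max(chaser_speed**2 - v_perp**2, 0.0))
    return v_along * u + v_perp * perp

# Target crosses the field at 3 m/s; the slightly faster chaser (4 m/s)
# starts 10 m away and, by construction, aims ahead of it rather than
# straight at it.
target = np.array([0.0, 10.0])
target_vel = np.array([3.0, 0.0])
chaser = np.array([0.0, 0.0])
dt, caught = 0.01, False
for _ in range(2000):
    chaser = chaser + constant_angle_velocity(chaser, target,
                                              target_vel, 4.0) * dt
    target = target + target_vel * dt
    if np.linalg.norm(target - chaser) < 0.1:   # within reach
        caught = True
        break
```

Because the bearing stays fixed while the range shrinks, the same code also copes with a target that cuts left or right: the bearing simply locks onto a new constant value after the turn.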
“The fact that people run after targets at a constant angle regardless of whether they can see or not suggests that there are brain mechanisms in place that we would call ‘polymodal’—areas of the brain that serve more than one form of sensory modality. Sight and hearing may be different senses, but within the brain the results of the sensory input for this task may be the same.”
The study appears in the journal Psychonomic Bulletin and Review.
Nine people participated in the study—mainly students at Ohio State and Arizona State University, where the study took place. Some had experience playing football, either at a high school or collegiate intramural level, while others had limited or no experience with football.
The nine participants donned motion-capture equipment and took turns in pairs, one running a football across a 20-meter field (nearly 22 yards) and one chasing. The researchers randomly assigned participants to sighted and blindfolded conditions. In the blindfolded condition, participants wore a sleep mask and the runner carried a foam football with a beeping device inside, so that the chaser had a chance to locate them by sound. The runners ran in the general direction of the chasers at different angles, and sometimes the runner would cut right or left halfway through the run.
The study was designed so that the pursuer wouldn’t have time to consciously think about how to catch the runner.
“We were just focused on trying to touch the runner as soon as possible and before they exited the field,” Shaffer said. “The idea was to have the strategy emerge by instinct.”
About 97 percent of the time, the person doing the chasing used the constant-angle strategy—even when they were blindfolded and only able to hear the beeping football.
The results were surprising, even to Shaffer.
“I knew that this seemed to be a universal strategy across species, but I expected that people’s strategies would vary more when they were blindfolded, just because we aren’t used to running around blindfolded. I didn’t expect that the blindfolded strategies would so closely match the sighted ones.”
The findings suggest that there’s some common area in the brain that processes sight and sound together when we’re chasing something.
There is another strategy for catching moving targets. Researchers call it the pursuit or aiming strategy, because it involves speeding directly at the target’s current location. It’s how apex predators such as sharks catch prey.
“As long as you are much faster than your prey, the pursuit strategy is great. You just overtake them,” Shaffer said.
In a situation where the competition is more equal, the constant-angle strategy works better—the pursuer doesn’t have to be faster than the target, and if the target switches direction, the pursuer has time to adjust.
The study builds on Shaffer’s previous work with how collegiate-level football players chase ball carriers. He’s also studied how people catch baseballs and dogs catch Frisbees. All appear to use strategies similar to the constant target-heading angle strategy, which suggests that a common neural mechanism could be at work.
(Source: researchnews.osu.edu)
People can plan strategic movements to several different targets at the same time, even when they see far fewer targets than are actually present, according to a new study published in Psychological Science, a journal of the Association for Psychological Science.

A team of researchers at the Brain and Mind Institute at the University of Western Ontario took advantage of a pictorial illusion — known as the “connectedness illusion” — that causes people to underestimate the number of targets they see.
When people act on these targets, however, they can rapidly plan accurate and strategic reaches that reflect the actual number of targets.
Using sophisticated statistical techniques to analyze participants’ responses to multiple potential targets, the researchers found that participants’ reaches to the targets were unaffected by the presence of the connecting lines.
Thus, the “connectedness illusion” seemed to influence the number of targets they perceived but did not impact their ability to plan actions related to the targets.
These findings indicate that the processes in the brain that plan visually guided actions are distinct from those that allow us to perceive the world.
“The design of the experiments allowed us to separate these two processes, even though they normally unfold at the same time,” explained lead researcher Jennifer Milne, a PhD student at the University of Western Ontario.
“It’s as though we have a semi-autonomous robot in our brain that plans and executes actions on our behalf with only the broadest of instructions from us!”
According to Mel Goodale, professor at the University of Western Ontario and senior author on the paper, these findings “not only reveal just how sophisticated the visuomotor systems in the brain are, but could also have important implications for the design and implementation of robotic systems and efficient human-machine interfaces.”
"I’ve been in a crowded elevator with mirrors all around, and a woman will move and I’ll go to get out the way and then realise: ‘oh that woman is me’."
Heather Sellers has prosopagnosia, more commonly known as face blindness. “I can’t remember any image of the human face. It’s simply not special to me,” she says. “I don’t process them like I do a car or a dog. It’s not a visual problem, it’s a perception problem.”

Heather knew from a young age that something was different about the way she navigated her world, but her condition wasn’t diagnosed until she was in her 30s. “I always knew something was wrong – it was impossible for me to trust my perceptions of the world. I was diagnosed as anxious. My parents thought I was crazy.”
The condition is estimated to affect around 2.5 per cent of the population, and it’s common for those who have it not to realise that anything is wrong. “In many ways it’s a subtle disorder,” says Heather. “It’s easy for your brain to compensate because there are so many other things you can use to identify a person: hair colour, gait or certain clothes. But meet that person out of context and it’s socially devastating.”
As a child, she was once separated from her mum at a grocery store. Store staff reunited the pair, but it was confusing for Heather, since she didn’t initially recognise her mother. “But I didn’t know that I wasn’t recognising her.”
Chaos explained
Heather was 36 when she stumbled across the phrase face blindness in a psychology textbook. “When I saw those two words I knew instantly that was exactly what I had – that explained all the chaos.”
She found her way to Harvard neuroscientist Brad Duchaine who diagnosed her as having one of the three worst cases of the disorder that he had ever seen.
So what’s it like to not recognise anyone you know? Heather says the biggest difficulty with the disorder is recognising people who she is close to – the people that are most important to recognise. In the school where she teaches English she is fine, because she recognises people by their clothes or hair and asks her students to wear name badges.
But it can be harder in social settings. Once she went up to the wrong person at a party and put her arm around him thinking he was her partner. And at college men would phone her angry that she had walked straight past them after they had had a date. “At the time I was thinking ‘I didn’t see you, why is everyone making my life so difficult?’”
It’s not just other people Heather doesn’t recognise – she can’t identify her own face either. “A few times I have been in a crowded elevator with mirrors all around and a woman will move, and I will go to get out the way and then realise ‘oh that woman is me’.” She also finds it unsettling to see photos and not recognise herself in them.
Face processing
To try and understand the condition, Duchaine and his colleagues recorded brain activity while 12 people with prosopagnosia looked at famous and non-famous faces. The team found that part of the brain responsible for stored visual memory was activated in six people when they saw the famous faces.
But another component of brain activity thought to represent a later stage of face processing wasn’t triggered. “Some part of their brain was recognising the face,” says Duchaine, but the brain was failing to pass this information into higher-level consciousness (Brain).
"There may be training where we give people feedback and say ‘look you recognise that face even though you’re not aware of it’," says Duchaine.
Now Zaira Cattaneo at the University of Milano-Bicocca in Italy and colleagues have identified the specific brain areas that allow us to recognise our friends. The team used transcranial magnetic stimulation to block two vital aspects of face processing in people without prosopagnosia. Targeting the left prefrontal cortex blocked the ability to distinguish individual features like the nose and eyes, and blocking the right prefrontal cortex impaired the ability to distinguish the location of those features from one another (NeuroImage).
"We made performance worse," says Cattaneo. "We want to make it better." Now the team are trying to activate these areas of the brain. "The aim is to enhance face recognition abilities by directly modulating excitability in the prefrontal cortices," says Cattaneo.
Would Heather want a cure, should one be found? “I can’t imagine what you see when you see a face, and it’s scary,” she says. “I go back and forth on what I’d do. I’ve done so much work in figuring out how to chart my world, I’d need to do a whole new rewrite. But it would be fascinating.”
Reward linked to image is enough to activate brain’s visual cortex
Once rhesus monkeys learn to associate a picture with a reward, the reward by itself becomes enough to alter the activity in the monkeys’ visual cortex. This finding was made by neurophysiologists Wim Vanduffel and John Arsenault (KU Leuven and Harvard Medical School) and American colleagues using functional brain scans and was published recently in the leading journal Neuron.
Our visual perception is not determined solely by retinal activity. Other factors also influence the processing of visual signals in the brain. “Selective attention is one such factor,” says Professor Wim Vanduffel. “The more attention you pay to a stimulus, the better your visual perception is and the more effective your visual cortex is at processing that stimulus. Another factor is the reward value of a stimulus: when a visual signal becomes associated with a reward, it affects our processing of that visual signal. In this study, we wanted to investigate how a reward influences activity in the visual cortex.”
Pavlov inverted
To do this, the researchers used a variant of Pavlov’s well-known conditioning experiment: “Think of Pavlov giving a dog a treat after ringing a bell. The bell is the stimulus and the food is the reward. Eventually the dogs learned to associate the bell with the food and salivated at the sound of the bell alone. Essentially, Pavlov removed the reward but kept the stimulus. In this study, we removed the stimulus but kept the reward.”
In the study, the rhesus monkeys first encountered images projected on a screen followed by a juice reward (classical conditioning). Later, the monkeys received juice rewards while viewing a blank screen. fMRI brain scans taken during this experiment showed that the visual cortex of the monkeys was activated by being rewarded in the absence of any image.
Importantly, these activations were not spread throughout the whole visual system but were instead confined to the specific brain regions responsible for processing the exact stimulus used earlier during conditioning. This result shows that information about rewards is being sent to the visual cortex to indicate which stimuli have been associated with rewards.
Equally surprising, these reward-only trials were found to strengthen the cue-reward associations. This is more or less the equivalent of giving Pavlov’s dog an extra treat after a conditioning session and noticing the next day that he salivates twice as much as before. More generally, this result suggests that rewards can be associated with stimuli over longer time scales than previously thought.
Dopamine
Why does the visual cortex react selectively in the absence of a visual stimulus on the retina? One potential explanation is dopamine. “Dopamine is a signalling chemical (neurotransmitter) in nerve cells and plays an important role in processing rewards, motivation, and motor functions. Dopamine’s role in reward signalling is the reason some Parkinson’s patients fall into gambling addiction after taking dopamine-increasing drugs. Aware of dopamine’s role in reward, we re-ran our experiments after giving the monkeys a small dose of a drug that blocks dopamine signalling. We found that the activations in the visual cortex were reduced by the dopamine blocker. What’s likely happening here is that a reward signal is being sent to the visual cortex via dopamine,” says Professor Vanduffel.
The study used fMRI (functional Magnetic Resonance Imaging) scans to visualise brain activity. fMRI scans map functional activity in the brain by detecting changes in blood flow. The oxygen content and the amount of blood in a given brain area vary according to the brain activity associated with a given task. In this way, task-specific activity can be tracked.
When most of us admire a piece of art, it triggers a cascade of complex neural activity: a wash of emotion and meaning that fills our brains and prompts deep thought. But does that happen for people with neurological conditions, too?
Forthcoming Oxford-based exhibition Affecting Perception seeks to explore that very question, through a combination of art, seminars and school workshops. Organised by Martha Crawford, Cosima Gretton and Rachel Stratton, who together form the AXNS collective, the exhibition aims to understand how artists and their work are affected by neurological conditions.
The team is working with the University’s Department of Experimental Psychology and artists who suffer from conditions ranging from dementia to brain damage, in order to help the public understand how art and neuroscience are intertwined. “We’re trying to engage the community with the kind of learning usually kept in the University,” explains Martha Crawford.
Helping them achieve that are Prof. Glyn Humphreys and Prof. Charles Spence, both from the University’s Department of Experimental Psychology. Individually, they’ll be leading seminars during the exhibition which explore the overlap between academia and art. “There’s a coarse level of understanding of neuropsychology outside of academia, which means people are sometimes scared of neurological conditions,” explains Professor Glyn Humphreys. “I think anything we can do to raise awareness has to be a good thing.”
During the course of the four-week exhibition, Prof. Humphreys will talk about visual agnosia: a condition where patients can’t associate visual stimuli with meaning. It’s a rare condition, but it’s of interest to artists and scientists alike. Separating meaning and aesthetic is a trick used by artists to explore the two more thoughtfully; Humphreys’ patients still have little choice but to face the world that way.
Elsewhere, Prof. Spence will talk about subtle forms of synesthesia, called cross-modal correspondences, which affect us all. Synesthesia is that odd condition where stimulating one sense leads to automatic experiences in a second; cross-modal correspondences are more subtle, like the way red stars make many of us think of bitter flavours. Plenty of famous creatives have used the phenomenon to great effect — and during his talk, Spence will explain how it can help amplify our enjoyment of art.
There’s no denying that these are weighty subjects indeed. But by understanding them just a little better we can gain a better grasp of the neurological conditions that many suffer — and break down the stigma attached to them, too.
Affecting Perception runs from 4th-31st March 2013 at venues across Oxford. Admission is free. For more information, visit http://axnscollective.org.
More Than Just Looking – A Role of Tiny Eye Movements Explained
Tübingen researcher learns how the brain keeps an eye on the periphery even when focusing on one object.
Have you ever wondered whether it’s possible to look at two places at once? Because our eyes have a specialized central region with high visual acuity and good color vision, we must always focus on one spot at a time in order to see our environment. As a result, our eyes constantly jump back and forth as we look around.
But what if – when you are looking at an object – your brain also allowed you to “look” somewhere else at the same time, out of the corner of your eye, as it were? Now, a scientist at the Werner Reichardt Centre for Integrative Neuroscience (CIN), which is funded by the German Excellence initiative at Tübingen University, has found a possible explanation for how this might happen.
Ziad Hafed, the leader of the Physiology of Active Vision Junior Research Group at CIN, wondered about the role of a type of tiny eye movement that occurs when we fix our gaze on something, called a microsaccade. “Microsaccades are sort of enigmatic,” Hafed says. They are movements of the eye which occur at exactly the moment when we are trying to look at something steadily – i.e., when we are trying to prevent our eyes from moving.
It was long thought that microsaccades were nothing but random, inconsequential tics, but Hafed wondered whether the mere unconscious preparation to generate these tiny eye movements can alter visual perception and effectively allow you to “see” out of the corner of your eye. He found that before generating a microsaccade, the brain reorganizes its visual processing to alter how you perceive things. “Imagine that you are the coach of a football team,” Hafed says. “You would normally ask your defenders to spread out across the field in order to provide good coverage during match play. However, in preparation for an upcoming corner kick by your opposing team, you would reorganize your defenders, assigning two of them to become temporary goalkeepers and protect the goal. What I found was evidence for a similar strategy in the visual brain before microsaccades,” says Hafed. That is, in preparation for generating a tiny microscopic eye movement, the brain – the “coach” – causes a subtle reorganization of the visual system, and thus alters how you might see out of the corner of your eyes (see diagram).
In a series of experiments on human participants, coupled with computational modeling of the human visual system, Hafed asked participants to fix their attention on a spot that appeared on a screen in front of them while he carefully measured their microsaccades. He then probed the participants’ ability to look at two places at once by testing their peripheral vision. He found that in preparation to generate a microsaccade, participants showed remarkable changes in their ability to process visual inputs: in the periphery, the impending eye movements effectively improved the capacity to direct visual input – from around where gaze is fixed – towards the brain. Hafed’s results, described in the journal Neuron, thus demonstrate an important functional role for these tiny, “enigmatic” eye movements in helping us perceive our environment.
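In experiments like these, microsaccades are typically extracted from eye-tracker traces with a velocity-threshold algorithm in the spirit of Engbert and Kliegl’s widely used method. The Python sketch below is a hypothetical illustration of that general idea, not Hafed’s actual analysis code; the sampling rate, the threshold multiplier `lam`, and the minimum event duration are assumed parameters.

```python
import math

def detect_microsaccades(x, y, rate_hz=1000.0, lam=6.0, min_samples=3):
    """Return (start, end) sample indices of candidate microsaccades.

    x, y: gaze position traces in degrees of visual angle.
    A sample counts as "fast" when its velocity exceeds a robust,
    median-based noise estimate by the factor lam (elliptic test).
    """
    # Sample-to-sample velocity in degrees per second.
    vx = [(b - a) * rate_hz for a, b in zip(x, x[1:])]
    vy = [(b - a) * rate_hz for a, b in zip(y, y[1:])]

    def med(v):
        s = sorted(v)
        n = len(s)
        return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

    # Median-based estimate of velocity noise per axis (floored to avoid /0).
    sx = math.sqrt(max(med([u * u for u in vx]) - med(vx) ** 2, 1e-12))
    sy = math.sqrt(max(med([u * u for u in vy]) - med(vy) ** 2, 1e-12))

    fast = [(u / (lam * sx)) ** 2 + (w / (lam * sy)) ** 2 > 1.0
            for u, w in zip(vx, vy)]

    # Group consecutive fast samples into events of minimum duration.
    events, start = [], None
    for i, f in enumerate(fast + [False]):
        if f and start is None:
            start = i
        elif not f and start is not None:
            if i - start >= min_samples:
                events.append((start, i))
            start = None
    return events
```

Given a trace that is steady except for one small, rapid gaze shift, the function returns a single candidate event spanning the shift.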
Hafed’s results not only help explain a previously puzzling phenomenon; they also have potentially wide-ranging applications, particularly for the design of computer and machine user interfaces. For example, using knowledge about the whole range of eye movements we constantly make, including microscopic ones, future “smart user interfaces” could ensure that things likely to attract our attention are not displayed where they would be distracting. Conversely, the same approach could help place something that should attract our attention – a warning light in a control room, for instance. As Hafed put it, “eye movements would essentially be a window on our minds.”
New research sheds light on how the brain encodes objects with multiple features, a fundamental task for the perceptual system. The study, published in Psychological Science, a journal of the Association for Psychological Science, suggests that we have limited ability to perceive mixed color-shape associations among objects that exist in several locations.
Research suggests that neurons that encode a certain feature — shape or color, for example — fire in synchrony with neurons that encode other features of the same object. Psychological scientists Liat Goldfarb of the University of Haifa and Anne Treisman of Princeton University hypothesized that if this neural-synchrony explanation were true, then synchrony would be impossible in situations in which the same features are paired differently in different objects.
Say, for example, a person sees a string of letters, “XOOX,” and the letters are printed in alternating colors, red and green. Both letter shape and letter color need to be encoded, but the associations between letter shape and letter color are mixed (i.e., the first X is red, while the second X is green), which should make neural synchrony impossible.
“The perceptual system can either know how many Xs there are or how many reds there are, but it cannot know both at the same time,” Goldfarb and Treisman explain.
The researchers investigated their hypothesis in two experiments, in which they presented participants with strings of green and red Xs and Os and asked them to compare the number of Xs with the number of red letters (i.e., more Xs, more reds, or the same).
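The logic of the task can be sketched in a few lines. The Python snippet below is a hypothetical illustration, not the authors’ experimental materials: it represents a display as a list of (shape, color) pairs, classifies whether the color-shape pairing is unique or mixed, and computes the correct comparison response participants had to produce.

```python
# Hypothetical sketch of the comparison task (not the authors' code).
# A display is a list of (shape, color) pairs.

def classify_display(display):
    """Return 'unique' if every shape maps to a single color, else 'mixed'."""
    pairing = {}
    for shape, color in display:
        if pairing.setdefault(shape, color) != color:
            return "mixed"
    return "unique"

def correct_answer(display):
    """The response participants must produce: compare counts."""
    n_x = sum(1 for shape, _ in display if shape == "X")
    n_red = sum(1 for _, color in display if color == "red")
    if n_x > n_red:
        return "more Xs"
    if n_red > n_x:
        return "more reds"
    return "same"

# "XOOX" in alternating red/green: mixed pairing (one X red, one X green).
mixed = [("X", "red"), ("O", "green"), ("O", "red"), ("X", "green")]
# Same letters, but each shape keeps one color: unique pairing.
unique = [("X", "red"), ("O", "green"), ("O", "green"), ("X", "red")]

print(classify_display(mixed), correct_answer(mixed))    # mixed same
print(classify_display(unique), correct_answer(unique))  # unique same
```

Note that both example displays demand the same response (“same”); on the synchrony account, only the unique display lets shape- and color-encoding neurons fire in register, which is what the reaction-time difference tests.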
Participants’ responses to unique color-shape associations were significantly faster and more accurate than were their responses to displays with mixed color-shape associations.
The results show that relevant color and shape dimensions could be synchronized when the pairings between color and shape were unique, but not when the pairings were mixed.
These findings demonstrate a new behavioral principle that governs object representation. When shapes are repeated in several locations and have mixed color-shape associations, they are hard to perceive.
This research expands on Anne Treisman’s groundbreaking research on feature integration in visual perception, which shows that humans can encode characteristics such as color, form, and orientation, even in the absence of spatial attention.
Treisman is one of 12 scientists who received the National Medal of Science at the White House on February 1, 2013. The National Medal of Science, along with the National Medal of Technology and Innovation, is the highest honor that the US government grants to scientists, engineers, and inventors.