Posts tagged visual search

Many of us have steeled ourselves for those ‘needle in a haystack’ tasks of finding our vehicle in an airport car park, or scouring the supermarket shelves for a favourite brand.

A new study suggests that our understanding of how the human brain prepares for visual search tasks of varying difficulty may need to be revised.
When people search for a specific object, they tend to hold in mind a visual representation of it, based on key attributes like shape, size or colour. Scientists call this ‘advanced specification’. For example, we might search for a friend at a busy railway station by scanning the platform for someone who is very tall or who is wearing a green coat, or a combination of these characteristics.
Researchers from the School of Psychology at the University of Lincoln, UK, set out to better explain how these abstract visual representations are formed. They used fMRI scanners to record neural activity when volunteers prepared to search for a target object: a coloured letter amid a screen of other coloured letters.
Their findings, published in the journal ‘Brain Research’, are the first to fully isolate the different areas of the human brain involved in this ‘prepare to search’ function. Surprisingly, they show that the frontal areas of the brain, usually key to advanced cognitive tasks, appear to take a back seat. Instead, it is the posterior visual areas and the sub-cortical areas that do the work.
Dr Patrick Bourke from the University of Lincoln’s School of Psychology, who led the study, said: “Up until now, when researchers have studied visual search tasks they have also found that frontal areas of the brain were active. This has been assumed to indicate a control system: an ‘executive’ that largely resides in the advanced front of the brain which sends signals to the simpler back of the brain, activating visual memories. Here, when we isolated the ‘prepare’ part of the task from the actual search and response phase we found that this activation in the front was no longer present.”
This finding has important implications for understanding the fundamental brain processes involved. It was previously thought that the intraparietal region of the brain, which is linked to visual attention, was the central component of the supposed ‘front-back’ control network, relaying useful information (such as a shape or colour bias) from frontal areas of the brain to the back, where simple visual representations of the object are held. If the frontal areas are not activated in the preparation phase, this cannot be the case.
The study also showed that the pattern of brain activation varied depending on the anticipated difficulty of the search task, even when the target object was the same. This indicates that rather than holding in mind a single representation of an object, a new target is constructed each time, depending on the nature of the task.
Dr Bourke added: “While consistent with previous brain imaging work on visual search, these results change the interpretations and assumptions that have been applied previously. Notably, they highlight a difference between studies of animals’ brains and those of humans. Studies with monkeys convincingly show the front-back control system and we thought we understood how this worked. At the same time our findings are consistent with a growing body of brain imaging work in humans that also shows no frontal brain activation when short term memories are held.”
(Source: lincoln.ac.uk)
UCSB Study Shows Where Scene Context Happens in our Brain
In a remote fishing community in Venezuela, a lone fisherman sits on a cliff overlooking the southern Caribbean Sea. This man –– the lookout –– is responsible for directing his comrades on the water, who are too close to their target to detect their next catch. Using abilities honed by years of scanning the water’s surface, he can tell by shadows, ripples, and even the behavior of seabirds, where the fish are schooling, and what kind of fish they might be, without actually seeing the fish. This, in turn, changes where the boats go, and how the men fish.
Though a seemingly simple and intuitive strategy, the lookout’s visual search function –– a process that takes mere seconds for the human brain –– is still something that a computer, despite technological advances, can’t do as accurately.
"Behind what seems to be automatic is a lot of sophisticated machinery in our brain," said Miguel Eckstein, professor in UC Santa Barbara’s Department of Psychological & Brain Sciences. "A great part of our brain is dedicated to vision."
Over the millennia of human evolution, our brains developed a pattern of search based largely on environmental cues and scene context. It’s an ability that has not only helped us find food and avoid danger in humankind’s earliest days, but continues to aid us today, in tasks as banal as driving to work, or shopping; or as specialized as reading X-rays.
Where this –– the search for objects using scene context and other objects –– occurs in the brain is little understood, and is discussed for the first time in the paper, “Neural Representations of Contextual Guidance in Visual Search of Real-World Scenes,” published recently in the Journal of Neuroscience.
The researchers flashed hundreds of images of indoor and outdoor scenes before observers, and instructed them to search for certain objects that were consistent with those scenes. Half of the images, however, did not contain the target object. During the trials, the subjects were asked to indicate whether the target object was present in the scene.
The researchers were particularly interested in the images that did not contain the target. A separate measure was taken to determine where subjects expected specific objects to be in target-absent scenes. Invariably, the subjects would indicate similar areas: if presented with a living room scene and told to look for a clock or a painting, they would indicate the wall; if shown a photo of a bathroom and told to indicate where to expect hand soap or a toothbrush, they would indicate the sink.
The searched object’s contextual location in the scenes, according to the study, is represented in the area called the lateral occipital complex (LOC), a place that corresponds roughly to the lower back portion of the head, toward the side. This area, according to Eckstein, has the ability to account for other objects in the scene that often appear in close spatial proximity with the searched object –– something computers are only recently being taught to do.
"So, if you’re looking for a computer mouse on a cluttered desk, a machine would be looking for things shaped like a mouse. It might find it, but it might see other objects of similar shape, and classify that as a mouse," Eckstein said. Computer vision systems might also not associate their target with specific locations or other objects. So, to a machine, the floor is just as likely a place for a mouse as a desk.
The LOC, on the other hand, would contain the information the brain needs to direct a person’s attention and gaze first toward the most likely place that a mouse might be, such as on top of the desk, or near the keyboard. From there, other visual parts of the brain go to work, searching for particular characteristics, or determining the target’s presence.
So strong is the scene context in biasing search, said Eckstein, that if another similar-looking object were placed in the location where the mouse is likely to be, and that scene were briefly flashed before your eyes, you would likely –– erroneously –– interpret that object as the mouse.
While scene context information has been found to be highly active in the LOC, other visual areas of the brain are also influenced by context to varying degrees, including the intraparietal sulcus, located near the top of the head, and the retrosplenial cortex, found in the brain’s interior.
"Since contextual guidance is a critical strategy that allows humans to rapidly find objects in scenes, studying the brain areas involved in normal humans might help us to gain a better understanding of neural areas involved in those with visual search deficits, such as brain-damaged patients and the elderly," Eckstein said. "Also, a large component of becoming an expert searcher –– like radiologists or fishermen –– is exploiting contextual relationships to search. Thus, understanding the neural basis of contextual guidance might allow us to gain a better understanding about what brain areas are critical to gain search expertise."