Neuroscience

Articles and news from the latest research reports.

Posts tagged visual perception

Don’t Underestimate Your Mind’s Eye

Take a look around, and what do you see? Much more than you think you do, thanks to your finely tuned mind’s eye, which processes images without your even knowing.

A University of Arizona study has found that objects in our visual field of which we are not consciously aware still may influence our decisions. The findings refute traditional ideas about visual perception and cognition, and they could shed light on why we sometimes make decisions — stepping into a street, choosing not to merge into a traffic lane — without really knowing why.

Laura Cacciamani, who recently earned her doctorate in psychology with a minor in neuroscience, has found supporting evidence. Cacciamani is the lead author of a study, published online in the journal Attention, Perception, & Psychophysics, showing that the brain’s subconscious processing has an impact on behavior and decision-making.

This seems to make evolutionary sense, Cacciamani said. Early humans would have required keen awareness of their surroundings on a subliminal level in order to survive.

"Your brain is always monitoring for meaning in the world, to be aware of your general surroundings and potential predators," Cacciamani said. "You can be focused on a task, but your brain is assessing the meaning of everything around you – even objects that you’re not consciously perceiving."

The study builds on the findings of earlier research by Jay Sanguinetti, who also was a doctoral candidate in the UA Department of Psychology. Both studies go against conventional wisdom among vision scientists.

"According to the traditional view, the brain accesses the meaning – or the memory – of an object after you perceive it," Cacciamani said. "Against this view, we have now shown that the meaning of an object can be accessed before conscious perception.

"We’re showing that there’s more interplay between memory and perception than previously has been assumed," she said.

Cacciamani asked participants in her study to classify nouns that appeared on a computer screen as naming a natural object or artificial object by pressing one of two buttons labeled “natural” or “artificial.” For example, the word “leaf” indicates an object found in nature, while “anchor” indicates a man-made or artificial object.

But before each word appeared on the screen, the computer flashed a black silhouette that – unknown to participants – had portions of natural or artificial objects suggested along the white outside regions (called the “ground” regions) of the image. Participants were not told to look for anything in the silhouettes, and they were flashed so quickly – 50 milliseconds – that it would have been difficult to notice the objects in the ground regions even if someone knew what to look for. Participants never were aware that the silhouette’s grounds suggested recognizable objects.

Cacciamani measured how well study participants performed at categorizing the words as natural or artificial by recording speed and accuracy.

"We found that participants performed better on the natural/artificial word task when that word followed a silhouette whose ground contained an object of the same rather than a different category," Cacciamani said.

This indicates that the brain accessed the meaning of the objects in the silhouette’s grounds even though study participants didn’t know the objects were there, she said.
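The congruency effect behind this finding can be sketched as a simple analysis of trial data. The following is a hypothetical illustration, not the study’s actual pipeline: the trial log, field names, and numbers are all invented for the example.

```python
from statistics import mean

# Hypothetical trial log: the category suggested in the silhouette's ground
# (the unconscious "prime"), the category of the word, the response time in
# milliseconds, and whether the classification was correct.
trials = [
    {"prime": "natural",    "word": "natural",    "rt": 520, "correct": True},
    {"prime": "natural",    "word": "artificial", "rt": 575, "correct": True},
    {"prime": "artificial", "word": "artificial", "rt": 530, "correct": True},
    {"prime": "artificial", "word": "natural",    "rt": 590, "correct": False},
    {"prime": "natural",    "word": "natural",    "rt": 505, "correct": True},
    {"prime": "artificial", "word": "natural",    "rt": 600, "correct": True},
]

def congruency_summary(trials):
    """Split trials by prime/word congruency; summarize speed and accuracy."""
    out = {}
    for label, match in (("congruent", True), ("incongruent", False)):
        subset = [t for t in trials if (t["prime"] == t["word"]) == match]
        out[label] = {
            "mean_rt_ms": mean(t["rt"] for t in subset),
            "accuracy": mean(1.0 if t["correct"] else 0.0 for t in subset),
        }
    return out

summary = congruency_summary(trials)
# A priming effect appears as faster (and often more accurate) responses on
# congruent trials; in this toy data the speed-up is about 70 ms.
priming_effect_ms = summary["incongruent"]["mean_rt_ms"] - summary["congruent"]["mean_rt_ms"]
```

In the real study the comparison would be run across many participants with proper statistics; the point here is only the shape of the analysis: trials grouped by whether the hidden ground category matches the word’s category.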

"Every day our visual systems are bombarded with more information than we can consciously be aware of," Cacciamani said. "We’re showing that your brain might still be accessing information without your conscious awareness, and that could influence your behavior."

Filed under visual perception decision making visual awareness object perception psychology neuroscience science

Declining intelligence in old age linked to visual processing

Researchers have uncovered one of the basic processes that may help to explain why some people’s thinking skills decline in old age. Age-related declines in intelligence are strongly related to declines on a very simple task of visual perception speed, the researchers report in the Cell Press journal Current Biology on August 4.

The evidence comes from experiments in which researchers showed 600 healthy older people very brief flashes of one of two shapes on a screen and measured the time it took each of them to reliably tell one from the other. Participants repeated the test at ages 70, 73, and 76. The longitudinal study is among the first to test the hypothesis that the changes they observed in the measure known as “inspection time” might be related to changes in intelligence in old age.
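Inspection time is typically estimated adaptively: the flash gets shorter after correct responses and longer after errors, converging on the briefest exposure a person can still discriminate reliably. The sketch below simulates that logic with a toy observer; the observer model, step size, and trial counts are illustrative assumptions, not the study’s actual procedure.

```python
import random

def simulated_observer(duration_ms, threshold_ms=60.0):
    """Toy observer: the longer the flash, the more likely a correct answer."""
    p_correct = 0.5 + 0.5 * min(duration_ms / (2 * threshold_ms), 1.0)
    return random.random() < p_correct

def inspection_time_staircase(observer, start_ms=200.0, step_ms=10.0, n_trials=400):
    """2-down/1-up staircase: shorten the flash after two consecutive correct
    responses, lengthen it after any error (converges near 71% correct)."""
    duration, streak, history = start_ms, 0, []
    for _ in range(n_trials):
        if observer(duration):
            streak += 1
            if streak == 2:
                duration = max(duration - step_ms, step_ms)
                streak = 0
        else:
            duration += step_ms
            streak = 0
        history.append(duration)
    tail = history[len(history) // 2:]   # discard the initial approach phase
    return sum(tail) / len(tail)         # threshold estimate in ms

random.seed(1)
estimate_ms = inspection_time_staircase(simulated_observer)
```

A longitudinal design like the one described would repeat such a threshold estimate at each age and correlate its change with change in IQ scores.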

"The results suggest that the brain’s ability to make correct decisions based on brief visual impressions limits the efficiency of more complex mental functions," says Stuart Ritchie of the University of Edinburgh. "As this basic ability declines with age, so too does intelligence. The typical person who has better-preserved complex thinking skills in older age tends to be someone who can accumulate information quickly from a fleeting glance."

Previous studies had shown that smarter people, as measured by standard IQ tests, tend to be better at discerning the difference between two briefly presented shapes, the researchers explain. But before now no one had looked to see how those two measures might change over time as people grow older. The findings were rather unexpected.

"What surprised us was the strength of the relation between the declines," Ritchie says. "Because inspection time and the intelligence tests are so very different from one another, we wouldn’t have expected their declines to be so strongly connected."

The results provide evidence that the slowing of simple, visual decision-making processes might be part of what underlies declines in the complex decision making that we recognize as general intelligence. The results might also find practical use given the simplicity of the inspection time measure, Ritchie says, noting that the test can be taken very simply on a computer and has been used with children, adults, and even patients with dementia or other medical disorders.

"Since the declines are so strongly related, it might be easier under some circumstances to use inspection time to chart a participant’s cognitive decline than it would be to sit them down and give them a full, complicated battery of IQ tests," he says.

(Source: eurekalert.org)

Filed under visual perception intelligence thinking aging cognition psychology neuroscience science

Real or Fake? Research Shows Brain Uses Multiple Clues for Facial Recognition

Faces fascinate. Babies love them. We look for familiar or friendly ones in a crowd. And video game developers and movie animators strive to create faces that look real rather than fake. Determining how our brains decide what makes a face “human” and not artificial is a question Dr. Benjamin Balas of North Dakota State University, Fargo, and of the Center for Visual and Cognitive Neuroscience, studies in his lab. New research by Balas and NDSU graduate Christopher Tonsager, published online in the London-based journal Perception, shows that it takes more than eyes to make a face look human.

Researchers study the brain to learn how its specialized circuits process information in seconds to distinguish whether faces are real or fake. Balas and Tonsager note that people interact with artificial faces and characters in video games, watch them in movies, and see artificial faces used more widely as social agents in other settings. “Whether or not a face looks real determines a lot of things,” said Balas, assistant professor of psychology. “Can it have emotions? Can it have plans and ideas? We wanted to know what information you use to decide if a face is real or artificial, since that first step determines a number of judgments that follow.”

Results of the study show that people combine information across many parts of the face to make decisions about how “alive” it is, and that the appearances of these regions interact with each other. Previous research suggests that eyes are especially important for facial recognition. The NDSU study found, however, that when you’re deciding if a face is real or artificial, the eyes and the skin both matter to about the same degree.

Balas, together with Tonsager, then an undergraduate researcher in psychology, recruited 45 study participants who were evaluated while viewing altered facial images. Tonsager cropped images of real faces so only the face and neck showed, without any hair. A program known as FaceGen Modeller was used to transform the images into 3D computer-generated models of faces. Photos were then computer-manipulated into negative images. In two experiments, transformations to real and artificial faces were used to determine whether contrast negation affected the ability to tell whether a face was real or artificial, and whether the eyes make a disproportionate contribution to animacy discrimination relative to the rest of the face.
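The contrast-negation manipulation described here is straightforward to sketch. A minimal example, assuming an 8-bit grayscale image held in a NumPy array (the function name and toy image are illustrative, not from the study):

```python
import numpy as np

def contrast_negate(image):
    """Invert an 8-bit image: light pixels become dark and vice versa.
    Negation preserves shape and spatial layout but disrupts surface cues
    such as skin tone, which is why it is useful for probing animacy."""
    image = np.asarray(image, dtype=np.uint8)
    return 255 - image

# Toy 2x2 "image": a bright patch, a dark patch, and two midtones.
patch = np.array([[250, 5], [128, 0]], dtype=np.uint8)
negated = contrast_negate(patch)  # -> [[5, 250], [127, 255]]
```

Because the geometry is untouched, any drop in participants’ ability to judge a negated face as real or artificial points to their reliance on surface properties rather than shape alone.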

“We assumed that the eyes were the key in distinguishing real vs. computer generated, but to our surprise, the results were not significant enough for us to conclude this,” said Tonsager. “However, we did find that when the skin tone is negated, it was more difficult for our participants to determine if it was a real or artificial face. The research leads us to conclude that the entire ‘eye region’ might play a substantial role in the distinction between real or artificial.”

“Beyond telling us more about the distinction your brain makes between a face and a non-face, our results are also relevant to anybody who wants to develop life-like computer graphics,” explained Balas. “Developing artificial faces that look real is a growing industry, and we know that artificial faces that aren’t quite right can look downright creepy. Our work, both in the current paper and ongoing studies in the lab, has the potential to inform how designers create new and better artificial faces for a range of applications.”

Balas and Tonsager also presented their research findings at the Vision Sciences Society 13th Annual Meeting, May 16-21, in St. Petersburg, Florida. http://www.visionsciences.org/meeting.html

Filed under facial recognition artificial face face perception visual perception psychology neuroscience science

Neuroscientists discover adaptation mechanisms of the brain when perceiving letters of the alphabet

The headlights – two eyes, the radiator cowling – a smiling mouth: this is how our brain sometimes creates a face out of a car front. The same happens with other objects: in house facades, trees or stones, a “human face” can often be detected as well. Prof. Dr. Gyula Kovács from Friedrich Schiller University Jena (Germany) knows the reason why. “Faces are of tremendous importance for human beings,” the neuroscientist explains. That’s why, in the course of evolution, our visual perception has specialized in the recognition of faces in particular. “This sometimes even goes as far as us recognizing faces when there are none at all.”

Until now, researchers assumed that this phenomenon was an exception restricted to faces. But, as Prof. Kovács and his colleague Mareike Grotheer were able to show in a new study, these distinct adaptation mechanisms are not limited to the perception of faces. In The Journal of Neuroscience, the Jena researchers demonstrate that the effect can also occur in the perception of letters.

Filed under visual perception learning brain activity repetition suppression adaptation neuroscience science

Researchers find ‘Seeing Jesus in toast’ phenomenon perfectly normal

People who claim to see “Jesus in toast” may no longer be mocked in the future thanks to a new study by researchers at the University of Toronto and partner institutions in China.

Researchers have found that the phenomenon of “face pareidolia” – where onlookers report seeing images of Jesus, the Virgin Mary, or Elvis in objects such as toast, shrouds, and clouds – is normal and based on physical causes.

“Most people think you have to be mentally abnormal to see these types of images, so individuals reporting this phenomenon are often ridiculed”, says lead researcher Prof. Kang Lee of the University of Toronto’s Eric Jackman Institute of Child Study. “But our findings suggest that it’s common for people to see non-existent features because human brains are uniquely wired to recognize faces, so that even when there’s only a slight suggestion of facial features the brain automatically interprets it as a face,” said Lee.

Although this phenomenon has been known for centuries, little is understood about the underlying neural mechanisms that cause it. In the first study of its kind, researchers examined brain scans and behavioural responses of individuals seeing faces and letters in different patterns. They discovered face pareidolia isn’t due to a brain anomaly or imagination but is caused by the frontal cortex and posterior visual cortex working together: the frontal cortex helps generate expectations and sends signals to the posterior visual cortex to enhance its interpretation of stimuli from the outside world.

Researchers also found that people can be led to see different images — such as faces or words or letters — depending on what they expect to see, which in turn activates specific parts of the brain that process such images. Seeing “Jesus in toast” reflects our brain’s normal functioning and the active role that the frontal cortex plays in visual perception. Instead of the phrase “seeing is believing” the results suggest that “believing is seeing.”

(Source: media.utoronto.ca)

Filed under face pareidolia face processing fusiform face area visual perception prefrontal cortex psychology neuroscience science

Noisy brain signals: How the schizophrenic brain misinterprets the world

People with schizophrenia often misinterpret what they see and experience in the world. New research provides insight into the brain mechanisms that might be responsible for this misinterpretation. The study from the Montreal Neurological Institute and Hospital – The Neuro - at McGill University and McGill University Health Centre, reveals that certain errors in visual perception in people with schizophrenia are consistent with interference or ‘noise’ in a brain signal known as a corollary discharge. Corollary discharges are found throughout the animal kingdom, from bugs to fish to humans, and they are thought to be crucial for monitoring one’s own actions. The study, published in the April 2 issue of the Journal of Neuroscience, identifies a corollary discharge dysfunction in schizophrenia, which could aid with diagnosis and treatment of this difficult disorder. It was carried out in collaboration with researchers Veronica Whitford, Gillian O’Driscoll, and Debra Titone in the Department of Psychology, McGill University.

“A corollary discharge is a copy of a nervous system message that is sent to other parts of the brain, in order to make us aware that we are doing something,” said Dr. Christopher Pack, neuroscientist at The Neuro and lead investigator on the study. “For example, if we want to move our arm, the motor area of the brain sends a signal to the muscles to produce a movement. A copy of this command, which is the corollary discharge, is sent to other regions of the brain, to inform them of the impending movement. If you were moving your arm, and you didn’t have the corollary discharge signal, you might assume that someone else was moving your arm. Similarly, if you generated a thought, and you had an impaired corollary discharge, then you might assume that someone else placed the thought in your mind. Corollary discharges ensure that different areas of the brain are communicating with each other, so that we are aware that we are moving our own arm, talking, or thinking our own thoughts.”

Schizophrenia is a disorder that interferes with the ability to think clearly and to manage emotions. People with schizophrenia often attribute their own thoughts and actions to external sources, as in the case of auditory hallucinations. Other common symptoms include delusions and disorganized thinking and speech. 

Recent research has suggested that an impaired corollary discharge can account for some of these symptoms. However, the nature of the impairment was unknown. In their study, Dr. Pack and his colleagues (including Dr. Alby Richard, neurology resident at The Neuro) used a test called a perisaccadic localization task to investigate corollary discharge activity. In this test, subjects are asked to make quick eye movements to follow a dot on a computer screen. At the same time they are also asked to localize visual stimuli that appear briefly on the screen from time to time. In order to perform this task accurately, subjects need to know where on the screen they are looking – in other words, they use corollary discharge signals that arise from the brain structures that control the eye muscles.

The results showed that people with schizophrenia were less accurate in figuring out where they were looking. Consequently they made more mistakes in estimating the position of the stimuli that were flashed on the screen. “What is interesting and potentially clinically important is that the pattern of mistakes made by the patients correlated with the extent of their symptoms,” said Dr. Pack. “This is particularly interesting because the circuits that control eye movements include the best-understood structures in the brain. So we are optimistic that we can work backward from the behavioral data to the biological basis of the corollary discharge effects. We have already started to do this with computational modeling. Mathematically we can convert the corollary discharge of a healthy control into the corollary discharge of a patient with schizophrenia by adding noise and randomness. It is not that people with schizophrenia have no corollary discharge, or a corollary discharge with delayed or weaker amplitude. Rather the patients appear primarily to have a noisy corollary discharge signal. This visual test is very easy to do and quite sensitive to individual differences.”
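The modeling idea in the quote, recreating a patient-like signal by adding noise to a healthy corollary discharge, can be illustrated with a toy one-dimensional simulation. Everything below (the geometry, noise levels, and trial counts) is an assumption made for illustration, not the group’s actual model.

```python
import random

def mean_localization_error(cd_noise_sd, n_trials=2000, seed=0):
    """Perceived target position = retinal position + corollary-discharge
    estimate of eye position, so noise in that estimate propagates directly
    into localization errors (positions in degrees of visual angle)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        eye = rng.uniform(-10.0, 10.0)       # true gaze direction
        target = rng.uniform(-10.0, 10.0)    # true flashed-target position
        retinal = target - eye               # where the flash lands on the retina
        cd_estimate = eye + rng.gauss(0.0, cd_noise_sd)  # noisy copy of the eye command
        perceived = retinal + cd_estimate
        total += abs(perceived - target)
    return total / n_trials

control_error = mean_localization_error(cd_noise_sd=0.5)  # low-noise signal
patient_error = mean_localization_error(cd_noise_sd=2.0)  # same signal, more noise
```

With these settings the noisier signal roughly quadruples the average localization error, matching the qualitative claim that the signal is noisy rather than absent, delayed, or weakened.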

The study shows that patients with schizophrenia make larger errors in localizing visual stimuli compared to controls. These results could be explained by a noisy corollary discharge signal, whose noise level also predicts patient symptom severity, suggesting a possible basis for some of the most common symptoms of schizophrenia. This work was supported by The Natural Sciences and Engineering Research Council of Canada, The Brain & Behavior Research Foundation (NARSAD) and the EJLB Foundation.

Filed under schizophrenia corollary discharge visual perception saccades psychology neuroscience science

97 notes

Detecting Unidentified Changes
Does becoming aware of a change to a purely visual stimulus necessarily cause the observer to be able to identify or localise the change, or can change detection occur in the absence of identification or localisation? Several theories of visual awareness stress that we are aware of more than just the few objects to which we attend. In particular, it is clear that to some extent we are also aware of the global properties of the scene, such as the mean luminance or the distribution of spatial frequencies. It follows that we may be able to detect a change to a visual scene by detecting a change to one or more of these global properties. However, detecting a change to a global property may not supply us with enough information to accurately identify or localise which object in the scene has been changed. Thus, it may be possible to reliably detect the occurrence of changes without being able to identify or localise what has changed. Previous attempts to show that this can occur with natural images have produced mixed results. Here we use a novel analysis technique to provide additional evidence that changes can be detected in natural images without also being identified or localised. It is likely that this occurs through observers monitoring the global properties of the scene.
Full Article
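The dissociation the abstract describes – knowing that something changed without knowing what or where – can be made concrete with a toy example. The sketch below is an illustrative assumption, not the paper's analysis technique: it compares a single global statistic (mean luminance) between two images, which can reveal that a change occurred while carrying no information about its location.

```python
def mean_luminance(image):
    """Global scene statistic: average pixel intensity over a 2D grid."""
    return sum(sum(row) for row in image) / (len(image) * len(image[0]))

def change_detected(before, after, threshold=1.0):
    """Detect a change between two images from a global property alone.

    Because only a summary statistic is compared, a detector like this
    can report THAT the scene changed without being able to say WHICH
    pixel changed.  The statistic and threshold are illustrative
    choices, not the study's method.
    """
    return abs(mean_luminance(before) - mean_luminance(after)) > threshold

before = [[50] * 4 for _ in range(4)]     # uniform 4x4 scene
after = [row[:] for row in before]
after[2][1] = 90                          # one region brightens
print(change_detected(before, after))     # the global statistic shifts...
# ...but mean luminance alone cannot say which of the 16 pixels changed
```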

Filed under attention blindness visual awareness eye movements visual perception psychology neuroscience science

657 notes

Scientists pinpoint how we miss subtle visual changes, and why it keeps us sane
Ever notice how Harry Potter’s T-shirt changes from a crewneck to a henley shirt in the “Order of the Phoenix,” or how in “Pretty Woman,” Julia Roberts’ croissant inexplicably morphs into a pancake? Don’t worry if you missed those continuity bloopers. Vision scientists at UC Berkeley and MIT have discovered an upside to the brain mechanism that can blind us to subtle visual changes in the movies and in the real world.
They’ve discovered a “continuity field” in which we visually merge together similar objects seen within a 15-second time frame, hence the previously mentioned jump from crewneck to henley goes largely unnoticed. Unlike in the movies, objects in the real world don’t spontaneously change from, say, a croissant to a pancake in a matter of seconds, so the continuity field is stabilizing what we see over time.
“The continuity field smoothes what would otherwise be a jittery perception of object features over time,” said David Whitney, associate professor of psychology at UC Berkeley and senior author of the study published today (March 30) in the journal Nature Neuroscience.
“Essentially, it pulls together physically but not radically different objects to appear more similar to each other,” Whitney added. “This is surprising because it means the visual system sacrifices accuracy for the sake of the continuous, stable perception of objects.”  
Conversely, without a continuity field, we may be hypersensitive to every visual fluctuation triggered by shadows, movement and myriad other factors. For example, faces and objects would appear to morph from moment to moment in an effect similar to being on hallucinogenic drugs, researchers said.
“The brain has learned that the real world usually doesn’t change suddenly, and it applies that knowledge to make our visual experience more consistent from one moment to the next,” said Jason Fischer, a postdoctoral fellow at MIT and lead author of the study, which he conducted while he was a Ph.D. student in Whitney’s Lab at UC Berkeley.
To establish the existence of a continuity field, the researchers had study participants view a series of bars, or gratings, on a computer screen. The gratings appeared at random angles once every five seconds.
Participants were instructed to adjust the angle of a white bar so that it matched the angle of each grating they just viewed. They repeated this task with hundreds of gratings positioned at different angles. The researchers found that instead of precisely matching the orientation of the grating, participants averaged out the angle of the three most recently viewed gratings.
“Even though the sequence of images was random, participants’ perception of any given image was biased strongly toward the past several images that came before it,” said Fischer, who calls this phenomenon “perceptual serial dependence.”
In another experiment, researchers set the gratings far apart on the computer screen, and found that the participants did not merge together the angles when the objects were far apart. This suggests that the objects must be close together for the continuity effect to work.
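The averaging behaviour found in the grating task can be sketched as a simple computation. The toy model below is an illustrative assumption, not the study's fitting procedure: each reported orientation is the average of the current grating and the two before it, and the naive averaging assumes angles stay in a range where wrap-around (e.g. 179° vs 1°) can be ignored.

```python
import random

def perceived_orientations(true_angles, window=3):
    """Toy model of 'perceptual serial dependence'.

    The reported orientation of each grating is pulled toward the
    average of the most recently viewed gratings -- here, the current
    one plus the two before it, matching the three-grating averaging
    described in the study.
    """
    reports = []
    for i in range(len(true_angles)):
        recent = true_angles[max(0, i - window + 1): i + 1]
        reports.append(sum(recent) / len(recent))
    return reports

rng = random.Random(42)
angles = [rng.uniform(0, 90) for _ in range(8)]  # gratings at random angles
for true, seen in zip(angles, perceived_orientations(angles)):
    print(f"true {true:5.1f}  reported {seen:5.1f}")
```

Even with a purely random sequence, each simulated report is biased toward the preceding gratings, which is the signature of the continuity field.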
For a comedic example of how we might see things if there were no continuity field, watch the commercial for MIO squirt juice.

Filed under visual perception continuity field visual system perceptual serial dependence neuroscience science

161 notes

Human brains ‘hard-wired’ to link what we see with what we do

Your brain’s ability to instantly link what you see with what you do is down to a dedicated information ‘highway’, suggests new UCL-led research.

For the first time, researchers from UCL and Cambridge University have found evidence of a specialised mechanism for spatial self-awareness that combines visual cues with body motion.

Standard visual processing is prone to distractions, as it requires us to pay attention to objects of interest and filter out others. The new study has shown that our brains have separate ‘hard-wired’ systems to visually track our own bodies, even if we are not paying attention to them. In fact, the newly-discovered network triggers reactions even before the conscious brain has time to process them.

The researchers discovered the new mechanism by testing 52 healthy adults in a series of three experiments. In all experiments, participants used robotic arms to control cursors on two-dimensional displays, where cursor motion was directly linked to hand movement. Their eyes were kept fixed on a mark at the centre of the screen, confirmed with eye tracking.

In the first experiment, participants controlled two separate cursors with their left and right hands, both equally close to the centre. The goal was to guide each cursor to a corresponding target at the top of the screen. Occasionally the cursor or target on one side would jump left or right, requiring participants to take corrective action. Each jump was ‘cued’ with a flash on one side, but this was random so did not always correspond to the side about to change.

Unsurprisingly, people reacted faster to target jumps when their attention was drawn to the ‘correct’ side by the cue. However, reactions to cursor jumps were fast regardless of cuing, suggesting that a separate mechanism independent of attention is responsible for tracking our own movements.

“The first experiment showed us that we react very quickly to changes relating to objects directly under our own control, even when we are not paying attention to them,” explains Dr Alexandra Reichenbach of the UCL Institute of Cognitive Neuroscience, lead author of the study. “This provides strong evidence for a dedicated neural pathway linking motor control to visual information, independently of the standard visual systems that are dependent on attention.”

The second experiment was similar to the first, but also introduced changes in brightness to demonstrate the attention effect on the visual perception system. In the third experiment, participants had to guide one cursor to its target in the presence of up to four dummy targets and cursors, ‘distractors’, alongside the real ones. In this experiment, responses to cursor jumps were less affected by distractors than responses to target jumps. Reactions to cursor jumps remained vigorous with one or two distractors, but were significantly decreased when there were four.
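The logic of the first experiment – cuing speeds reactions to target jumps but not to cursor jumps – can be sketched as a toy reaction-time model. All millisecond values below are illustrative assumptions, not the study's measurements.

```python
import random

def reaction_time(event, cue_valid, rng):
    """Toy reaction-time model of the cued jump task.

    Responses to TARGET jumps depend on attention, so an invalid cue
    slows them; responses to CURSOR jumps are handled by the proposed
    dedicated visuomotor pathway and ignore the cue entirely.
    """
    base = {"target": 350.0, "cursor": 250.0}[event]  # assumed baselines (ms)
    if event == "target" and not cue_valid:
        base += 60.0  # assumed attentional cost of a miscued target jump
    return base + rng.gauss(0.0, 20.0)  # trial-to-trial noise

def mean_rt(event, cue_valid, trials=5000, seed=0):
    """Average simulated reaction time for one condition."""
    rng = random.Random(seed)
    return sum(reaction_time(event, cue_valid, rng) for _ in range(trials)) / trials

for event in ("target", "cursor"):
    print(event,
          round(mean_rt(event, cue_valid=True)),
          round(mean_rt(event, cue_valid=False)))
```

In this sketch, only target-jump reactions pay a cuing cost, reproducing the qualitative pattern the researchers report.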

“These results provide further evidence of a dedicated ‘visuomotor binding’ mechanism that is less prone to distractions than standard visual processing,” says Dr Reichenbach. “It looks like the specialised system has a higher tolerance for distractions, but in the end it is still affected. Exactly why we evolved a separate mechanism remains to be seen, but the need to react rapidly to different visual cues about ourselves and the environment may have been enough to necessitate a specialised pathway.”

The newly-discovered system could explain why some schizophrenia patients feel like their actions are controlled by someone else.

“Schizophrenia often manifests as delusion of control, and a dysfunction in the visuomotor mechanism identified in this study might be a cause for this symptom,” explains Dr Reichenbach. “If someone does not automatically link corresponding visual cues with body motion, then they might have the feeling that they are not controlling their movements. We would need further research to confirm this, and it would be fascinating to see how schizophrenia patients perform in these experiments.”

These findings could also explain why people with even the most advanced prosthetic limbs can have trouble coordinating movements.

“People often describe their prosthetic limbs as feeling ‘other’, not a true extension of their body,” says Dr Reichenbach. “Even on the best prosthetic hands, if the observed movement of the fingers is not exactly what you would expect, then it will not feel like you are in direct control. These small details might have a big effect on how people perceive prostheses.”

(Source: ucl.ac.uk)

Filed under visuomotor system visual perception visuospatial awareness prosthetic limbs neuroscience science

90 notes

Monkeys can point to objects they do not report seeing

Are monkeys, like humans, able to ascertain where objects are located without much more than a sideways glance? Quite likely, says Lau Andersen of Aarhus University in Denmark, lead author of a study conducted at the Yerkes National Primate Research Center of Emory University and published in Springer’s journal Animal Cognition. The study finds that monkeys are able to localize stimuli they do not perceive.

Humans are able to locate, and even side-step, objects in their peripheral vision, sometimes before they even perceive that the object is present. Andersen and colleagues therefore wanted to find out whether visually guided action and visual perception also occur independently in other primates.

The researchers trained five adult male rhesus monkeys (Macaca mulatta) to perform a short-latency, highly stereotyped localization task. Using a touchscreen computer, the animals learned to touch one of four locations where an object was briefly presented. The monkeys also learned to perform a detection task using identical stimuli, in which they had to report the presence or absence of an object by pressing one of two buttons. These techniques are similar to those used to test normal humans, and therefore make an especially direct comparison between humans and monkeys possible. A method called “visual masking” was used to systematically reduce how easily a visual target was processed.

Andersen and his colleagues found that the monkeys were still able to locate targets that they could not detect. The animals performed the tasks very accurately when the stimuli were unmasked, and their performance dropped when visual masking was employed. But monkeys could still locate targets at masking levels for which they reported that no target had been presented. While these results cannot establish the existence of phenomenal vision in monkeys, the discrepancy between visually guided action and detection parallels the dissociation of conscious and unconscious vision seen in humans.
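The dissociation between localization and detection can be sketched with a simple signal-detection model. The simulation below is an illustrative assumption, not the study's analysis: both tasks read the same masked, noisy visual evidence, but the yes/no detection task applies a conservative report criterion, so under strong masking the simulated observer still points above chance while mostly reporting that no target appeared.

```python
import random

def run_trials(mask_strength, n=20_000, seed=0):
    """Simulate a 4-location touch task and a yes/no detection task
    on the same noisy evidence.  All numbers are illustrative, not
    fit to the paper's data.
    """
    rng = random.Random(seed)
    signal = max(0.0, 1.0 - mask_strength)  # masking weakens the target
    loc_correct = 0
    detect_yes = 0
    for _ in range(n):
        target_loc = rng.randrange(4)
        # Independent noise at each location, plus signal at the target.
        evidence = [rng.gauss(0, 1) for _ in range(4)]
        evidence[target_loc] += signal * 3.0
        # Localization: touch the location with the most evidence (no criterion).
        if max(range(4), key=lambda k: evidence[k]) == target_loc:
            loc_correct += 1
        # Detection: report 'present' only if peak evidence beats a
        # conservative criterion.
        if max(evidence) > 2.0:
            detect_yes += 1
    return loc_correct / n, detect_yes / n

for mask in (0.0, 0.8):
    loc, det = run_trials(mask)
    print(f"mask={mask:.1f}: localization={loc:.2f}, 'present' reports={det:.2f}")
```

With heavy masking, localization accuracy stays above the 25% chance level even though the simulated observer rarely reports seeing a target, paralleling the monkeys' behaviour.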

“Knowing whether similar independent brain systems are present in humans and nonverbal species is critical to our understanding of comparative psychology and the evolution of brains,” explains Andersen.

(Source: springer.com)

Filed under visual perception primates visual masking blindsight animal cognition neuroscience science
