Posts tagged eye movements

Using a simple study of eye movements, Johns Hopkins scientists report evidence that people who are less patient tend to move their eyes with greater speed. The findings, the researchers say, suggest that the weight people give to the passage of time may be a trait consistently used throughout their brains, affecting the speed with which they make movements, as well as the way they make certain decisions.

Caption: Despite claims to the contrary, the eyes of the Mona Lisa do not make saccades. Credit: Leonardo da Vinci
In a summary of the research to be published Jan. 21 in The Journal of Neuroscience, the investigators note that a better understanding of how the human brain evaluates time when making decisions might also shed light on why malfunctions in certain areas of the brain make decision-making harder for those with neurological disorders like schizophrenia, or for those who have experienced brain injuries.
Principal investigator Reza Shadmehr, Ph.D., professor of biomedical engineering and neuroscience at The Johns Hopkins University, and his team set out to understand why some people are willing to wait and others aren’t. “When I go to the pharmacy and see a long line, how do I decide how long I’m willing to stand there?” he asks. “Are those who walk away and never enter the line also the ones who tend to talk fast and walk fast, perhaps because of the way they value time in relation to rewards?”
To address the question, the Shadmehr team used very simple eye movements, known as saccades, to stand in for other bodily movements. Saccades are the motions that our eyes make as we focus on one thing and then another. “They are probably the fastest movements of the body,” says Shadmehr. “They occur in just milliseconds.” Human saccades are fastest when we are teenagers and slow down as we age, he adds.
In earlier work, using a mathematical theory, Shadmehr and colleagues had shown that, in principle, the speed at which people move could be a reflection of the way the brain calculates the passage of time to reduce the value of a reward. In the current study, the team wanted to test the idea that differences in how subjects moved were a reflection of differences in how they evaluated time and reward.
For the study, the team first asked healthy volunteers to look at a screen upon which dots would appear one at a time –– first on one side of the screen, then on the other, then back again. A camera recorded their saccades as they looked from one dot to the other. The researchers found a lot of variability in saccade speed among individuals but very little variation within individuals, even when tested at different times and on different days. Shadmehr and his team concluded that saccade speed appears to be an attribute that varies from person to person. “Some people simply make fast saccades,” he says.
To determine whether saccade speed correlated with decision-making and impulsivity, the volunteers were told to watch the screen again. This time, they were given visual commands to look to the right or to the left. When they responded incorrectly, a buzzer sounded.
After becoming accustomed to that part of the test, they were forewarned that during the following round of testing, if they followed the command right away, they would be wrong 25 percent of the time. In those instances, after an undetermined amount of time, the first command would be replaced by a second command to look in the opposite direction.
To pinpoint exactly how long each volunteer was willing to wait to improve his or her accuracy on that phase of the test, the researchers modified the length of time between the two commands based on a volunteer’s previous decision. For example, if a volunteer chose to wait until the second command, the researchers increased the time they had to wait each consecutive time until they determined the maximum time the volunteer was willing to wait — only 1.5 seconds for the most patient volunteer. If a volunteer chose to act immediately, the researchers decreased the wait time to find the minimum time the volunteer was willing to wait to improve his or her accuracy.
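The adjustment rule described above amounts to a simple staircase procedure: raise the delay after a patient choice, lower it after an impulsive one. The sketch below illustrates that logic; the step size, starting delay, and use of the reported 1.5-second maximum as a ceiling are assumptions for the example, not parameters reported in the study.

```python
# Hypothetical sketch of the adaptive procedure described above. After each
# trial the delay between the two commands is raised if the volunteer chose
# to wait and lowered if the volunteer acted immediately, homing in on the
# longest delay that volunteer will tolerate.

def update_wait_time(wait_s, chose_to_wait, step_s=0.1, min_s=0.0, max_s=1.5):
    """Return the inter-command delay (in seconds) to use on the next trial."""
    if chose_to_wait:
        wait_s += step_s  # volunteer waited: probe a longer delay
    else:
        wait_s -= step_s  # volunteer acted at once: probe a shorter delay
    return max(min_s, min(max_s, wait_s))

# A consistently patient volunteer drives the delay up to the ceiling.
delay = 0.5
for _ in range(20):
    delay = update_wait_time(delay, chose_to_wait=True)
print(round(delay, 2))  # 1.5
```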
When the speed of the volunteers’ saccades was compared to their impulsivity during the patience test, there was a strong correlation. “It seems that people who make quick movements, at least eye movements, tend to be less willing to wait,” says Shadmehr. “Our hypothesis is that there may be a fundamental link between the way the nervous system evaluates time and reward in controlling movements and in making decisions. After all, the decision to move is motivated by a desire to improve one’s situation, which is a strong motivating factor in more complex decision-making, too.”
(Source: eurekalert.org)

Dogs recognize familiar faces from images
So far, the specialized skill of recognizing facial features holistically has been assumed to be a quality that only humans, and possibly primates, possess. Although it is well known that faces and eye contact play an important role in communication between dogs and humans, this was the first study in which dogs’ facial recognition was investigated with eye movement tracking.
Main focus on spontaneous behavior of dogs
Typically, animals’ ability to discriminate between individuals has been studied by training the animals to discriminate photographs of familiar and strange individuals. The researchers, led by Professor Outi Vainio at the University of Helsinki, instead tested dogs’ spontaneous behavior towards images: if dogs are not trained to recognize faces, are they able to see faces in the images, and do they naturally look at familiar and strange faces differently?
“Dogs were trained to lie still during the image presentation and to perform the task independently. Dogs seemed to find the task rewarding, because they were very eager to participate,” says Professor Vainio. The dogs’ eye movements were measured while they watched facial images of familiar humans and dogs (e.g. the dog’s owner and another dog from the same family) displayed on a computer screen. As a comparison, the dogs were shown facial images of dogs and humans that they had never met.
Dogs preferred faces of familiar conspecifics
The results indicate that dogs were able to perceive faces in the images. Dogs looked at images of dogs longer than images of humans, regardless of the familiarity of the faces presented in the images. This corresponds to a previous study by Professor Vainio’s research group, where it was found that dogs prefer viewing conspecific faces over human faces.
Dogs fixed their gaze more often on familiar faces and eyes than on strange ones, i.e. dogs scanned familiar faces more thoroughly.
In addition, some of the images were presented inverted, i.e. upside-down. Inverted faces were included because their physical properties correspond to those of normal upright facial images, e.g. the same colors, contrasts and shapes. The human brain is known to process upside-down images differently from normal facial images, but thus far it had not been studied how dogs gaze at inverted or familiar faces. Dogs viewed upright faces for as long as inverted faces, but they gazed more at the eye area of upright faces, just as humans do.
This study shows that the gazing behavior of dogs is guided not only by the physical properties of images, but also by the information the images present and its semantic meaning. Dogs are able to see faces in images, and they differentiate familiar faces from strange ones. These results indicate that dogs may have facial recognition skills similar to those of humans.
Visual representations improved by reducing noise
Neuroscientist Suresh Krishna from the German Primate Center (DPZ) in cooperation with Annegret Falkner and Michael Goldberg at Columbia University, New York has revealed how the activity of neurons in an important area of the rhesus macaque’s brain becomes less variable when they represent important visual information during an eye movement task. This reduction in variability can improve the perceptual strength of attended or relevant aspects in a visual scene, and is enhanced when the animals are more motivated to perform the task.
Humans may see the same object again and again, but their brain response will be different each time, a phenomenon called neuronal noise. The same is true for rhesus macaques, which have a visual system very similar to that of humans. This variability often limits our ability to see a dim object or hear a faint sound. On the other hand, variable responses also benefit us: they are considered an essential part of the exploratory stage of learning and a source of unpredictability during competitive interactions.
Despite this importance, brain variability is poorly understood. Neuroscientists Suresh Krishna of the DPZ and his colleagues Annegret Falkner and Michael Goldberg at Columbia University in New York examined the responses of neurons in the monkey brain’s lateral intraparietal area (LIP) while the monkey planned eye movements to spots of light at different locations on a computer screen. LIP is an area in the brain that is crucial for visual attention and for actively exploring visual scenes. To measure the activity of single LIP neurons, the scientists inserted electrodes thinner than a human hair into the monkey’s brain and recorded the neurons’ electrical activity. Because the brain is not pain-sensitive, this insertion of electrodes is painless for the animal.
Suresh Krishna and his colleagues showed how the activity of LIP neurons becomes less variable when the macaque performs a task and plans an eye movement. The reduction in variability was particularly strong where the monkey was planning to look and when the monkey was highly motivated to perform the task. This creation of a valley of reduced variability centered on relevant and interesting aspects of a visual scene may help the brain to filter the most important aspects from the sensory information delivered by the eye. The scientists developed a simple mathematical model that captures the patterns in the data and may also be a useful framework for the analysis of other brain areas.
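Trial-to-trial variability of the kind described here is commonly summarized by the Fano factor, the spike-count variance divided by the spike-count mean. The sketch below computes that standard statistic on made-up spike counts; it illustrates the concept of reduced variability at a matched firing rate, and is not the paper’s actual model.

```python
from statistics import mean, pvariance

def fano_factor(spike_counts):
    """Spike-count variance divided by spike-count mean: a standard summary
    of neuronal variability (about 1.0 for Poisson-like firing; lower values
    mean the response is more reliable)."""
    return pvariance(spike_counts) / mean(spike_counts)

# Illustrative counts: both neurons fire at the same mean rate, but the
# second shows less trial-to-trial variability, as described for LIP
# neurons when the monkey plans an eye movement to their location.
baseline = [4, 9, 2, 7, 3, 11, 6, 6]   # variable firing
attended = [6, 7, 5, 6, 6, 7, 5, 6]    # less variable firing
print(fano_factor(baseline) > fano_factor(attended))  # True
```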
"Our study represents one of the most detailed descriptions of neuronal variability in the brain. It offers important insights into fascinating brain functions as diverse as the focusing of visual attention and the control of eye movements during active viewing of visual scenes. The brain’s valley of variability that we discovered may help humans and animals to interact with their complex environment," Suresh Krishna comments on the findings.
To predict, perchance to update: Neural responses to the unexpected
Among the brain’s many functions is the use of predictive models to process expected stimuli or actions. In such a model, we experience surprise when presented with an unexpected stimulus – that is, one which the model evaluates as having a low probability of occurrence. Interestingly, there can be two distinct – but often experimentally correlated – responses to a surprising event: reallocating additional neural resources to reprogram actions, and updating the predictive model to account for the new environmental stimulus. Recently, scientists at Oxford University used brain imaging to identify separate brain systems involved in reprogramming and updating, and created a mathematical and neuroanatomical model of how brains adjust to environmental change. Moreover, the researchers conclude that their model may also inform models of neurological disorders, such as extinction, Balint syndrome and neglect, in which this adaptive response to surprise fails.
Research Fellow Jill X. O’Reilly discussed the research she and her colleagues conducted with Medical Xpress. “Sometimes we think of the brain as an input-output device which takes sensory information, processes it, and produces actions appropriately – but in fact, brains don’t passively ‘sit around’ waiting for sensory input,” O’Reilly explains. “Rather, they actively predict what is going to happen next, because by being prepared, they can process stimuli more efficiently.”
O’Reilly cites an important example of predictive processing, which the researchers used in their study: the control of eye movements. “You can actually only process quite a small portion of visual space accurately at any one time, which is why people tend to actively look at interesting objects,” O’Reilly tells Medical Xpress. “Parts of the brain that control eye movements – for example, the parietal cortex – are actively involved in trying to predict where visual objects that are worth looking at will occur next, in order to respond to them quickly and effectively.” Since the scientists were interested in how the brain forms predictions – such as where eye movements should be directed – they designed an experiment in which people’s expectations about where they should make eye movements were built up over time and then suddenly changed. (They did this by moving the stimuli participants were instructed to fixate on to a different part of the computer screen.)
"However," notes O’Reilly, "we know from previous work that activity in many brain areas is evoked when people are expecting to make an eye movement to one place, and actually they have to make an eye movement to another. A lot of this brain activity has to do with reprogramming the eye movement itself, rather than learning about the changed environment. That means we needed to design an experiment in which re-planning of eye movements was sometimes accompanied by learning, and sometimes not." The researchers accomplished this by color-coding stimuli: participants knew that colorful stimuli indicated a real change in the environment, while grey stimuli were to be ignored.
To quantify how much participants learned on each trial of the experiment, the team constructed a computer participant that learned about the environment in the same way the real, human participants did. Because they could determine exactly what the computer participant knew or believed about the environment – that is, where it would need to look – on each trial, the researchers could obtain mathematical measures of how surprising it found each stimulus (defined as how far the stimulus location was from where the computer participant expected it to be) and how much it learned on each trial.
Therefore, the computer participant allowed the scientists to separately measure the degree to which human participants had to respond to surprise in terms of reprogramming eye movements, and how much they learned on each trial. “We then needed to work out whether some parts of the brain were specifically involved in each of these processes,” O’Reilly continues. “To do this we used fMRI and looked for areas that increased their activity in proportion to how much the computer participant, and thereby the real participants, would need to reprogram their eye movements for each surprising stimulus – as well as the extent to which they’d have to update their predictions about future stimulus locations – on each trial.”
O’Reilly stresses that the computer participant was critical to addressing the challenges they encountered. “We had access to a complete model of what participants could know or should believe about where stimuli were expected to appear on each trial. That meant we could make very specific predictions about how much they should be surprised by certain stimuli and how much they learned from each stimulus.” The team checked these predictions by looking at behavioral measures like reaction time (participants were slower to move their eyes to surprising stimuli) and gaze dwell time (participants looked at stimuli for longer when the stimuli carried information about the possible locations of future stimuli).
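One minimal way to picture such a computer participant is a delta-rule learner that keeps a running estimate of the expected stimulus location, scores surprise as the distance between observation and expectation, and learns by shifting the estimate toward the observation. The sketch below is an illustrative assumption along those lines, not the authors’ actual model; the learning rate and one-dimensional locations are invented for the example.

```python
# Hypothetical sketch of a "computer participant": on each trial it reports
# how surprised it was (prediction error) and how much it learned (how far
# its expectation moved), the two quantities the researchers separated.

class ComputerParticipant:
    def __init__(self, expected_x=0.0, learning_rate=0.5):
        self.expected_x = expected_x        # where it expects the stimulus
        self.learning_rate = learning_rate  # illustrative value

    def observe(self, stimulus_x):
        """Return (surprise, amount_learned) for one trial."""
        surprise = abs(stimulus_x - self.expected_x)           # prediction error
        update = self.learning_rate * (stimulus_x - self.expected_x)
        self.expected_x += update                              # model updating
        return surprise, abs(update)

model = ComputerParticipant(expected_x=0.0)
# Stimuli cluster at x = 10, then jump to x = -10: surprise and learning
# both spike when the environment changes.
for x in [10, 10, 10, -10]:
    surprise, learned = model.observe(x)
    print(f"stimulus={x:+d}  surprise={surprise:.2f}  learned={learned:.2f}")
```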
O’Reilly describes how their study may inform understanding of neurological disorders in which this adjustment process fails by observing that a second saccade-sensitive region in the inferior posterior parietal cortex was activated by surprise and modulated by updating. “Some stroke victims are unable to move their eyes in order to look at stimuli that show up in their visual periphery, which turns out to be similar to the process of reprogramming to surprising stimuli in our model. In contrast,” she continues, “people with brain lesions in a slightly different brain region are able to move their eyes to look at stimuli, but seem unable to learn that stimuli could occur in some parts of space – usually towards the left of the body – even if given lots of hints and training.” Because the brain regions damaged in these two patient groups map onto the regions of parietal cortex active in the experiment’s reprogramming and updating conditions, the researchers think these two processes could be differentially affected in the two patient groups.
Moving forward, the researchers would like to test their paradigm in patients who have had strokes that damaged the different brain regions activated in their study. “We’d expect to find a difference between patients with damage in different parts of parietal cortex, such that one group might be slower to reprogram eye movements to all surprising stimuli whether these stimuli are informative about future stimulus locations or not,” O’Reilly concludes, “whereas the other group might have trouble learning that the location where stimuli are going to appear has changed.”
The brain doesn’t require simultaneous visual and audio stimulation to locate the source of a sound

As ventriloquists have long known, your eyes can sometimes tell your brain where a sound is coming from more convincingly than your ears can.
A series of experiments in humans and monkeys by Duke University researchers has found that the brain does not require simultaneous visual and audio stimulation to locate the source of a sound. Rather, visual feedback obtained from trying to find a sound with the eyes had a stronger effect than visual stimuli presented at the same time as the audio, according to the Duke study.
The findings could help those with mild hearing loss learn to localize voices better, improving their ability to communicate in noisy environments, said Jennifer Groh, a professor of psychology and neuroscience at Duke.
Locating where a sound is coming from is partially learned with the aid of vision. Researchers sought to learn more about how the brain locates the source of a sound when the source is unclear and there are a number of possible visual matches.
"Our study is related to ventriloquism, in which the visual image of a puppet’s mouth ‘captures’ the sound of the puppeteer’s voice," Groh said. "It is thought that one reason this illusion occurs is because vision normally teaches the brain how to tell where sounds are coming from. We investigated how the brain knows which visual stimulus should capture the location of a sound, such as why it is the puppet’s mouth and not some other visual stimulus."
The study, which appears Thursday (Aug. 29) in the journal PLOS ONE, tested two competing hypotheses. In one, the brain determines the location of a sound based on the simultaneous occurrence of audio and its visual source. In the other, the brain uses a “guess and check” method. In this scenario, visual feedback sent to the brain after the eye focuses on a sound affects how the eye searches for that sound in the future, possibly through the brain’s reward-related circuitry.
In both paradigms, the visual stimulus — an LED — was displaced from the sound. Groh’s team then looked for evidence that the LED caused a persistent mislocation of the sound.
"Surprisingly, we found that visual feedback exerts the more powerful effect on altering localization of sounds," Groh said. "This suggests that the active behavior of looking at the puppet during a ventriloquism performance plays a role in causing the shift in where you hear the voice."
Participants in the study — 11 humans and two rhesus monkeys — shifted their sight to a sound under different visual and audio scenarios.
In one scenario, called the “synchrony-only” task, a visual stimulus appeared at the same time as a sound but too briefly to provide feedback after an eye movement to that sound.
In another, the “feedback-only” task, the visual stimulus appeared during the execution of an eye movement to a sound, but was never on at the same time as the sound.
The study found that the “feedback-only task” exerted a much more powerful effect on the estimation of sound location, as measured with eye tracking, than did the other scenario. This suggests that those who have difficulty localizing sounds may benefit from practice involving eye movements.
On average, participants altered their eye movements in the direction of the lights’ location to a greater degree, about a quarter of the way, when the visual stimulus was presented as feedback than when it was presented at the same time as the sound, the study found.
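That quarter-of-the-way shift is the kind of incremental recalibration that can be written as a one-line update rule. The sketch below is an illustration with the gain set to the reported 0.25; treating recalibration as a single linear step is a simplification for the example, not the study’s model.

```python
def recalibrate(heard_x, seen_x, gain=0.25):
    """Shift the estimated sound location part-way toward the visual cue.

    The 0.25 gain mirrors the "about a quarter of the way" shift reported
    in the study; locations are in arbitrary units (e.g. degrees).
    """
    return heard_x + gain * (seen_x - heard_x)

# A sound heard at 0 degrees with an LED displaced to 8 degrees: the
# localization estimate moves 2 degrees toward the light.
print(recalibrate(0.0, 8.0))  # 2.0
```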
"This is about the brain’s self-improvement skills," said co-author Daniel Pages, a graduate student in Psychology & Neuroscience at Duke. "What we’re getting at is how the brain uses different types of information to improve how it does its job. In this case, it uses vision coupled with eye movements to improve hearing."
"We were surprised at how important the eye movements were," Groh said. "But finding sounds is really hard. Feedback about your performance is important for anything that is difficult, whether it is the B- you get on your homework or the error your eyes detect in localizing a sound."
(Source: today.duke.edu)
When something gets in the way of our ability to see, we quickly pick up a new way to look, in much the same way that we would learn to ride a bike, according to a new study published in the Cell Press journal Current Biology on August 15.

Our eyes are constantly on the move, darting this way and that four to five times per second. Now researchers have found that the precise manner of those eye movements can change within a matter of hours. This discovery by researchers from the University of Southern California might suggest a way to help those with macular degeneration better cope with vision loss.
"The system that controls how the eyes move is far more malleable than the literature has suggested," says Bosco Tjan of the University of Southern California. "We showed that people with normal vision can quickly adjust to a temporary occlusion of their foveal vision by adopting a consistent point in their peripheral vision as their new point of gaze."
The fovea refers to the small, center-most portion of the retina, which is responsible for our high-resolution vision. We move our eyes to direct the fovea to different parts of a scene, constructing a picture of the world around us. In those with age-related macular degeneration, progressive loss of foveal vision leads to visual impairment and blindness.
In the new study, MiYoung Kwon, Anirvan Nandy, and Tjan simulated a loss of foveal vision in six normally sighted young adults by blocking part of a visual scene with a gray disc that followed the individuals’ eye gaze. Those individuals were then asked to complete demanding object-following and visual-search tasks. Within three hours of working on those tasks, people showed a remarkably fast and spontaneous adjustment of eye movements. Once developed, that change in their “point of gaze” was retained over a period of weeks and was reengaged whenever their foveal vision was blocked.
Tjan and his team say they were surprised by the rate of this adjustment. They note that patients with macular degeneration frequently do adapt their point of gaze, but in a process that takes months, not days or hours. They suggest that practice with a visible gray disc like the one used in the study might help speed that process of visual rehabilitation along. The discovery also reveals that the oculomotor (eye movement) system prefers control simplicity over optimality.
"Gaze control by the oculomotor system, although highly automatic, is malleable in the same sense that motor control of the limbs is malleable," Tjan says. "This finding is potentially very good news for people who lose their foveal vision due to macular diseases. It may be possible to create the right conditions for the oculomotor system to quickly adjust," Kwon adds.
(Source: eurekalert.org)
Quick eye movements, called saccades, that enable us to scan a visual scene appear to act as a metronome for pushing information about that scene into memory.
Scientists at Yerkes National Primate Research Center, Emory University, have observed that in monkeys exploring images with their eyes, the onset of a saccade resets the rhythms of electrical activity (theta oscillations) in the hippocampus, a region of the brain important for memory formation.
Tracking eye movements is already a promising basis for diagnosing brain disorders such as Alzheimer’s disease and schizophrenia. A deeper understanding of how the rhythm of eye movements orchestrates memories could bolster the accuracy and power of eye-tracking diagnoses.
The findings were published this week in Proceedings of the National Academy of Sciences, Early Edition.
Senior author Elizabeth Buffalo was a researcher at the Yerkes National Primate Research Center and an associate professor of neurology at Emory University School of Medicine, and is currently associate professor of physiology and biophysics at the University of Washington in Seattle. The first author of the paper is postdoctoral fellow Michael Jutras, who is now an instructor at the University of Washington.
Theta oscillations are cycles of electrical activity in the brain occurring between 3 and 12 times per second. Scientists have previously seen theta oscillations in the hippocampus of rodents when the animals were actively exploring, sniffing or feeling something with their whiskers.
"Both animals and humans seem to take in sensory information at this theta rhythm," Buffalo says. "But one striking difference between rodents and primates is the way they gather information about the external world. Rodents are much more reliant on the senses of smell and touch."
She says the actions that are most comparable to rodents’ sniffing and whiskering in primates are saccades. When our eyes scan text or explore a picture, the eyes’ focus tends to jump from point to point several times per second.
Buffalo and Jutras examined electrical signals in the hippocampi of two rhesus monkeys while the monkeys were looking at a variety of pictures and the researchers tracked their eye movements. The researchers observed that after a saccade, the electrical signals in the hippocampus display a more coherent rhythm.

The rhythm reset a saccade imposes may be a way to ensure the hippocampus is receptive to new sensory information, the researchers propose.
“The eye movements are acting like the conductor of the hippocampal orchestra,” Jutras says. “The phase reset might be a mechanism to ensure the ongoing theta rhythm is in sync with incoming visual information.”
Scientists have previously hypothesized that theta oscillations in the hippocampus set the stage for memory formation. The researchers tested this idea by presenting the monkeys each image twice during a viewing session. Because all primates have an innate preference for novelty, monkeys tend to spend a longer time looking at new images and less time looking at repeated ones. The researchers inferred that the monkeys had a stronger memory of a given picture if, upon second viewing, they spent less time looking at it. The theta rhythm reset was more consistent during the viewing of images that the monkeys remembered well.
"Based on this finding, we concluded that this resetting of the theta rhythm is an important part of the memory process," Jutras says.
"This study has given us a better understanding of the function of the hippocampal theta rhythm, which has been well characterized in rodents but isn’t well understood in primates," he says. "A future goal is to investigate the relationship between hippocampal theta and eye movements during memory formation and navigation in humans. This could be possible with epilepsy patients who undergo monitoring of hippocampal activity as part of their treatment."
(Source: news.emory.edu)

Impaired visual signals might contribute to schizophrenia symptoms
By observing the eye movements of schizophrenia patients while playing a simple video game, a University of British Columbia researcher has discovered a potential explanation for some of their symptoms, including difficulty with everyday tasks.
The research, published in a recent issue of the Journal of Neuroscience, shows that, compared to healthy controls, schizophrenia patients had a harder time tracking a moving dot on the computer monitor with their eyes and predicting its trajectory. But the impairment of their eye movements was not severe enough to explain the difference in their predictive performance, suggesting a breakdown in their ability to interpret what they saw.
Lead author Miriam Spering, an assistant professor of ophthalmology and visual sciences, says the patients were having trouble generating or using an “efference copy” – a signal sent from the eye movement system in the brain indicating how much, and in what direction, their eyes have moved. The efference copy helps validate visual information from the eyes.
"An impaired ability to generate or interpret efference copies means the brain cannot correct an incomplete perception," says Spering, who conducted the dot-tracking experiments as a postdoctoral fellow at New York University, and is now conducting similar studies at UBC. The brain might fill in the blanks by extrapolating from prior experience, contributing to psychotic symptoms, such as hallucinations.
"But just as a person might, through practice, improve their ability to predict the trajectory of a moving dot, a person might be able to improve their ability to generate or use that efference copy," Spering says. "My vision would be a mobile device that patients could use to practice that skill, so they could more easily do common tasks that involve motion perception, such as walking along a crowded sidewalk."
Face Identification Accuracy is in the Eye (and Brain) of the Beholder
Though humans generally have a tendency to look at a region just below the eyes and above the nose toward the midline when first identifying another person, a small subset of people tend to look further down –– at the tip of the nose, for instance, or at the mouth. However, as UC Santa Barbara researchers Miguel Eckstein and Matthew Peterson recently discovered, “nose lookers” and “mouth lookers” can do just as well as everyone else when it comes to the split-second decision-making that goes into identifying someone. Their findings are in a recent issue of the journal Psychological Science.
"It was a surprise to us," said Eckstein, professor in the Department of Psychological & Brain Sciences, of the ability of that subset of "nose lookers" and "mouth lookers" to identify faces. In a previous study, he and postdoctoral researcher Peterson established through tests involving a series of face images and eye-tracking software that most humans tend to look just below the eyes when identifying another human being and when forced to look somewhere else, like the mouth, their face identification accuracy suffers.
The reason we look where we look, said the researchers, is evolutionary. With survival at stake and only a limited amount of time to assess who an individual might be, humans have developed the ability to make snap judgments by glancing at a place on the face that allows the observer’s eye to gather a massive amount of information, from the finer features around the eyes to the larger features of the mouth. In 200 milliseconds, we can tell whether another human being is friend, foe, or potential mate. The process is deceptively easy and seemingly negligible in its quickness: Identifying another individual is an activity on which we embark virtually from birth, and is crucial to everything from day-to-day social interaction to life-or-death situations. Thus, our brain devotes specialized circuitry to face recognition.
"One of, if not the most, difficult task you can do with the human face is to actually identify it," said Peterson, explaining that each time we look at someone’s face, it’s a little different –– perhaps the angle, or the lighting, or the face itself has changed –– and our brains constantly work to associate the current image with previously remembered images of that face, or faces like it, in a continuous process of recognition. Computer vision has nowhere near that capacity in identifying faces, yet.
So it would seem to follow that those who look at other parts of a person’s face might perform less well, and might be slower to recognize potential threats, or opportunities.
Or so the researchers thought. In a series of face-identification tasks, they found a small group of observers who departed from the typical just-below-the-eyes gaze. All participants were Caucasian, had normal or corrected-to-normal vision, and had no history of neurological disorders, qualities that controlled for cultural, physical, or neurological factors that could influence a person's gaze.
But instead of performing less well, as the investigators' theoretical analysis would have predicted, these participants identified faces with the same degree of accuracy as just-below-the-eyes lookers. Furthermore, when the nose-looking participants were forced to look at the eyes to make the identification, their accuracy degraded.
The findings both fascinate the researchers and set up a chicken-and-egg scenario. One possibility is that people tailor their eye movements to the properties of their visual system: everything from the eye structures to the brain functions they are born with and develop. If, for example, someone sees well in the upper visual field (the region above where they look), they can afford to look lower on the face without losing the detail around the eyes when identifying someone. According to Eckstein, most humans are known to see better in the lower visual field.
The other possibility is the reverse: that our visual systems adapt to our looking behavior. If at an early age a person developed the habit of looking lower on the face to identify someone else, over time the brain circuits specialized for face identification could develop and arrange themselves around that tendency.
"The main finding is that people develop distinct optimal face-looking strategies that maximize face identification accuracy," said Peterson. "In our framework, an optimized strategy or behavior is one that results in maximized performance. Thus, when we say that the observer-looking behavior was self-optimal, it refers to each individual fixating on locations that maximize their identification accuracy."
Future research will delve deeper into the mechanisms involved in those who look lower on the face to determine what could drive that gaze pattern and what information is gathered.
Never forget a face? Researchers find women have better memory recall than men
New research from McMaster University suggests women can remember faces better than men, in part because they spend more time studying features without even knowing it, a strategy the researchers say can be taught to help improve anyone's memory.
The findings help to answer long-standing questions about why some people can remember faces easily while others quickly forget someone they’ve just met.
“The way we move our eyes across a new individual’s face affects our ability to recognize that individual later,” explains Jennifer Heisz, a research fellow at the Rotman Research Institute at Baycrest Health Sciences and newly appointed assistant professor in the Department of Kinesiology at McMaster University.
She co-authored the paper with David Shore, psychology professor at McMaster and psychology graduate student Molly Pottruff.
“Our findings provide new insights into the potential mechanisms of episodic memory and the differences between the sexes. We discovered that women look more at new faces than men do, which allows them to create a richer and more superior memory,” Heisz says.
Eye tracking technology was used to monitor where study participants looked—be it eyes, nose or mouth—while they were shown a series of randomly selected faces on a computer screen. Each face was assigned a name that participants were asked to remember.
One group was tested over the course of one day; another over the course of four days.
“We found that women fixated on the features far more than men, but this strategy operates completely outside of our awareness. Individuals don’t usually notice where their eyes fixate, so it’s all subconscious.”
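The kind of fixation analysis described above can be sketched in code. The following is a minimal, hypothetical illustration (the area-of-interest boxes, coordinates, and fixation data are invented for the example, not taken from the study): each eye-tracker fixation is assigned to a facial region such as the eyes, nose, or mouth, and the proportion of fixations per region is tallied.

```python
# Hypothetical sketch of classifying eye-tracker fixations into facial
# areas of interest (AOIs) and computing fixation proportions.
from collections import Counter

# Invented AOI bounding boxes in screen pixels: (x_min, y_min, x_max, y_max)
AOIS = {
    "eyes": (120, 80, 280, 140),
    "nose": (160, 140, 240, 200),
    "mouth": (150, 200, 250, 250),
}

def classify_fixation(x, y):
    """Return the name of the AOI containing the fixation point, or 'other'."""
    for name, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "other"

def fixation_proportions(fixations):
    """Proportion of fixations landing in each AOI for one participant."""
    counts = Counter(classify_fixation(x, y) for x, y in fixations)
    total = sum(counts.values())
    return {name: counts[name] / total for name in counts}

# Example: one participant's (x, y) fixations during a face presentation
fixations = [(200, 100), (210, 110), (190, 170), (200, 220), (205, 105)]
print(fixation_proportions(fixations))
```

Comparing these per-region proportions across groups is one simple way scanning differences like those reported here could be quantified.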
The implications are exciting, she says, because it means anyone can be taught to scan more and potentially have better memory.
“The results open the possibility that changing our eye movement pattern may lead to better memory,” says Shore. “Increased scanning may prove to be a simple strategy to improve face memory in the general population, especially for individuals with memory impairment like older adults.”