Neuroscience

Articles and news from the latest research reports.

Posts tagged face perception

Why we can’t tell a Hollywood heartthrob from his stunt double

Johnny Depp has an unforgettable face. Tony Angelotti, his stunt double in “Pirates of the Caribbean,” does not. So why is it that when they’re swashbuckling on screen, audiences worldwide see them both as the same person? UC Berkeley scientists have cracked that mystery.

Researchers have pinpointed the brain mechanism by which we latch on to a particular face even when it changes. While it may seem as though our brain is tricking us into morphing, say, an actor with his stunt double, this “perceptual pull” is actually a survival mechanism, giving us a sense of stability, familiarity and continuity in what would otherwise be a visually chaotic world, researchers point out.

“If we didn’t have this bias of seeing a face as the same from one moment to the next, our perception of people would be very confusing. For example, a friend or relative would look like a completely different person with each turn of the head or change in light and shade,” said Alina Liberman, a doctoral student in neuroscience at UC Berkeley and lead author of the study published Thursday, Oct. 2 in the online edition of the journal, Current Biology.

In searching for an exact match to a “target” face on a computer screen, study participants consistently identified a face that was not the target face, but a composite of the faces they had seen over the past few seconds. Moreover, participants judged the match to be more similar to the target face than it really was. The results help explain how humans process visual information from moment to moment to stabilize their environment.

“Our visual system loses sensitivity to stunt doubles in movies, but that’s a small price to pay for perceiving our spouse’s identity as stable,” said David Whitney, a professor of psychology at UC Berkeley and senior author of the study.

Previous research in Whitney’s lab established the existence of a “Continuity Field” in which we visually meld similar objects seen within a 15-second time frame. For example, that study helped explain why we miss movie-mistake jump cuts, such as Harry Potter’s T-shirt abruptly changing from a crewneck into a henley shirt in the “Order of the Phoenix.”

This latest study builds on that by testing how a Continuity Field applies to our observation and recognition of faces, arguably one of the most important human social and perceptual functions, researchers said.

“Without the extraordinary ability to recognize faces, many social functions would be lost. Imagine picking up your child at school and not being able to recognize which kid is yours,” Whitney said. “Fortunately, this type of face blindness is rare. What is common, however, are changes in viewpoint, noise, blur, and lighting changes that could cause faces to appear very different from moment to moment. Our results suggest that the visual system is biased against such wavering perception in favor of continuity.”

To test this phenomenon, study participants viewed dozens of faces that varied in similarity. Every six seconds, a “target face” flashed on the computer screen for less than a second, followed by a series of faces that morphed from one to the next with each click of an arrow key. Participants clicked through the faces until they found the one that most closely matched the “target face.” Time and again, the face they picked was a combination of the two most recently seen target faces.

“Regardless of whether study participants cycled through many faces until they found a match or quickly named which face they saw, perception of a face was always pulled towards face identities they saw within the last 10 seconds,” Liberman said. “Importantly, if the faces that participants recently saw all looked very distinct, the visual system did not merge these identities together, indicating that this perceptual pull does depend on the similarity of recently seen faces.”
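The “perceptual pull” described above can be caricatured in a few lines of code: the perceived identity is the current face dragged toward recently seen faces, with the pull fading over time and switching off for very distinct faces. The half-life, decay form, and similarity cutoff below are hypothetical illustration values, not parameters from the study.

```python
import math

def perceived_identity(current, recent, half_life=5.0, cutoff=20.0):
    """Caricature of a 'continuity field'. Identities are scalar positions
    on a morph continuum; `recent` is a list of (seconds_ago, identity)
    pairs. Faces more dissimilar than `cutoff` exert no pull at all,
    mirroring the finding that very distinct faces are not merged."""
    weights = [1.0]          # the current stimulus itself
    values = [current]
    for seconds_ago, identity in recent:
        if abs(identity - current) > cutoff:
            continue         # too distinct: no merging
        # pull decays with time since the face was seen (hypothetical form)
        weights.append(0.5 * math.exp(-seconds_ago * math.log(2) / half_life))
        values.append(identity)
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)
```

On this sketch, a face seen three seconds after a similar one is perceived partway between the two, while a face following a very different one is perceived veridically.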

In a follow-up experiment, the faces were viewed from different angles instead of frontal views to ensure that study participants were not latching on to a particular feature, say, bushy eyebrows or a distinct shadow across a cheekbone, but were actually recognizing the entire visage.

“Sequential faces that are somewhat similar will display a much more striking family resemblance than is actually present, simply because of this Continuity Field for faces,” Liberman said.

Filed under visual system face perception perceptual continuity field neuroscience science

Neurons See What We Tell Them to See

Neurons programmed to fire at specific faces—such as the famously reported “Jennifer Aniston neuron”—may be more in line with the conscious recognition of faces than the actual images seen. Subjects presented with a blended face, such as an amalgamation of Bill Clinton and George W. Bush, had significantly more firing of such face-specific neurons when they recognized the blended or morphed face as one person or the other. Results of the study led by Christof Koch at the Allen Institute for Brain Science, and carried out by neuroscientists Rodrigo Quian Quiroga at the University of Leicester, Alexander Kraskov at University College London and Florian Mormann at the University of Bonn, under the clinical supervision of the neurosurgeon Itzhak Fried at the University of California at Los Angeles Medical School, are published online today in the journal Neuron.

Some neurons in the region of the brain known as the medial temporal lobe are observed to be extremely selective in the stimuli to which they respond. A cell may fire only in response to different pictures of a particular person who is very familiar to the subject (such as a loved one or a celebrity), to the person’s written or spoken name, or simply to recalling the person from memory.

“These highly specific cells are an entry point to investigate how the brain makes meaning out of visual information,” explains Christof Koch, Chief Scientific Officer at the Allen Institute for Brain Science and senior author on the paper. “We wanted to know how these cells responded not just to a simple image of a person’s face, but to a more ambiguous image of that face averaged or morphed with another person’s face.”

For the trials, subjects were shown the face of an individual such as Bill Clinton or George W. Bush (the “adaptor” image), and then an ambiguous face that was a blend of both faces. Primed with the Clinton image, subjects tended to recognize Bush’s face in the blended image, while subjects who saw Bush’s face first recognized the blended face as Clinton. That is, even though the blended images were identical, subjects tended to consciously perceive the identity of the face to which they were not adapted.

Researchers wanted to know whether these selective neurons responded to the actual image on the screen, or whether they responded more to the perception that the image caused in the brain of the subject. When subjects recognized the ambiguous face as belonging to Clinton, their Clinton-specific neurons fired. However, when subjects recognized that same face as Bush, the neurons fired significantly less. These results indicated that conscious recognition of the face played a crucial role in whether the neurons fired, rather than the raw visual stimulus.

“This study provides further evidence that stimulus-specific neurons in the medial temporal lobe follow the subjective perception of the person, as opposed to faithfully reporting the visual stimulus the person sees,” explains Koch. “This distinction may help us glean insight into how the brain takes raw visual information and transforms it into something meaningful, which can be further modulated by other aspects of experience in the brain.”

Filed under neurons medial temporal lobe decision making face perception neuroscience science

Faces Are More Likely to Seem Alive When We Want to Feel Connected

Feeling socially disconnected may lead us to lower our threshold for determining that another being is animate or alive, according to new research published in Psychological Science, a journal of the Association for Psychological Science.

“This increased sensitivity to animacy suggests that people are casting a wide net when looking for people they can possibly relate to — which may ultimately help them maximize opportunities to renew social connections,” explains psychological scientist and lead researcher Katherine Powers of Dartmouth College.

These findings enhance our understanding of the factors that contribute to face perception, mind perception, and social relationships, but they could also shed light on newer types of relationships that have emerged in the modern age, Powers argues, including our relationships with pets, online avatars, and even pieces of technology, such as computers, robots, and cell phones.

Feeling socially connected is a critical part of human life that impacts both mental and physical health; when we feel disconnected from others, we try to replenish our social connections.

“As social beings, we have an intrinsic motivation to pay attention to and connect with other people,” says Powers. “We wanted to examine the influence of this social motive on one of the most basic, low-level aspects of social perception: deciding whether or not a face is alive.”

Powers and colleagues had 30 college students view images of faces, which were actually morphs created by combining inanimate faces (such as a doll’s face) with human faces. The morphs ranged from 0% human to 100% human and showed both male and female faces.

The morphs were presented in random order and the students had to decide whether each face was animate or inanimate. Afterwards, they completed a survey that gauged their desire for social connections, in which they rated their agreement with statements such as “I want other people to accept me.”

The data revealed that desire for social connections was associated with a lower threshold for animacy. In other words, participants who scored high on the social-connections measure didn’t need to see as many human-like features in a face in order to decide that it was alive.
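The idea of a “threshold for animacy” along a 0% to 100% human morph continuum can be sketched simply: find the point on the continuum where an observer’s judgments flip from “inanimate” to “animate.” This midpoint-crossing estimate is an illustration only; the actual study estimated thresholds with more careful psychometric analysis.

```python
def animacy_threshold(morph_levels, judged_alive):
    """Estimate the morph level (% human) at which an observer starts
    judging faces as alive: the midpoint of the first inanimate->animate
    crossing along the sorted continuum. A lower threshold means fewer
    human-like features are needed to see life."""
    pairs = sorted(zip(morph_levels, judged_alive))
    for (lo_x, lo_alive), (hi_x, hi_alive) in zip(pairs, pairs[1:]):
        if not lo_alive and hi_alive:
            return (lo_x + hi_x) / 2
    return None  # no crossing observed
```

On this sketch, a socially disconnected observer who calls even weakly human morphs “alive” comes out with a lower threshold than a connected observer who waits for strongly human morphs.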

To see if there might be a causal link, Powers and colleagues conducted another study in which they experimentally manipulated feelings of social connection.

A separate group of college students completed a personality questionnaire and were provided feedback ostensibly based on the questionnaire. In reality, the feedback was determined by random assignment. Some students were told that their future lives would be isolated and lonely, while others were told their lives would contain long-lasting, stable relationships. The feedback also included personality descriptions and statements tailored to each participant to ensure believability.

The students then viewed the face morphs.

As expected, students who had been told they would be isolated and lonely showed lower thresholds for animacy than those who were told they would have long-lasting relationships.

These findings are particularly interesting, the researchers argue, because previous research has shown that people are typically cautious in determining whether a face is alive:

“What’s really interesting here is the degree of variability in this perception,” says Powers. “Even though two people may be looking at the same face, the point at which they see life and decide that person is worthy of meaningful social interaction may not be the same — our findings show that it depends on an individual’s social relationship status and motivations for future social interactions.”

“I think the fact that we can observe such a bias in the perception of basic social cues really underscores the fundamental nature of the human need for social connection,” Powers adds.

Filed under face perception social perception social interaction psychology neuroscience science

Our brains judge a face’s trustworthiness - Even when we can’t see it

Our brains are able to judge the trustworthiness of a face even when we cannot consciously see it, a team of scientists has found. Their findings, which appear in the Journal of Neuroscience, shed new light on how we form snap judgments of others.

“Our findings suggest that the brain automatically responds to a face’s trustworthiness before it is even consciously perceived,” explains Jonathan Freeman, an assistant professor in New York University’s Department of Psychology and the study’s senior author.

“The results are consistent with an extensive body of research suggesting that we form spontaneous judgments of other people that can be largely outside awareness,” adds Freeman, who conducted the study as a faculty member at Dartmouth College.

The study’s other authors included Ryan Stolier, an NYU doctoral candidate, Zachary Ingbretsen, a research scientist who previously worked with Freeman and is now at Harvard University, and Eric Hehman, a post-doctoral researcher at NYU.

The researchers focused on the workings of the brain’s amygdala, a structure that is important for humans’ social and emotional behavior. Previous studies have shown this structure to be active in judging the trustworthiness of faces. However, it had not been known if the amygdala is capable of responding to a complex social signal like a face’s trustworthiness without that signal reaching perceptual awareness.

To gauge this part of the brain’s role in making such assessments, the study’s authors conducted a pair of experiments in which they monitored the activity of subjects’ amygdala while the subjects were exposed to a series of facial images.

These images included both standardized photographs of actual strangers’ faces as well as artificially generated faces whose trustworthiness cues could be manipulated while all other facial cues were controlled. The artificially generated faces were computer synthesized based on previous research showing that cues such as higher inner eyebrows and pronounced cheekbones are seen as trustworthy and lower inner eyebrows and shallower cheekbones are seen as untrustworthy.

Prior to the start of these experiments, a separate group of subjects examined all the real and computer-generated faces and rated how trustworthy or untrustworthy they appeared. As previous studies have shown, subjects strongly agreed on the level of trustworthiness conveyed by each given face.

In the experiments, a new set of subjects viewed these same faces inside a brain scanner, but were exposed to the faces very briefly—for only a matter of milliseconds. This rapid exposure, together with another feature known as “backward masking,” prevented subjects from consciously seeing the faces. Backward masking works by presenting subjects with an irrelevant “mask” image that immediately follows an extremely brief exposure to a face, which is thought to terminate the brain’s ability to further process the face and prevent it from reaching awareness. In the first experiment, the researchers examined amygdala activity in response to three levels of a face’s trustworthiness: low, medium, and high. In the second experiment, they assessed amygdala activity in response to a fully continuous spectrum of trustworthiness.
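The backward-masking sequence just described can be sketched as an ordered list of screen events. The durations below are hypothetical placeholders, chosen only to show the structure: a face exposure far too brief to reach awareness, cut off immediately by a longer mask.

```python
def masked_trial(face_ms=33, mask_ms=167):
    """Sketch of one backward-masked trial: the face is flashed for a few
    tens of milliseconds and immediately replaced by an irrelevant mask
    image, which is thought to interrupt further processing of the face
    before it can reach awareness. Durations are illustrative, not the
    paper's exact parameters."""
    return [
        ("fixation", 500),   # orient gaze before the stimulus
        ("face", face_ms),   # too brief to be consciously seen
        ("mask", mask_ms),   # immediately follows, terminating processing
        ("response", None),  # subject responds while brain activity is recorded
    ]
```

The key property is the ordering: the mask starts the instant the face offset occurs, with no blank gap in between.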

Across the two experiments, the researchers found that specific regions inside the amygdala exhibited activity tracking how untrustworthy a face appeared, and other regions inside the amygdala exhibited activity tracking the overall strength of the trustworthiness signal (whether untrustworthy or trustworthy)—even though subjects could not consciously see any of the faces.

“These findings provide evidence that the amygdala’s processing of social cues in the absence of awareness may be more extensive than previously understood,” observes Freeman. “The amygdala is able to assess how trustworthy another person’s face appears without it being consciously perceived.”

Filed under amygdala trustworthiness face perception brain activity psychology neuroscience science

Real or Fake? Research Shows Brain Uses Multiple Clues for Facial Recognition

Faces fascinate. Babies love them. We look for familiar or friendly ones in a crowd. And video game developers and movie animators strive to create faces that look real rather than fake. Determining how our brains decide what makes a face “human” and not artificial is a question that Dr. Benjamin Balas of North Dakota State University, Fargo, and of the Center for Visual and Cognitive Neuroscience, studies in his lab. New research by Balas and NDSU graduate Christopher Tonsager, published online in the London-based journal Perception, shows that it takes more than eyes to make a face look human.

Researchers study the brain to learn how its specialized circuits process information in seconds to distinguish whether faces are real or fake. Balas and Tonsager note that people interact with artificial faces and characters in video games, watch them in movies, and see artificial faces used more widely as social agents in other settings. “Whether or not a face looks real determines a lot of things,” said Balas, assistant professor of psychology. “Can it have emotions? Can it have plans and ideas? We wanted to know what information you use to decide if a face is real or artificial, since that first step determines a number of judgments that follow.”

Results of the study show that people combine information across many parts of the face to make decisions about how “alive” it is, and that the appearances of these regions interact with each other. Previous research suggests that eyes are especially important for facial recognition. The NDSU study found, however, that when you’re deciding if a face is real or artificial, the eyes and the skin both matter to about the same degree.

Balas and Tonsager, then an undergraduate researcher in psychology, recruited 45 study participants, whose judgments were evaluated as they viewed altered facial images. Tonsager cropped images of real faces so that only the face and neck showed, without any hair. A program known as FaceGen Modeller was used to transform the images into 3D computer-generated models of faces. Photos were then computer-manipulated into negative images. In two experiments, transformations of real and artificial faces were used to determine whether contrast negation affected the ability to tell a real face from an artificial one, and whether the eyes make a disproportionate contribution to animacy discrimination relative to the rest of the face.
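Contrast negation, the manipulation named above, is simple to state in code: invert every pixel intensity so that light regions become dark and dark regions become light, while shapes and edges are preserved. The sketch below assumes an 8-bit grayscale image stored as nested lists.

```python
def negate_contrast(image):
    """Contrast negation of an 8-bit grayscale image: each pixel value v
    becomes 255 - v. Geometry (edges, feature locations) is unchanged;
    only the polarity of light and dark is flipped."""
    return [[255 - px for px in row] for row in image]
```

This is why negation is a useful probe: any drop in performance on negated faces reflects a reliance on surface properties such as skin tone, not on facial shape.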

“We assumed that the eyes were the key in distinguishing real vs. computer generated, but to our surprise, the results were not significant enough for us to conclude this,” said Tonsager. “However, we did find that when the skin tone is negated, it was more difficult for our participants to determine if it was a real or artificial face. The research leads us to conclude that the entire ‘eye region’ might play a substantial role in the distinction between real or artificial.”

“Beyond telling us more about the distinction your brain makes between a face and a non-face, our results are also relevant to anybody who wants to develop life-like computer graphics,” explained Balas. “Developing artificial faces that look real is a growing industry, and we know that artificial faces that aren’t quite right can look downright creepy. Our work, both in the current paper and ongoing studies in the lab, has the potential to inform how designers create new and better artificial faces for a range of applications.”

Balas and Tonsager also presented their research findings at the Vision Sciences Society 13th Annual Meeting, May 16-21, in St. Petersburg, Florida. http://www.visionsciences.org/meeting.html

Filed under facial recognition artificial face face perception visual perception psychology neuroscience science

Pleasant Smells Increase Facial Attractiveness

New research from the Monell Chemical Senses Center reveals that women’s faces are rated as more attractive in the presence of pleasant odors. In contrast, odor pleasantness had less effect on the evaluation of age. The findings suggest that the use of scented products such as perfumes may, to some extent, alter how people perceive one another.

“Odor pleasantness and facial attractiveness integrate into one joint emotional evaluation,” said lead author Janina Seubert, PhD, a cognitive neuroscientist who was a postdoctoral fellow at Monell at the time the research was conducted. “This may indicate a common site of neural processing in the brain.”

Perfumes and scented products have been used for centuries as a way to enhance overall personal appearance. Previous studies had shown perception of facial attractiveness could be influenced when using unpleasant vs. pleasant odors. However, it was not known whether odors influence the actual visual perception of facial features or alternatively, how faces are emotionally evaluated by the brain.

The current study design centered on the principle that judging attractiveness and age involve two distinct perceptual processes: attractiveness is regarded as an emotional judgment, while judgments of age are believed to be cognitive, or rationally based.

In the study, published in open access journal PLOS ONE, 18 young adults, two thirds of whom were female, were asked to rate the attractiveness and age of eight female faces, presented as photographs. The images varied in terms of natural aging features.

While participants evaluated the images, one of five odors was simultaneously released. The odors were blends of fish oil (unpleasant) and rose oil (pleasant), ranging from predominantly fish oil to predominantly rose oil. Subjects were asked to rate the age of the face in each photograph, the attractiveness of the face, and the pleasantness of the odor.
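As a rough illustration of this kind of stimulus design, the five-step blend series can be sketched in code. The function below and its linear mixing ratios are hypothetical, chosen only to show the idea of a graded unpleasant-to-pleasant odor continuum; the actual proportions used in the study are not reported here.

```python
# Hypothetical sketch of a five-step odor-blend series, from predominantly
# fish oil (unpleasant) to predominantly rose oil (pleasant). The linear
# ratios are illustrative, not taken from the paper.
def make_blends(n=5):
    """Return (fish_fraction, rose_fraction) pairs from mostly-fish to mostly-rose."""
    blends = []
    for i in range(n):
        rose = i / (n - 1)  # 0.0 -> 1.0 across the series
        blends.append((round(1 - rose, 2), round(rose, 2)))
    return blends

print(make_blends())
# [(1.0, 0.0), (0.75, 0.25), (0.5, 0.5), (0.25, 0.75), (0.0, 1.0)]
```

Each pair sums to 1, so the series varies only the relative proportion of the two odorants while holding total odor concentration notionally constant.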

Across the range of odors, odor pleasantness directly influenced ratings of facial attractiveness, suggesting that olfactory and visual cues are combined into a single emotional evaluation of the face.

With regard to the cognitive task of age evaluation, visual age cues (more wrinkles and blemishes) were linked to older age perception. However, odor pleasantness had a mixed effect. Visual age cues strongly influenced age perception during pleasant odor stimulation, making older faces look older and younger faces look younger. This effect was weakened in the presence of unpleasant odors, so that younger and older faces were perceived to be more similar in age.

Jean-Marc Dessirier, Lead Scientist at Unilever and a co-author on the study said, “These findings have fascinating implications in terms of how pleasant smells may help enhance natural appearance within social settings. The next step will be to see if the findings extend to evaluation of male facial attractiveness.”

(Source: monell.org)

Filed under facial attractiveness smell odor pleasantness sensory perception face perception psychology neuroscience science

73 notes

Facial Recognition is More Accurate in Photos Showing Whole Person

Subtle body cues allow people to identify others with surprising accuracy when faces are difficult to differentiate. This skill may help researchers improve person-recognition software and expand their understanding of how humans recognize each other.

A study published in Psychological Science by researchers at The University of Texas at Dallas demonstrates that humans rely on non-facial cues, such as body shape and build, to identify people in challenging viewing conditions, such as poor lighting.

“Psychologists and computer scientists have concentrated almost exclusively on the role of the face in person recognition,” explains lead researcher Allyson Rice. “Our results show that the body can also provide important and sometimes sufficient identity information for person recognition.”

During several experiments, researchers asked college-age participants to look at images of two people side-by-side and identify whether the images showed the same person. Some pairs looked similar despite showing different people, while other image pairs showed the same person with a different appearance. The researchers used computer face recognition systems to find pairs of pictures in which facial characteristics were difficult to use for identification.

Overall, participants accurately discerned whether the images showed the same person when they were provided complete images that showed both the face and body. Participants were just as accurate in identifying people in the image pairs when the faces were blocked out and only the bodies were shown. But, similarly to the computer-based face recognition system, participants had trouble identifying images of the subjects’ faces without their bodies.

image

Image: Above are pairs of photographs that face-recognition software failed to identify correctly. The top two photos are of the same person, while the bottom two are of different people.

When asked, participants thought they were using primarily facial features to identify the subjects. To unravel the paradox, the researchers used eye-tracking equipment to determine where participants were actually looking. They found participants spent more time looking at the body whenever the face did not provide enough information to identify the subjects.

“People’s recognition strategies were inaccessible to their conscious awareness,” Rice said. “This provides a cautionary tale in ascribing credibility to people’s subjective reports of how they came to an identity decision.”

Dr. Alice O’Toole, Aage and Margareta Møller Professor in the School of Behavioral and Brain Sciences, has worked on facial recognition for over 15 years and supervised the project.

“Given the widespread use of face recognition systems in security settings, it is important for these systems to make use of all potentially helpful information,” O’Toole said. “Our work shows that the body can be surprisingly useful for identification, especially when the face fails to provide the necessary identity information.”

(Source: utdallas.edu)

Filed under facial recognition face perception body cues eye tracking conscious awareness psychology neuroscience science

65 notes

“Seeing” Faces Through Touch

Our sense of touch can contribute to our ability to perceive faces, according to new research published in Psychological Science, a journal of the Association for Psychological Science.

“In daily life, we usually recognize faces through sight and almost never explore them through touch,” says lead researcher Kazumichi Matsumiya of Tohoku University in Japan. “But we use information from multiple sensory modalities in order to perceive many everyday non-face objects and events, such as speech perception or object recognition — these new findings suggest that even face processing is essentially multisensory.”

In a series of studies, Matsumiya took advantage of a phenomenon called the “face aftereffect” to investigate whether our visual system responds to nonvisual signals when processing faces. In the face aftereffect, we adapt to a face with a particular expression — happiness, for example — which causes us to perceive a subsequent neutral face as having the opposite facial expression (i.e., sadness).

Matsumiya hypothesized that if the visual system really does respond to signals from another modality, then we should see evidence for face aftereffects from one modality to the other. So, adaptation to a face that is explored by touch should produce visual face aftereffects.

To test this, Matsumiya had participants explore face masks concealed below a mirror by touching them. After this adaptation period, the participants were visually presented with a series of faces that had varying expressions and were asked to classify the faces as happy or sad. The visual faces and the masks were created from the same exemplar.

In line with his hypothesis, Matsumiya found that exploring the face masks by touch shifted participants’ perception of the subsequently presented visual faces, relative to participants who had no adaptation period: the visual faces were perceived as having the expression opposite to that of the mask.
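The logic of this aftereffect can be caricatured with a toy decision model. The sketch below is purely illustrative, not from the paper: it assumes faces lie on a sad-to-happy morph continuum and that adaptation shifts the category boundary by an arbitrary amount (0.2 here), so a neutral face falls into the opposite category.

```python
# Toy model of the face aftereffect (illustrative assumption, not the
# study's method). Faces lie on a morph continuum from sad (-1.0) through
# neutral (0.0) to happy (+1.0). Adaptation shifts the happy/sad category
# boundary toward the adapted expression, so a neutral face is pushed into
# the opposite category.
def classify(morph_level, adapted_to=None, shift=0.2):
    boundary = 0.0
    if adapted_to == "happy":
        boundary += shift   # after adapting to happy, more faces read as sad
    elif adapted_to == "sad":
        boundary -= shift   # after adapting to sad, more faces read as happy
    return "happy" if morph_level > boundary else "sad"

# A neutral face (0.0) is judged as the opposite of the adapted expression:
print(classify(0.0, adapted_to="happy"))  # sad
print(classify(0.0, adapted_to="sad"))    # happy
```

The crossmodal finding amounts to saying that `adapted_to` can be set by touch as well as by vision, and (per the fourth experiment) vice versa.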

Further experiments ruled out other explanations for the results, including the possibility that the face aftereffects emerged because participants were intentionally imagining visual faces during the adaptation period.

And a fourth experiment revealed that the aftereffect also works the other way: Visual stimuli can influence how we perceive a face through touch.

According to Matsumiya, current views on face processing assume that the visual system only receives facial signals from the visual modality — but these experiments suggest that face perception is truly crossmodal.

“These findings suggest that facial information may be coded in a shared representation between vision and haptics in the brain,” notes Matsumiya, suggesting that these findings may have implications for enhancing vision and telecommunication in the development of aids for the visually impaired.

Filed under face perception face processing face aftereffects adaptation psychology neuroscience science

134 notes

Face Identification Accuracy is in the Eye (and Brain) of the Beholder
Though humans generally tend to look at a region just below the eyes and above the nose, toward the midline, when first identifying another person, a small subset of people look further down: at the tip of the nose, for instance, or at the mouth. However, as UC Santa Barbara researchers Miguel Eckstein and Matthew Peterson recently discovered, “nose lookers” and “mouth lookers” can do just as well as everyone else at the split-second decision-making that goes into identifying someone. Their findings appear in a recent issue of the journal Psychological Science.

"It was a surprise to us," said Eckstein, professor in the Department of Psychological & Brain Sciences, referring to the ability of these "nose lookers" and "mouth lookers" to identify faces. In a previous study using a series of face images and eye-tracking software, he and postdoctoral researcher Peterson established that most humans tend to look just below the eyes when identifying another person, and that when forced to look somewhere else, such as the mouth, their face identification accuracy suffers.

The reason we look where we look, said the researchers, is evolutionary. With survival at stake and only a limited amount of time to assess who an individual might be, humans have developed the ability to make snap judgments by glancing at a place on the face that allows the observer’s eye to gather a massive amount of information, from the finer features around the eyes to the larger features of the mouth. In 200 milliseconds, we can tell whether another human being is friend, foe, or potential mate. The process is deceptively easy and seemingly negligible in its quickness: identifying another individual is an activity we embark on virtually from birth, and it is crucial to everything from day-to-day social interaction to life-or-death situations. Thus, our brain devotes specialized circuitry to face recognition.

"One of, if not the most, difficult task you can do with the human face is to actually identify it," said Peterson, explaining that each time we look at someone’s face, it is a little different: perhaps the angle, the lighting, or the face itself has changed, and our brains constantly work to associate the current image with previously remembered images of that face, or faces like it, in a continuous process of recognition. Computer vision does not yet come close to that capacity.

So it would seem to follow that those who look at other parts of a person’s face might perform less well, and might be slower to recognize potential threats or opportunities.

Or so the researchers thought. In a series of face identification tasks, the researchers found a small group that departed from the typical just-below-the-eyes gaze. The observers were Caucasian, had normal or corrected-to-normal vision, and had no history of neurological disorders, qualities that controlled for cultural, physical, or neurological factors that could influence a person’s gaze.

But instead of performing less well, as the investigators’ theoretical analysis would have predicted, these participants identified faces with the same accuracy as just-below-the-eyes lookers. Furthermore, when the nose-looking participants were forced to look at the eyes during identification, their accuracy degraded.

The findings both fascinate the researchers and set up a chicken-and-egg scenario. One possibility is that people tailor their eye movements to the properties of their visual system, everything from their eye structures to the brain functions they are born with and develop. If, for example, someone sees well in the upper visual field (the region above where they look), they can afford to look lower on the face without losing detail around the eyes when identifying someone. According to Eckstein, most humans are known to see better in the lower visual field.

The other possibility is the reverse: our visual systems adapt to our looking behavior. If at an early age a person developed the habit of looking lower on the face to identify others, then over time the brain circuits specialized for face identification could develop and organize themselves around that tendency.

"The main finding is that people develop distinct optimal face-looking strategies that maximize face identification accuracy," said Peterson. "In our framework, an optimized strategy or behavior is one that results in maximized performance. Thus, when we say that the observer-looking behavior was self-optimal, it refers to each individual fixating on locations that maximize their identification accuracy."

Future research will delve deeper into the mechanisms at work in those who look lower on the face, to determine what drives that gaze pattern and what information is gathered.

Filed under eye movements face recognition face perception psychology neuroscience science

1,138 notes

To Get the Best Look at a Person’s Face, Look Just Below the Eyes

They say that the eyes are the windows to the soul. However, to get a real idea of what a person is up to, according to UC Santa Barbara researchers Miguel Eckstein and Matt Peterson, the best place to check is right below the eyes. Their findings are published in the Proceedings of the National Academy of Sciences.

"It’s pretty fast, it’s effortless –– we’re not really aware of what we’re doing," said Miguel Eckstein, professor of psychology in the Department of Psychological & Brain Sciences. Using an eye tracker and more than 100 photos of faces, Eckstein and graduate research assistant Peterson followed the gaze of the experiment’s participants to determine where they look in the first crucial moment of assessing a person’s identity, gender, and emotional state.

"For the majority of people, the first place we look at is somewhere in the middle, just below the eyes," Eckstein said. One possible reason could be that we are trained from youth to look there because it is polite in some cultures, or because it allows us to figure out where the person’s attention is focused.

However, Peterson and Eckstein hypothesize that, despite the ever-so-brief –– 250 millisecond –– glance, the relatively featureless point of focus, and the fact that we’re usually unaware that we’re doing it, the brain is actually using sophisticated computations to plan an eye movement that ensures the highest accuracy in tasks that are evolutionarily important in determining flight, fight, or love at first sight.

Filed under eye movements face perception face processing neuroscience psychology science
