Posts tagged face recognition

Researchers at The Ohio State University have found a way for computers to recognize 21 distinct facial expressions—even expressions for complex or seemingly contradictory emotions such as “happily disgusted” or “sadly angry.”

(Image caption: Researchers at the Ohio State University have found a way for computers to recognize 21 distinct facial expressions — even expressions for complex or seemingly contradictory emotions. The study gives cognitive scientists more tools to study the origins of emotion in the brain. Here, a study participant makes three faces: happy (left), disgusted (center), and happily disgusted (right). Credit: Image courtesy of The Ohio State University.)
In the current issue of the Proceedings of the National Academy of Sciences, they report that they were able to more than triple the number of documented facial expressions that researchers can now use for cognitive analysis.
“We’ve gone beyond facial expressions for simple emotions like ‘happy’ or ‘sad.’ We found a strong consistency in how people move their facial muscles to express 21 categories of emotions,” said Aleix Martinez, a cognitive scientist and associate professor of electrical and computer engineering at Ohio State. “That is simply stunning. That tells us that these 21 emotions are expressed in the same way by nearly everyone, at least in our culture.”
The resulting computational model will help map emotion in the brain with greater precision than ever before, and perhaps even aid the diagnosis and treatment of mental conditions such as autism and post-traumatic stress disorder (PTSD).
Face-blind people can learn to tell similar shapes apart
Study could support theory that the brain has specialized mechanisms for recognizing faces
People who are unable to recognize faces can still learn to distinguish between other types of very similar objects, researchers report. The finding provides fresh support for the idea that the brain mechanisms that process face images are specialized for that task. It also offers evidence against an ‘expertise’ hypothesis, in which the same mechanisms are responsible for recognition of faces and other highly similar objects we have learned to tell apart — the way bird watchers can recognize birds after years of training.
Constantin Rezlescu, a psychologist at Harvard University in Cambridge, Massachusetts, and his colleagues worked with two volunteers nicknamed Florence and Herschel, who had acquired prosopagnosia, or face blindness, following brain damage. The condition renders people unable to recognize and distinguish between faces — in some cases, even those of their own family members.
Why does it take longer to recognise a familiar face in an unfamiliar setting, such as bumping into a work colleague while on holiday? A new study published today in Nature Communications has found that part of the reason lies in the processes our brain performs when learning and recognising faces.

During the experiment, participants were shown faces of people that they had never seen before, while lying inside an MRI scanner in the Department of Psychology at Royal Holloway. They were shown some of these faces numerous times from different angles and were asked to indicate whether they had seen that person before or not.
Participants were relatively good at recognising faces once they had seen them a few times. Using a new mathematical approach, however, the scientists found that people’s decisions about whether they recognised someone also depended on the context in which they encountered the face. If participants had recently seen lots of unfamiliar faces, they were more likely to say that the face they were looking at was unfamiliar, even if they had seen it several times before and had previously reported recognising it.
Activity in two areas of the brain matched the way in which the mathematical model predicted people’s performance.
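The context effect described here can be caricatured as a signal-detection process whose decision criterion drifts with recent exposure to novel faces. The sketch below is purely illustrative; the parameter values and update rules are assumptions for demonstration, not the model from the paper.

```python
import random

def simulate_recognition(trials, learning_rate=0.2, bias_rate=0.1, seed=0):
    """Simulate familiarity judgements with a context-dependent criterion.

    Each trial is a face id. Familiarity grows with repeated exposure,
    but the decision criterion drifts upward after runs of novel faces,
    making "unfamiliar" responses more likely even for seen faces.
    """
    rng = random.Random(seed)
    familiarity = {}      # face id -> accumulated familiarity signal
    criterion = 0.5       # threshold for reporting "familiar"
    responses = []
    for face in trials:
        signal = familiarity.get(face, 0.0) + rng.gauss(0, 0.02)
        responses.append(signal > criterion)
        # learning: each exposure strengthens the face's memory trace
        familiarity[face] = familiarity.get(face, 0.0) + learning_rate
        # context bias: novel faces push the criterion up, repeats pull it down
        if familiarity[face] <= learning_rate:   # first exposure
            criterion += bias_rate
        else:
            criterion -= bias_rate / 2
    return responses
```

In this toy model, a face seen four times is reported as familiar, but the same face shown again after a long run of novel faces falls below the inflated criterion and is reported as unfamiliar, mirroring the bias the study describes.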
“Our study has characterised some of the mathematical processes that are happening in our brain as we do this,” said lead author Dr Matthew Apps. “One brain area, called the fusiform face area, seems to be involved in learning new information about faces and increasing their familiarity.
“Another area, called the superior temporal sulcus, we found to have an important role in influencing our report of whether we recognise someone’s face, regardless of whether we are actually familiar with them or not. While this seems rather counter-intuitive, it may be an important mechanism for simplifying all the information that we need to process about faces.”
“Face recognition is a fundamental social skill, but we show how error prone this process can be. To recognise someone, we become familiar with their face, by learning a little more about what it looks like,” said co-author Professor Manos Tsakiris.
“At the same time, we often see people in different contexts. The recognition biases that we measured might give us an advantage in integrating information about identity and social context, two key elements of our social world.”
(Source: rhul.ac.uk)
Prosopagnosia (face blindness) may be temporarily improved following inhalation of the hormone oxytocin.

This is the finding of research led by Dr Sarah Bate and Dr Rachel Bennetts of the Centre for Face Processing Disorders at Bournemouth University that will be presented today, Friday 6 September, at the British Psychological Society’s Joint Cognitive and Developmental annual conference at the University of Reading.
Dr Bate explained: “Prosopagnosia is characterised by a severe impairment in face recognition, whereby a person cannot identify the faces of their family or friends, or even their own face.”
The researchers tested twenty adults (10 with prosopagnosia and 10 control participants). Each participant visited the laboratory on two occasions, approximately two weeks apart. On one visit they inhaled the oxytocin nasal spray, and on the other visit they inhaled the placebo spray. The two sprays were prepared by an external pharmaceutical company in identical bottles, and neither the participants nor the researchers knew the identity of the sprays until the data had been analysed.
Regardless of which spray the person inhaled, the testing sessions had an identical format. Participants inhaled the spray, then sat quietly for 45 minutes to allow the spray to take effect. They then participated in two face processing tests: one testing their ability to remember faces and the other testing their ability to match faces of the same identity.
The researchers found that the participants with prosopagnosia achieved higher scores on both face processing tests in the oxytocin condition. Interestingly, no improvement was observed in the control participants, suggesting the hormone may be more effective in those with impaired face recognition systems.
The initial ten participants with prosopagnosia had a developmental form of the condition. Individuals with developmental prosopagnosia have never experienced brain damage, and this form of face blindness is thought to be very common, affecting one in 50 people. Much more rarely, people can acquire prosopagnosia following a brain injury. At a later date, the researchers had the opportunity to test one person with acquired prosopagnosia, and also observed a large improvement following oxytocin inhalation in this individual.
Dr Bate said: “This study provides the first evidence that oxytocin may be used to temporarily improve face recognition in people with either developmental or acquired prosopagnosia. The effects of the hormone are thought to last 2-3 hours, and it may be that the nasal spray can be used to improve face recognition on a special occasion. However, much more research needs to be carried out, as we don’t currently know whether there are benefits or risks associated with longer-term inhalation of the hormone.”
(Source: alphagalileo.org)
Face Identification Accuracy is in the Eye (and Brain) of the Beholder
Though humans generally have a tendency to look at a region just below the eyes and above the nose toward the midline when first identifying another person, a small subset of people tend to look further down — at the tip of the nose, for instance, or at the mouth. However, as UC Santa Barbara researchers Miguel Eckstein and Matthew Peterson recently discovered, “nose lookers” and “mouth lookers” can do just as well as everyone else when it comes to the split-second decision-making that goes into identifying someone. Their findings appear in a recent issue of the journal Psychological Science.
"It was a surprise to us," said Eckstein, professor in the Department of Psychological & Brain Sciences, of the ability of that subset of "nose lookers" and "mouth lookers" to identify faces. In a previous study, he and postdoctoral researcher Peterson used a series of face images and eye-tracking software to establish that most humans tend to look just below the eyes when identifying another person, and that when forced to look somewhere else, such as the mouth, their face identification accuracy suffers.
The reason we look where we look, said the researchers, is evolutionary. With survival at stake and only a limited amount of time to assess who an individual might be, humans have developed the ability to make snap judgments by glancing at a place on the face that allows the observer’s eye to gather a massive amount of information, from the finer features around the eyes to the larger features of the mouth. In 200 milliseconds, we can tell whether another human being is friend, foe, or potential mate. The process is deceptively easy and seemingly negligible in its quickness: Identifying another individual is an activity on which we embark virtually from birth, and is crucial to everything from day-to-day social interaction to life-or-death situations. Thus, our brain devotes specialized circuitry to face recognition.
"One of, if not the most, difficult task you can do with the human face is to actually identify it," said Peterson, explaining that each time we look at someone’s face, it’s a little different — perhaps the angle, or the lighting, or the face itself has changed — and our brains constantly work to associate the current image with previously remembered images of that face, or faces like it, in a continuous process of recognition. Computer vision has nowhere near that capacity for identifying faces, yet.
So it would seem to follow that those who look at other parts of a person’s face might perform less well, and might be slower to recognize potential threats, or opportunities.
Or so the researchers thought. In a series of tests involving face identification tasks, the researchers found a small group that departed from the typical just-below-the-eyes gaze. The observers were Caucasian, had normal or corrected-to-normal vision, and no history of neurological disorders — all qualities that controlled for cultural, physical, or neurological elements that could influence a person’s gaze.
But instead of performing less well, as would have been predicted by the theoretical analysis of the investigators, the participants were still able to identify faces with the same degree of accuracy as just-below-the-eyes lookers. Furthermore, when these nose-looking participants were forced to look at the eyes to do the identification, their accuracy degraded.
The findings both fascinate and set up a chicken-and-egg scenario for the researchers. One possibility is that people tailor their eye movements to the properties of their visual system — everything from their eye structures to the brain functions they are born with and develop. If, for example, a person sees well in the upper visual field (the region above where they look), they can afford to look lower on the face without losing the detail around the eyes when identifying someone. According to Eckstein, it is known that most humans tend to see better in the lower visual field.
The other possibility is the reverse — that our visual systems adapt to our looking behavior. If at an early age a person developed the habit of looking lower on the face to identify someone else, over time brain circuits specialized for face identification could develop and arrange themselves around that tendency.
"The main finding is that people develop distinct optimal face-looking strategies that maximize face identification accuracy," said Peterson. "In our framework, an optimized strategy or behavior is one that results in maximized performance. Thus, when we say that the observer-looking behavior was self-optimal, it refers to each individual fixating on locations that maximize their identification accuracy."
Future research will delve deeper into the mechanisms involved in those who look lower on the face to determine what could drive that gaze pattern and what information is gathered.
The Centre for Face Processing Disorders at BU campaigns for greater recognition of face blindness
Imagine not being able to recognise your own child at nursery or even pick out your own face from a line-up of photos.
This is just how severe face blindness, or prosopagnosia, can be.
"In extreme cases, people might withdraw socially - become depressed, leave their job, or just suffer endless embarrassment," said Bournemouth University psychologist Dr Sarah Bate.
Dr Bate leads the Centre for Facial Processing Disorders at BU, which carries out research to advance understanding of the causes of prosopagnosia and develops training strategies that can help to improve face recognition skills.
The Centre is now campaigning for formal recognition of face blindness, and has launched an e-petition for the issue to be discussed in parliament.
"Children with prosopagnosia can find it really difficult to make friends because all children wear school uniforms in the UK - this takes away any external cues to recognition," said Dr Bate.
"If children with face blindness seem socially withdrawn, this is often misinterpreted as an indicator of other socio-emotional difficulties or behavioural problems because of the lack of professional awareness of prosopagnosia."
She added: “Because prosopagnosia is not a formally recognised disorder, many people are reluctant to inform their employer that they have the condition, despite it influencing their performance at work or their relations with colleagues and clients.
"Indeed, many people feel they would be discriminated against if managers became aware of their condition, and this may prevent promotion and impede other opportunities in the workplace."
To find out more about face blindness and the work of the Centre for Face Processing Disorders visit: www.prosopagnosiaresearch.org
(Image: Allegro-Designs)
Why Do Age-Related Macular Degeneration Patients Have Trouble Recognizing Faces?
Abnormalities of eye movement and fixation may contribute to difficulty in perceiving and recognizing faces among older adults with age-related macular degeneration (AMD), suggests a study, “Abnormal Fixation in Individuals with AMD when Viewing an Image of a Face,” appearing in the January issue of Optometry and Vision Science, official journal of the American Academy of Optometry. The journal is published by Lippincott Williams & Wilkins, a part of Wolters Kluwer Health.
Unlike people with normal vision, those with AMD don’t focus on “internal features” (the eyes, nose, and mouth) when looking at the image of a face, according to the study by William Seiple, PhD, and colleagues of Lighthouse International, New York.
When Viewing Famous Face, AMD Patients Focus on External Features
The researchers used a sophisticated technique called optical coherence tomography/scanning laser ophthalmoscopy (OCT-SLO) to examine the interior of the eye in nine patients with AMD. Age-related macular degeneration is the leading cause of vision loss in older adults. It causes gradual destruction of the macula, leading to blurring and loss of central vision.
Previous studies have suggested that people with AMD have difficulty perceiving faces. To evaluate the possible role of abnormal eye movements, Dr Seiple and colleagues used the OCT-SLO equipment to make microscopic movies of the interior of the eye (fundus, including the retina and macula) as the patients viewed one of the world’s most famous faces: the Mona Lisa.
This technique allowed the researchers to record eye movements and where the patients looked (fixations) while looking at the face. They compared the findings in AMD patients to a control group of subjects with normal vision.
The results showed significant differences in eye movement patterns and fixations between groups. The AMD patients had fewer fixations on the internal features of the Mona Lisa’s face—eyes, nose, and mouth. For controls, an average of 87 percent of fixations were on internal features, compared to only 13 percent on external features. In contrast, for AMD patients, 62 percent of fixations were on internal features while 38 percent were on external features.
The normal controls also tended to make fewer and shorter eye movements (called saccades) than AMD patients. The differences between groups did not appear to be related to the blurring of vision associated with AMD.
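Fixation proportions like those reported above can be computed by classifying each recorded gaze point against a region covering the internal features. A minimal sketch, assuming a hypothetical bounding-box representation of the internal-feature region:

```python
def fixation_proportions(fixations, internal_box):
    """Classify fixation points as internal vs external facial features.

    fixations: list of (x, y) gaze coordinates in image space.
    internal_box: (x_min, y_min, x_max, y_max) covering eyes, nose, mouth.
    Returns (fraction_internal, fraction_external).
    """
    x0, y0, x1, y1 = internal_box
    total = len(fixations)
    if total == 0:
        return 0.0, 0.0
    internal = sum(1 for x, y in fixations
                   if x0 <= x <= x1 and y0 <= y <= y1)
    return internal / total, (total - internal) / total
```

Real eye-tracking analyses typically use more precise feature regions and account for calibration error, but the internal/external split reported in the study reduces to this kind of point-in-region tally.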
Some older adults with AMD report difficulties perceiving faces. While the problem in “processing faces” is certainly related to the overall sensory visual loss, the new evidence suggests that specific patterns of eye movement abnormalities may also play a role.
Dr Seiple and colleagues note that “abnormal scanning patterns when viewing faces” have also been found in other conditions associated with difficulties in face perception, including autism, social phobias, and schizophrenia. The authors discuss the possible mechanisms of the abnormal scanning patterns in AMD, involving the complex interplay between the eyes and brain in governing eye movement and interpreting visual information.
A previous study suggested that drawing attention to specific characteristics—such as the internal facial features—may increase fixations on internal features and improve face perception. Dr Seiple and coauthors conclude, “That report gives hope that eye movement control training and training of allocation of attention could improve face perception and eye scanning behavior in individuals with AMD.”

Infants process faces long before they recognize other objects
New research from psychology Research Professor Anthony Norcia and postdoctoral fellow Faraz Farzin, both of the Stanford Vision and NeuroDevelopment Lab, suggests a physical basis for infants’ ogling. As early as four months, babies’ brains already process faces at nearly adult levels, even while other images are still being analyzed in lower levels of the visual system.
The results fit, Farzin pointed out, with the prominent role human faces play in a baby’s world.
"If anything’s going to develop earlier it’s going to be face recognition," she said.
The paper appeared in the online Journal of Vision.
The researchers noninvasively measured electrical activity generated in the infants’ brains with a net of sensors placed over the scalp – a sort of electroencephalographic skullcap.
The sensors were monitoring what are called steady-state visual evoked potentials – rhythmic brain responses elicited by visual stimulation that repeats at a fixed rate. By flashing photographs at infants and adults and measuring their brain activity at the same steady rhythm – a technique Norcia has pioneered and refined over three decades – the researchers were able to “ask” the participants’ brains what they perceived.
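The logic of the steady-state technique can be illustrated with a single-bin discrete Fourier transform: when a stimulus flickers at a known frequency, the evoked response appears as amplitude at that frequency in the recorded signal. The sketch below is a simplified illustration; the function name and parameters are assumptions, not code from the study.

```python
import math
import cmath

def amplitude_at_frequency(signal, freq_hz, sample_rate_hz):
    """Amplitude of the Fourier component at freq_hz (single-bin DFT).

    In steady-state paradigms, a stimulus flickering at f Hz drives a
    response at f (and its harmonics); comparing the amplitude at f
    against neighbouring frequencies indexes the evoked response.
    """
    n = len(signal)
    coeff = sum(signal[k] * cmath.exp(-2j * math.pi * freq_hz * k / sample_rate_hz)
                for k in range(n))
    return 2 * abs(coeff) / n
```

For a synthetic 6 Hz sine wave, this returns the wave’s amplitude at 6 Hz and essentially zero at other integer frequencies, which is the contrast the steady-state analysis exploits.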
When the experiment is conducted on adults, faces and objects (like a telephone or an apple) light up similar areas of the temporal lobe – a region of the brain devoted to higher-level visual processing.
Infants’ neural responses to faces were similar to those of adults, showing activity over a part of the temporal lobe researchers think is devoted to face processing.
Pokemon provides rare opening for IU study of face-recognition processes
At a Bloomington, Ind., toy store, kids ages 8 to 12 gather weekly to trade Pokemon cards and share their mutual absorption in the intrigue and adventure of Pokemon.
This may seem an unlikely source of material to test theories in cognitive neuroscience. But that is where Indiana University brain scientists Karin Harman James and Tom James were when an idea took hold.
"We were down at the club with our son, watching the way the kids talked about the cards, and noticed it was bigger than just a trading game," Tom James said.
Pokemon has since provided a rich testing ground for a theory of facial cognition that until now has been difficult to support. Using cutting-edge neuroimaging, the study challenges the prevailing account of face recognition by offering new evidence that face recognition depends on a generalized system for recognizing objects, rather than a brain area dedicated solely to that function.

A new study shows that electrical stimulation of a small patch of the brain causes illusions that only affect the perception of faces. (Matt Cardy/Getty Images)
Ron Blackwell didn’t enter the hospital expecting to see his doctor’s face melt before his eyes. But that’s exactly what happened when researchers electrically stimulated a small part of his brain, according to a study published Tuesday in the Journal of Neuroscience.
The doctor’s face did not actually melt, of course. Instead, the researchers argue, the stimulation short-circuited a brain area called the fusiform gyrus. Previous studies have linked a part of that area to face processing by showing that it becomes active when people perceive faces. But it’s hard to know just how important the area is for facial processing unless you can actually change its activity level while someone views faces.
Blackwell, an epileptic, turned out to be the perfect test case. He was in Stanford’s hospital so that doctors — including the study author, Dr. Josef Parvizi — could study his epilepsy and decide whether they could perform surgery to remove the part of the brain responsible for his seizures. As part of that procedure, Parvizi laid down a strip of electrodes on the surface of the brain. That gave him the capacity to painlessly and harmlessly stimulate the part of the brain they covered, and one of those electrodes was right over the fusiform gyrus.
Along with collaborators led by Stanford psychologist Kalanit Grill-Spector, Parvizi stimulated the area to see whether it would affect Blackwell’s perception of the doctor’s face. When he performed a sham stimulation — counting down from three and pressing a button that did nothing — Blackwell reported no change.
But when Parvizi applied voltage, strange things suddenly began to happen to Blackwell’s face perception. “You just turned into somebody else,” Blackwell said in a video that was recorded as part of the experiment. “Your face metamorphosed. Your nose got saggy, went to the left. You almost looked like somebody I’d seen before, but somebody different. That was a trip.” As soon as the electricity was turned off, Blackwell’s visualization of Parvizi’s face returned to normal.
Later, Blackwell confirmed that it was only the doctor’s face that changed — his body and hands remained the same.
Though only a single case, the experiment provides strong confirmatory evidence that the fusiform gyrus is directly involved in face perception, and that the area is specialized for that role.
(Source: Los Angeles Times)