Posts tagged facial expressions

Inattention, hyperactivity, and impulsive behavior in children with ADHD can lead to social problems, and these children tend to be excluded from peer activities. They have also been found to have impaired recognition of emotional expressions in other people's faces. The research group of Professor Ryusuke Kakigi of the National Institute for Physiological Sciences, National Institutes of Natural Sciences, in collaboration with Professor Masami K. Yamaguchi and Assistant Professor Hiroko Ichikawa of Chuo University, was the first to identify the characteristics of facial expression recognition in children with ADHD by measuring hemodynamic responses in the brain, and showed that the neural basis for recognizing facial expressions may differ from that of typically developing children. The findings are reported in Neuropsychologia (available online Aug. 23, 2014).

The research group showed images of a happy expression or an angry expression to 13 children with ADHD and 13 typically developing children and identified which brain areas were activated. They used non-invasive near-infrared spectroscopy to measure brain activity: near-infrared light, which passes readily through body tissue, was projected through the skull, and the light absorbed or scattered along the way was measured. The strength of the returning light depends on the concentration of oxyhemoglobin, which delivers oxygen to actively working nerve cells. Typically developing children showed a significant hemodynamic response to both the happy and the angry expression in the right hemisphere of the brain. Children with ADHD, on the other hand, showed a significant hemodynamic response only to the happy expression; no brain activity specific to the angry expression was observed. This difference in the neural basis for recognizing facial expressions might be responsible for their impairment in social recognition and in establishing peer relationships.
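The press release does not give the analysis details, but near-infrared measurements of this kind are commonly converted into oxyhemoglobin changes with the modified Beer-Lambert law. The short Python sketch below only illustrates that idea; the extinction coefficients, source-detector distance and path-length factor are placeholder values, not the parameters used in the study.

    # Hedged sketch of the modified Beer-Lambert law used in typical NIRS analyses.
    # All numbers below are illustrative placeholders, not the study's parameters.
    import numpy as np

    def hemoglobin_changes(delta_od, ext_coeffs, source_detector_cm=3.0, dpf=6.0):
        """Solve delta_OD = E @ [dHbO2, dHbR] * (distance * DPF) for two wavelengths.

        delta_od   : optical-density changes, one per wavelength
        ext_coeffs : 2x2 extinction-coefficient matrix, one row per wavelength
        """
        effective_path = source_detector_cm * dpf
        d_hbo2, d_hbr = np.linalg.solve(ext_coeffs, delta_od) / effective_path
        return {"dHbO2": d_hbo2, "dHbR": d_hbr}

    # Example with made-up optical-density changes at two wavelengths (e.g. 760 and 850 nm)
    ext = np.array([[1.4, 3.8],   # illustrative coefficients at the first wavelength
                    [2.5, 1.8]])  # illustrative coefficients at the second wavelength
    print(hemoglobin_changes(np.array([0.012, 0.018]), ext))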
(Source: eurekalert.org)
The next time you get really mad, take a look in the mirror. See the lowered brow, the thinned lips and the flared nostrils? That’s what social scientists call the “anger face,” and it appears to be part of our basic biology as humans.
Now, researchers at UC Santa Barbara and at Griffith University in Australia have identified the functional advantages that caused the specific appearance of the anger face to evolve. Their findings appear in the current online edition of the journal Evolution and Human Behavior.
“The expression is cross-culturally universal, and even congenitally blind children make this same face without ever having seen one,” said lead author Aaron Sell, a lecturer at the School of Criminology at Griffith University in Australia. Sell was formerly a postdoctoral scholar at UCSB’s Center for Evolutionary Psychology.
The anger expression employs seven distinct muscle groups that contract in a highly stereotyped manner. The researchers sought to understand why evolution chose those particular muscle contractions to signal the emotional state of anger.
The current research is part of a larger set of studies that examine the evolutionary function of anger. “Our earlier research showed that anger evolved to motivate effective bargaining behavior during conflicts of interest,” said Sell.
The greater the harm an individual can inflict, noted Leda Cosmides, the more bargaining power he or she wields. Cosmides, professor of psychology at UCSB, is a co-author on the study along with John Tooby, UCSB professor of anthropology. Cosmides and Tooby are co-directors of the campus’s Center for Evolutionary Psychology.
“This general bargaining-through-menace principle applies to humans as well,” said Tooby. “In earlier work we were able to confirm the predictions that stronger men anger more easily, fight more often, feel entitled to more unequal treatment, resolve conflicts more in their own favor and are even more in favor of military solutions than are physically weak men.”
Starting from the hypothesis that anger is a bargaining emotion, the researchers reasoned that the first step is communicating to the other party that the anger-triggering event is not acceptable, and the conflict will not end until an implicit agreement is reached. This, they say, is why the emotion of anger has a facial expression associated with it. “But the anger face not only signals the onset of a conflict,” said Sell. “Any distinctive facial display could do that. We hypothesized that the anger face evolved its specific form because it delivers something more for the expresser: Each element is designed to help intimidate others by making the angry individual appear more capable of delivering harm if not appeased.”
For our ancestors, Cosmides noted, greater upper body strength led to a greater ability to inflict harm; so the hypothesis was that the anger face should make a person appear stronger.
Using computer-generated faces, the researchers demonstrated that each of the individual components of the anger face made those computer-generated people appear physically stronger. For example, the most common feature of the anger face is the lowered brow. Researchers took a computerized image of an average human face and then digitally morphed it in two ways: One photo showed a lowered brow, and the other a raised brow. “With just this one difference, neither face appeared ‘angry,’ ” said Sell. “But when these two faces were shown to subjects, they reported the lowered brow face as looking like it belonged to a physically stronger man.”
The experiment was repeated one-by-one with each of the other major components of the classic anger face — raised cheekbones (as in a snarl), lips thinned and pushed out, the mouth raised (as in defiance), the nose flared and the chin pushed out and up. As predicted, the presence by itself of any one of these muscle contractions led observers to judge that the person making the face was physically stronger.
“Our previous research showed that humans are exceptionally good at assessing fighting ability just by looking at someone’s face,” said Sell. Since people who are judged to be stronger tend to get their way more often, other things being equal, the researchers concluded that the explanation for the evolution of the form of the human anger face is surprisingly simple — it is a threat display.
These threat displays — like those of other animals — consist of exaggerations of cues of fighting ability, Sell continued. “So a man will puff up his chest, stand tall and morph his face to make himself appear stronger.”
“The function of the anger face is intimidation,” added Cosmides, “just like a frog will puff itself up or a baboon will display its canines.”
As Tooby explained, “This makes sense of why evolution selected this particular facial display to co-occur with the onset of anger. Anger is triggered by the refusal to accept the situation, and the face immediately organizes itself to advertise to the other party the costs of not making the situation more acceptable. What is most pleasing about these results is that no feature of the anger face appears to be arbitrary; they all deliver the same message.”
According to Sell, the researchers know this to be true because each of the seven components has the same effect. “In the final analysis, you can think of the anger face as a constellation of features, each of which makes you appear physically more formidable.”
Researchers at The Ohio State University have found a way for computers to recognize 21 distinct facial expressions—even expressions for complex or seemingly contradictory emotions such as “happily disgusted” or “sadly angry.”

(Image caption: Researchers at the Ohio State University have found a way for computers to recognize 21 distinct facial expressions — even expressions for complex or seemingly contradictory emotions. The study gives cognitive scientists more tools to study the origins of emotion in the brain. Here, a study participant makes three faces: happy (left), disgusted (center), and happily disgusted (right). Credit: Image courtesy of The Ohio State University.)
In the current issue of the Proceedings of the National Academy of Sciences, they report that they were able to more than triple the number of documented facial expressions that researchers can now use for cognitive analysis.
“We’ve gone beyond facial expressions for simple emotions like ‘happy’ or ‘sad.’ We found a strong consistency in how people move their facial muscles to express 21 categories of emotions,” said Aleix Martinez, a cognitive scientist and associate professor of electrical and computer engineering at Ohio State. “That is simply stunning. That tells us that these 21 emotions are expressed in the same way by nearly everyone, at least in our culture.”
The resulting computational model will help map emotion in the brain with greater precision than ever before, and perhaps even aid the diagnosis and treatment of mental conditions such as autism and post-traumatic stress disorder (PTSD).
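The article does not describe the computational model itself, but compound-expression recognition of this kind is typically built on facial action units (AUs). The following sketch is only an illustration of that idea: the prototype AU sets are invented for the example and are not the patterns reported in the paper.

    # Illustrative only: categorize an observed set of facial action units (AUs) by
    # overlap with prototype AU sets. The prototypes here are made up for the example.
    PROTOTYPES = {
        "happy": {6, 12, 25},
        "disgusted": {9, 10, 17},
        "happily disgusted": {6, 9, 10, 12, 25},
    }

    def jaccard(a, b):
        """Jaccard similarity between two AU sets."""
        return len(a & b) / len(a | b)

    def categorize(observed_aus):
        """Return the prototype category whose AU set best matches the observation."""
        observed = set(observed_aus)
        return max(PROTOTYPES, key=lambda label: jaccard(observed, PROTOTYPES[label]))

    print(categorize({6, 9, 12, 25}))  # -> "happily disgusted" for this made-up input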
Computers See Through Faked Expressions of Pain Better Than People
A joint study by researchers at the University of California, San Diego and the University of Toronto has found that a computer system spots real or faked expressions of pain more accurately than people can.
The work, titled “Automatic Decoding of Deceptive Pain Expressions,” is published in the latest issue of Current Biology.
“The computer system managed to detect distinctive dynamic features of facial expressions that people missed,” said Marian Bartlett, research professor at UC San Diego’s Institute for Neural Computation and lead author of the study. “Human observers just aren’t very good at telling real from faked expressions of pain.”
Senior author Kang Lee, professor at the Dr. Eric Jackman Institute of Child Study at the University of Toronto, said “humans can simulate facial expressions and fake emotions well enough to deceive most observers. The computer’s pattern-recognition abilities prove better at telling whether pain is real or faked.”
The research team found that humans could not discriminate real from faked expressions of pain better than chance – and, even after training, they improved their accuracy only to a modest 55 percent. The computer system, by contrast, attained 85 percent accuracy.
“In highly social species such as humans,” said Lee, “faces have evolved to convey rich information, including expressions of emotion and pain. And, because of the way our brains are built, people can simulate emotions they’re not actually experiencing – so successfully that they fool other people. The computer is much better at spotting the subtle differences between involuntary and voluntary facial movements.”
“By revealing the dynamics of facial action through machine vision systems,” said Bartlett, “our approach has the potential to elucidate ‘behavioral fingerprints’ of the neural-control systems involved in emotional signaling.”
The single most predictive feature of falsified expressions, the study shows, is the mouth, and how and when it opens. Fakers’ mouths open with less variation and too regularly.
“Further investigations,” said the researchers, “will explore whether over-regularity is a general feature of fake expressions.”
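The article describes only this one predictive feature, not the classifier itself, but the idea can be illustrated with a toy measure of mouth-opening regularity. The function and threshold below are assumptions made for illustration; they are not the study's method.

    # Toy illustration, not the study's classifier: faked pain is reported to show
    # mouth openings that are "too regular", so a simple proxy is the variability of
    # the intervals between openings. The threshold is an arbitrary placeholder.
    import statistics

    def regularity(opening_times_sec):
        """Coefficient of variation of inter-opening intervals; lower = more regular."""
        intervals = [b - a for a, b in zip(opening_times_sec, opening_times_sec[1:])]
        return statistics.stdev(intervals) / statistics.mean(intervals)

    def looks_faked(opening_times_sec, threshold=0.25):
        return regularity(opening_times_sec) < threshold

    print(looks_faked([0.0, 1.0, 2.1, 3.0, 4.05]))  # very even spacing -> flagged as suspect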
In addition to detecting pain malingering, the computer-vision system might be used to detect other real-world deceptive actions in the realms of homeland security, psychopathology, job screening, medicine, and law, said Bartlett.
“As with causes of pain, these scenarios also generate strong emotions, along with attempts to minimize, mask, and fake such emotions, which may involve ‘dual control’ of the face,” she said. “In addition, our computer-vision system can be applied to detect states in which the human face may provide important clues as to health, physiology, emotion, or thought, such as drivers’ expressions of sleepiness, students’ expressions of attention and comprehension of lectures, or responses to treatment of affective disorders.”

Features like the wrinkles on your forehead and the way you move may reflect your overall health and risk of dying, according to recent health research. But do physicians consider such details when assessing patients’ overall health and functioning?
In a survey of approximately 1,200 Taiwanese participants, Princeton University researchers found that interviewers — who were not health professionals but were trained to administer the survey — provided health assessments that were related to a survey participant’s risk of dying, in part because they were attuned to facial expressions, responsiveness and overall agility.
The researchers report in the journal Epidemiology that these assessments were even more accurate predictors of death than assessments made by physicians or even the individuals themselves. The findings show that survey interviewers, who typically spend a fair amount of time observing participants, can glean important information about participants’ health through careful observation.
"Your face and body reveal a lot about your life. We speculate that a lot of information about a person’s health is reflected in their face, movements, speech and functioning, as well as in the information explicitly collected during interviews," said Noreen Goldman, Hughes-Rogers Professor of Demography and Public Affairs in the Woodrow Wilson School.
Together with lead author of the paper and Princeton Ph.D. candidate Megan Todd, Goldman analyzed data collected by the Social Environment and Biomarkers of Aging Study (SEBAS). This study was designed by Goldman and co-investigator Maxine Weinstein at Georgetown University to evaluate the linkages among the social environment, stress and health. Beginning in 2000, SEBAS conducted extensive home interviews, collected biological specimens and administered medical examinations with middle-aged and older adults in Taiwan. Goldman and Todd used the 2006 wave of this study, which included both interviewer and physician assessments, for their analysis. They also included death registration data through 2011 to ascertain the survival status of those interviewed.
The survey used in the study included detailed questions regarding participants’ health conditions and social environment. Participants’ physical functioning was evaluated through tasks that determined, for example, their walking speed and grip strength. Health assessments were elicited from participants, interviewers and physicians on identical five-point scales by asking “Regarding your/the respondent’s current state of health, do you feel it is excellent (5), good (4), average (3), not so good (2) or poor (1)?”
Participants answered this question near the beginning of the interview, before other health questions were asked. Interviewers assessed the participants’ health at the end of the survey, after administering the questionnaire and evaluating participants’ performance on a set of tasks, such as walking a short distance and getting up and down from a chair. And physicians — who were hired by the study and were not the participants’ primary care physicians — provided their assessments after physical exams and reviews of the participants’ medical histories. (Study investigators did not provide special guidance about how to rate overall health to any group.)
In order to understand the many variables that go into predicting mortality, Goldman and Todd factored into their statistical models such socio-demographic variables as sex, place of residence, education, marital status, and participation in social activities. They also considered chronic conditions, psychological wellbeing (such as depressive symptoms) and physical functioning to account for a fuller picture of health.
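The paper's actual statistical models are not detailed in this article; as a hedged illustration of how the predictive power of interviewer and self ratings might be compared, the sketch below fits a simple logistic regression for mortality on synthetic data. The data, model and evaluation metric are assumptions made for the example, not the SEBAS analysis.

    # Hedged illustration with synthetic data (not SEBAS data) of how the predictive
    # power of interviewer vs. self health ratings for mortality could be compared.
    # The paper's actual statistical models and covariates may differ.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 1000
    health = rng.normal(size=n)                                   # unobserved true health
    interviewer = np.clip(np.round(3 + health + rng.normal(0, 0.5, n)), 1, 5)
    self_rating = np.clip(np.round(3 + health + rng.normal(0, 1.5, n)), 1, 5)
    died = rng.random(n) < 1 / (1 + np.exp(2 + 1.5 * health))     # poorer health -> higher risk

    for name, rating in [("interviewer", interviewer), ("self", self_rating)]:
        model = LogisticRegression().fit(rating.reshape(-1, 1), died)
        auc = roc_auc_score(died, model.predict_proba(rating.reshape(-1, 1))[:, 1])
        print(f"{name} rating AUC: {auc:.2f}")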
"Mortality is easy to measure because we have death records indicating when a person has died," Goldman said. "Overall health, on the other hand, is very complicated to measure but obviously very important for addressing health policy issues."
Two unexpected results emerged from Goldman and Todd’s analysis. The first: physicians’ ratings proved to be weak predictors of survival. “The physicians performed a medical exam equivalent to an annual physical exam, plus an abdominal ultrasound; they have specialized knowledge regarding health conditions,” Goldman explained. “Given access to such information, we anticipated stronger, more accurate predictions of death,” she said. “These results call into question previous studies’ assumptions that physicians’ ‘objective health’ ratings are superior to ‘subjective’ ratings provided by the survey participants themselves.”
In a second surprising finding, the team found that interviewers’ ratings were considerably more powerful for predicting mortality than self-ratings. This is likely, Goldman said, because interviewers considered respondents’ movements, appearance and responsiveness in addition to the detailed health information gathered during the interviews. Also, Goldman posits, interviewer ratings are probably less affected by bias than self-reports.
"The ‘self-rated health’ question is religiously used by health researchers and social scientists, and, although it has been shown to predict mortality, it suffers from many biases. People use it because it’s easy and simple,” Goldman continued. "But the problem with self-rated health is that we have no idea what reference group the respondent is using when evaluating his or her own health. Different ethnic and racial groups respond differently as do varying socioeconomic groups. We need other simple ways to rate individual health instead of relying so heavily on self-rated health."
One way, Goldman suggests, is by including interviewer ratings in surveys along with self-ratings: “This is a straightforward and cost-free addition to a questionnaire that is likely to improve our measurement of health in any population,” Goldman said.
(Source: wws.princeton.edu)

Is the human brain capable of identifying a fake smile?
Since Leonardo Da Vinci painted the Mona Lisa, much has been said about what lies behind her smile. Now, Spanish researchers have discovered how far this attention-grabbing expression confuses our emotion recognition and makes us perceive a face as happy, even if it is not.
Human beings deduce others’ states of mind from their facial expressions. “Fear, anger, sadness, displeasure and surprise are quickly inferred in this way,” David Beltrán Guerrero, researcher at the University of La Laguna, explains to SINC. But some emotions are more difficult to perceive.
“There is a wide range of more ambiguous expressions, from which it is difficult to deduce the underlying emotional state. A typical example is the expression of happiness,” says Beltrán, who is part of a group of experts at the Canarian institution who have analyzed, in three scientific articles, the smile’s capacity to distort people’s innate deductive ability.
“The smile plays a key role in recognizing others’ happiness. But, as we know, we are not really happy every time we smile,” he adds. In some cases, a smile merely expresses politeness or affiliation. In others, it may even be a way of hiding negative feelings and intentions, such as dominance, sarcasm, nervousness or embarrassment.
To develop this line of research, the authors created faces comprising smiling mouths and eyes expressing non-happy emotions, and compared them with faces in which both mouths and eyes expressed the same type of emotional state.
The main objective was to discover how far the smile skews the recognition of ambiguous expressions, making us identify them with happiness even though they are accompanied by eyes which clearly express a different feeling.
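The articles describe the stimulus construction only in general terms; as an illustration of the idea, composite faces of this kind could be built by replacing the mouth region of one photograph with that of another. The file names and pixel coordinates below are hypothetical, and the published studies used their own carefully controlled stimuli.

    # Illustration only: build a composite face by pasting the smiling mouth region of
    # one photo onto a face with non-happy eyes. File names and coordinates are
    # hypothetical placeholders.
    from PIL import Image

    MOUTH_BOX = (60, 140, 180, 200)  # (left, upper, right, lower) in pixels, hypothetical

    def make_composite(non_happy_path, happy_path, out_path):
        base = Image.open(non_happy_path)    # face with non-happy eyes
        donor = Image.open(happy_path)       # face with a smiling mouth
        mouth = donor.crop(MOUTH_BOX)        # cut out the mouth region
        base.paste(mouth, MOUTH_BOX[:2])     # paste it onto the base face
        base.save(out_path)

    # Example call (hypothetical file names):
    # make_composite("angry_eyes.png", "happy_smile.png", "ambiguous.png")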
The power of a smile
“The influence of the smile is highly dependent on the type of task given to participants and, therefore, on the type of activity we are involved in when we come across this type of expression,” Beltrán notes.
Thus, when the task is purely perceptual – such as detecting facial features – the smile has a very strong influence, to the extent that ambiguous expressions (happy mouth and non-happy eyes) are not distinguished from genuinely happy expressions (happy mouth and eyes).
On the other hand, when the task involves categorizing expressions – that is, recognizing whether they are happy, sad or show some other emotion – the influence of the smile weakens, although it remains substantial: 40% of the time, participants identify ambiguous expressions as genuinely happy.
However, the influence of the smile disappears in emotional assessment, that is, when someone is asked to judge whether a facial expression is positive or negative: “A smile can cause us to interpret a non-happy expression as happy, except when we are involved in the emotional assessment of said expression,” he highlights.
A stimulus which is difficult to assess
According to the authors, the reason why a smile sometimes leads to the incorrect categorization of an expression is related to its high visual “salience” – its attention-grabbing capacity – and its almost exclusive association with the emotional state of happiness.
In a recent study, it was found that the smile dominates many of the initial stages of the brain processing of faces, to the extent that it prompts similar electrical activity in the brain for genuinely happy expressions and ambiguous expressions with smiles and non-happy eyes.
By measuring eye movements, it was observed that an ambiguous expression is confused and categorized as happy if the first gaze falls on the area of the smiling mouth, rather than the area of the eyes.
Curiously, however, the influence of the smile on these assessments is not the same for everyone. “Another study showed that people with social anxiety tend to confuse ambiguous expressions with genuinely happy expressions less frequently,” Beltrán concludes.
References:
Manuel G. Calvo, Hipólito Marrero, David Beltrán. “When does the brain distinguish between genuine and ambiguous smiles? An ERP study”. Brain and Cognition 81 (2013) 237–246.
Manuel G. Calvo, Andrés Fernández-Martín, Lauri Nummenmaa. “Perceptual, categorical, and affective processing of ambiguous smiling facial expressions”. Cognition 125 (2012) 373–393.
Manuel G. Calvo, Aida Gutiérrez-García, Pedro Avero, Daniel Lundqvist. “Attentional Mechanisms in Judging Genuine and Fake Smiles: Eye-Movement Patterns”. Emotion 13 (2013) 792–802.
Unborn babies ‘practise’ facial expressions of pain in the womb, according to a study published today.

The researchers from Durham and Lancaster Universities suggest that fetuses’ ability to show a “pain” facial expression is a developmental process which could potentially give doctors another index of the health of a fetus.
The study is published in the prestigious academic journal, PLOS ONE, and was part funded by the Economic and Social Research Council (ESRC) and Durham University.
The study extends the findings of previous work demonstrating that the facial expressions of healthy fetuses develop and become more complex during pregnancy resulting in fetuses being able to show recognisable facial expressions.
The 4D scans of 15 healthy fetuses showed that they develop from making very simple one-dimensional expressions at 24 weeks, such as moving their lips in order to form a “smile”, to complex multi-dimensional expressions which can be recognised as “pain” expressions, by the time the mother is 36 weeks into her pregnancy.
The researchers suggest this is an adaptive process which enables the unborn baby to prepare themselves for life after birth when they have to communicate, for example if they feel hungry or uncomfortable, by making grimaces or crying.
The researchers used the video footage of 4D scans, observing repeatedly the facial expressions of eight female and seven male fetuses from the second to third trimester (24 to 36 weeks) of pregnancy.
Fetuses observed at 24 weeks gestation rarely showed a combination of facial movements which make up a ‘pain face’, such as lowering the eyebrows, wrinkling the nose and stretching the mouth. However, by 36 weeks gestation, a combination of at least four movements was seen rather more frequently, giving the impression that these older fetuses were capable of making a pain face.
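As a minimal sketch of the kind of scoring the article implies, one could count how many pain-related facial movements co-occur in a scan and flag a “pain face” when at least four appear together. The movement names below are illustrative; the study's actual coding scheme is more detailed.

    # Minimal sketch, not the study's coding scheme: count co-occurring pain-related
    # movements and flag a "pain face" when at least four appear together.
    PAIN_RELATED = {"brow_lower", "nose_wrinkle", "mouth_stretch", "lip_part", "eye_squeeze"}

    def pain_face_score(observed_movements):
        return len(PAIN_RELATED & set(observed_movements))

    def is_pain_face(observed_movements, min_movements=4):
        return pain_face_score(observed_movements) >= min_movements

    print(is_pain_face({"brow_lower", "nose_wrinkle", "mouth_stretch", "lip_part"}))  # True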
Lead researcher Dr Nadja Reissland, of Durham University’s Department of Psychology, said: “It is vital for infants to be able to show pain as soon as they are born so that they can communicate any distress or pain they might feel to their carers and our results show that healthy fetuses ‘learn’ to combine the necessary facial movements before they are born.
“This suggests that we can determine the normal development of facial movements and potentially identify abnormal development too. This could then provide a further medical indication of the health of the unborn baby.
“It is not yet clear whether fetuses can actually feel pain, nor do we know whether facial expressions relate to how they feel. Our research indicates that the expression of fetal facial movements is a developmental process which seems to be related to brain maturation rather than being linked to feelings.”
Professor of Social Statistics at Lancaster University Brian Francis said: “Modern methods of data analysis enable the development of fetal pain faces to be clearly detected, with the complexity of facial movements making up a pain face increasing in the third trimester”.
Despite the advances in medical science, we still do not know very much about health indicators of fetal development or any warning signs of delayed or abnormal progress in the womb.
It is hoped that further research will test whether the development of facial expressions is delayed if fetuses experience unhealthy conditions in the womb, such as effects of smoking or alcohol, or where the fetus is undergoing invasive procedures.
(Source: dur.ac.uk)
Machine Perception Lab Shows Robotic One-Year-Old on Video
The world is getting a long-awaited first glimpse at a new humanoid robot in action mimicking the expressions of a one-year-old child. The robot will be used in studies on sensory-motor and social development – how babies “learn” to control their bodies and to interact with other people.
Diego-san’s hardware was developed by leading robot manufacturers: the head by Hanson Robotics, and the body by Japan’s Kokoro Co. The project is led by University of California, San Diego full research scientist Javier Movellan.
Movellan directs the Institute for Neural Computation’s Machine Perception Laboratory, based in the UCSD division of the California Institute for Telecommunications and Information Technology (Calit2). The Diego-san project is also a joint collaboration with the Early Play and Development Laboratory of professor Dan Messinger at the University of Miami, and with professor Emo Todorov’s Movement Control Laboratory at the University of Washington.
Movellan and his colleagues are developing the software that allows Diego-san to learn to control his body and to learn to interact with people.
"We’ve made good progress developing new algorithms for motor control, and they have been presented at robotics conferences, but generally on the motor-control side, we really appreciate the difficulties faced by the human brain when controlling the human body," said Movellan, reporting even more progress on the social-interaction side. "We developed machine-learning methods to analyze face-to-face interaction between mothers and infants, to extract the underlying social controller used by infants, and to port it to Diego-san. We then analyzed the resulting interaction between Diego-san and adults." Full details and results of that research are being submitted for publication in a top scientific journal.
While photos and videos of the robot have been presented at scientific conferences in robotics and in infant development, the general public is getting a first peek at Diego-san’s expressive face in action. On January 6, David Hanson (of Hanson Robotics) posted a new video on YouTube.
“This robotic baby boy was built with funding from the National Science Foundation and serves cognitive A.I. and human-robot interaction research,” wrote Hanson. “With high definition cameras in the eyes, Diego San sees people, gestures, expressions, and uses A.I. modeled on human babies, to learn from people, the way that a baby hypothetically would. The facial expressions are important to establish a relationship, and communicate intuitively to people.”
Diego-san is the next step in the development of “emotionally relevant” robotics, building on Hanson’s previous work with the Machine Perception Lab, such as the emotionally responsive Albert Einstein head.
Monkey See, Monkey Do: Visual Feedback Is Necessary for Imitating Facial Expressions
Studies of the chameleon effect confirm what salespeople, tricksters, and Lotharios have long known: Imitating another person’s postures and expressions is an important social lubricant.
But how do we learn to imitate with any accuracy when we can’t see our own facial expressions and we can’t feel the facial expressions of others?
Richard Cook of City University London, Alan Johnston of University College London, and Cecilia Heyes of the University of Oxford investigate possible mechanisms underlying our ability to imitate in two studies published in Psychological Science, a journal of the Association for Psychological Science.
In the first experiment, the researchers videotaped participants as they recited jokes and then asked them to imitate four randomly selected facial expressions from their videos. When they achieved what they perceived to be the target expression, the participants recorded the attempt with the click of a computer mouse.
A computer program evaluated the accuracy of participants’ imitation attempts against a map of the target expression. In contrast to previous studies that relied on subjective assessments, this new technology allowed for automated and objective measurement of imitative accuracy.
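The study used its own expression-mapping software, so the sketch below only illustrates the general idea of objective, automated scoring: comparing an attempt's facial-landmark coordinates against the target's. The landmark coordinates are made up for the example.

    # Hedged illustration of automated, objective scoring of imitation accuracy:
    # compare an attempt's facial-landmark coordinates with the target expression's.
    import numpy as np

    def imitation_error(target_landmarks, attempt_landmarks):
        """Mean Euclidean distance between corresponding (x, y) facial landmarks."""
        target = np.asarray(target_landmarks, dtype=float)
        attempt = np.asarray(attempt_landmarks, dtype=float)
        return float(np.mean(np.linalg.norm(target - attempt, axis=1)))

    # Made-up landmark coordinates for two mouth corners and one brow point
    target = [(30, 60), (70, 60), (50, 20)]
    attempt = [(32, 62), (68, 61), (50, 24)]
    print(f"mean landmark error: {imitation_error(target, attempt):.2f} px")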
In one experiment, the researchers found that participants who were able to see their imitation attempts through visual feedback improved over successive attempts. But participants who had to rely solely on proprioception – sensing the relative position of their facial features – got progressively worse.
These results are consistent with the associative sequence-learning model, which holds that our ability to imitate accurately depends on learned associations between what we see (in the mirror or through feedback from others) and what we feel.
Cook and colleagues conclude that contingent visual feedback may be a useful component of rehabilitation and skill-training programs that are designed to improve individuals’ ability to imitate facial gestures.
Photographer Volker Gutgessell has spent the last four years visiting Frankfurt Zoo capturing these sensitive images of bonobos, gorillas and orangutans. Standing for several hours a day, the 58-year-old has documented the behaviours and expressions of his subjects – despite suffering chronic back pain caused by a severe slipped disc. Volker also developed tinnitus as a result of his injury, causing a constant ringing in his ears. But despite his condition, he has found a way of communicating through his pictures and picks up on the body language of his ape “models” while shooting them.
(Source: telegraph.co.uk)