Posts tagged trustworthiness

Our brains judge a face’s trustworthiness, even when we can’t see it
Our brains are able to judge the trustworthiness of a face even when we cannot consciously see it, a team of scientists has found. Their findings, which appear in the Journal of Neuroscience, shed new light on how we form snap judgments of others.
“Our findings suggest that the brain automatically responds to a face’s trustworthiness before it is even consciously perceived,” explains Jonathan Freeman, an assistant professor in New York University’s Department of Psychology and the study’s senior author.
“The results are consistent with an extensive body of research suggesting that we form spontaneous judgments of other people that can be largely outside awareness,” adds Freeman, who conducted the study as a faculty member at Dartmouth College.
The study’s other authors included Ryan Stolier, an NYU doctoral candidate, Zachary Ingbretsen, a research scientist who previously worked with Freeman and is now at Harvard University, and Eric Hehman, a post-doctoral researcher at NYU.
The researchers focused on the workings of the brain’s amygdala, a structure that is important for humans’ social and emotional behavior. Previous studies have shown this structure to be active in judging the trustworthiness of faces. However, it had not been known if the amygdala is capable of responding to a complex social signal like a face’s trustworthiness without that signal reaching perceptual awareness.
To gauge this part of the brain’s role in making such assessments, the study’s authors conducted a pair of experiments in which they monitored the activity of subjects’ amygdala while the subjects were exposed to a series of facial images.
These images included both standardized photographs of actual strangers’ faces as well as artificially generated faces whose trustworthiness cues could be manipulated while all other facial cues were controlled. The artificially generated faces were computer synthesized based on previous research showing that cues such as higher inner eyebrows and pronounced cheekbones are seen as trustworthy and lower inner eyebrows and shallower cheekbones are seen as untrustworthy.
Prior to the start of these experiments, a separate group of subjects examined all the real and computer-generated faces and rated how trustworthy or untrustworthy they appeared. As previous studies have shown, subjects strongly agreed on the level of trustworthiness conveyed by each given face.
In the experiments, a new set of subjects viewed these same faces inside a brain scanner, but were exposed to the faces very briefly, for only a matter of milliseconds. This rapid exposure, together with another technique known as “backward masking,” prevented subjects from consciously seeing the faces. Backward masking works by presenting subjects with an irrelevant “mask” image that immediately follows an extremely brief exposure to a face; the mask is thought to terminate the brain’s ability to further process the face and so prevent it from reaching awareness.

In the first experiment, the researchers examined amygdala activity in response to three levels of a face’s trustworthiness: low, medium, and high. In the second experiment, they assessed amygdala activity in response to a fully continuous spectrum of trustworthiness.
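The backward-masking procedure described above can be sketched as a simple trial timeline. The durations and event labels below are illustrative assumptions for exposition, not the study’s actual parameters:

```python
# Sketch of a backward-masked trial: a face flashes very briefly, then an
# irrelevant mask image immediately replaces it, blocking conscious
# perception of the face. All durations and labels are illustrative.

def backward_masked_trial(face_id, face_ms=30, mask_ms=170):
    """Return the ordered sequence of (event, duration_ms) screen events."""
    return [
        ("fixation", 500),              # fixation cross before the trial
        (f"face:{face_id}", face_ms),   # extremely brief face exposure
        ("mask", mask_ms),              # mask terminates face processing
        ("blank", 1000),                # inter-trial interval
    ]

timeline = backward_masked_trial("trustworthy_high")
total_ms = sum(duration for _, duration in timeline)
```

The key property of the paradigm is simply that the mask event follows the face event with no gap, which is what keeps the face below the threshold of awareness.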
Across the two experiments, the researchers found that specific regions inside the amygdala exhibited activity tracking how untrustworthy a face appeared, and other regions inside the amygdala exhibited activity tracking the overall strength of the trustworthiness signal (whether untrustworthy or trustworthy)—even though subjects could not consciously see any of the faces.
“These findings provide evidence that the amygdala’s processing of social cues in the absence of awareness may be more extensive than previously understood,” observes Freeman. “The amygdala is able to assess how trustworthy another person’s face appears without it being consciously perceived.”
Older brains may be less wary of untrustworthy faces

Despite long experience with the ways of the world, older people are especially vulnerable to fraud. According to the Federal Trade Commission (FTC), up to 80% of scam victims are over 65. One explanation may lie in a brain region that serves as a built-in crook detector. Called the anterior insula, this structure, which fires up in response to the face of an unsavory character, is less active in older people, possibly making them less cagey than younger folks, a new study finds.
Both the FTC and the Federal Bureau of Investigation have found that older people are easy marks due in part to their tendency to accentuate the positive. According to social neuroscientist Shelley Taylor of the University of California, Los Angeles, research backs up the idea that older people tend to put a positive spin on things, from emotionally charged pictures to virtual games in which they risk losing money. “Older people are good at regulating their emotions, seeing things in a positive light, and not overreacting to everyday problems,” she says. But this trait may make them less wary.
To see if older people really are less able to spot a shyster, Taylor and colleagues showed photos of faces considered trustworthy, neutral, or untrustworthy to a group of 119 older adults (ages 55 to 84) and 24 younger adults (ages 20 to 42). Signs of untrustworthiness include averted eyes; an insincere smile that doesn’t reach the eyes; a smug, smirky mouth; and a backward tilt to the head. The participants were asked to rate each face on a scale from -3 (very untrustworthy) to 3 (very trustworthy).
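A minimal sketch of how ratings on the study’s -3 to 3 scale might be summarized by age group. The rating values below are invented for illustration, not the study’s data:

```python
# Hypothetical ratings on the study's -3 (very untrustworthy) to
# +3 (very trustworthy) scale; all values are made up for illustration.
ratings = {
    "older":   {"untrustworthy_face": [-1, 0, 1]},
    "younger": {"untrustworthy_face": [-3, -2, -2]},
}

def mean_rating(group, face):
    """Average a group's ratings for one face category."""
    values = ratings[group][face]
    return sum(values) / len(values)

older_mean = mean_rating("older", "untrustworthy_face")      # 0.0
younger_mean = mean_rating("younger", "untrustworthy_face")  # about -2.33
```

The pattern the study reports corresponds to the older group’s mean for “untrustworthy” faces sitting well above the younger group’s, as in this toy example.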
In the study, appearing online in the Proceedings of the National Academy of Sciences, the “untrustworthy” faces were perceived as significantly more trustworthy by the older subjects than by the younger ones. The researchers then performed the same test on a different set of volunteers, this time imaging their brains during the process, to look for differences in brain activity between the age groups. In the younger subjects, when asked to judge whether the faces were trustworthy, the anterior insula became active; the activity increased at the sight of an untrustworthy face. The older people, however, showed little or no activation.
How nonverbal cues can predict a person’s (and a robot’s) trustworthiness
People face this predicament all the time: can you determine a person’s character in a single interaction? Can you judge whether someone you just met can be trusted when you have only a few minutes together? And if you can, how do you do it? Using a robot named Nexi, Northeastern University psychology professor David DeSteno and collaborators Cynthia Breazeal from MIT’s Media Lab and Robert Frank and David Pizarro from Cornell University have figured out the answer. The findings were recently published in Psychological Science, a journal of the Association for Psychological Science.
It’s What You’re Not Saying…
In the absence of reliable information about a person’s reputation, nonverbal cues can offer a look into a person’s likely actions. This concept has been known for years, but the cues that convey trustworthiness or untrustworthiness have remained a mystery. Collecting data from face-to-face conversations with research participants in which money was on the line, DeSteno and his team realized that it’s not one single nonverbal movement or cue that determines a person’s trustworthiness, but rather sets of cues. When participants expressed these cues, they cheated their partners more, and, at a gut level, their partners expected it. “Scientists haven’t been able to unlock the cues to trust because they’ve been going about it the wrong way,” DeSteno said. “There’s no one golden cue. Context and coordination of movements is what matters.”
Robots Have Feelings, Too
People are fidgety: they’re moving all the time. So how could the team truly zero in on the cues that mattered? This is where Nexi comes in. Nexi is a humanoid social robot that afforded the team an important benefit: they could control all of its movements perfectly. In a second experiment, the team had research participants converse with Nexi for 10 minutes, much as they did with another person in the first experiment. While conversing with the participants, Nexi, operated remotely by researchers, either expressed cues that were considered less than trustworthy or expressed similar but non-trust-related cues. Confirming the team’s theory, participants exposed to Nexi’s untrustworthy cues intuited that Nexi was likely to cheat them and adjusted their financial decisions accordingly. “Certain nonverbal gestures trigger emotional reactions we’re not consciously aware of, and these reactions are enormously important for understanding how interpersonal relationships develop,” said Frank. “The fact that a robot can trigger the same reactions confirms the mechanistic nature of many of the forces that influence human interaction.”
Real-Life Application
This discovery has led the research team not only to answer enduring questions about if and how people are able to assess the trustworthiness of an unknown person, but also to show the human mind’s willingness to ascribe trust-related intentions to technological entities based on the same movements. “This is a very exciting result that showcases how social robots can be used to gain important insights about human behavior,” said Cynthia Breazeal of MIT’s Media Lab. “This also has fascinating implications for the design of future robots that interact and work alongside people as partners.” Accordingly, these findings hold important insights not only for security and financial endeavors but also for the evolving design of robots and computer-based agents: the subconscious mind is ready to see these entities as social beings.