Posts tagged face processing

People who claim to see “Jesus in toast” may no longer be mocked in the future thanks to a new study by researchers at the University of Toronto and partner institutions in China.

Researchers have found that the phenomenon of “face pareidolia,” in which onlookers report seeing images of Jesus, the Virgin Mary, or Elvis in objects such as toast, shrouds, and clouds, is normal and has physical causes.
“Most people think you have to be mentally abnormal to see these types of images, so individuals reporting this phenomenon are often ridiculed,” says lead researcher Prof. Kang Lee of the University of Toronto’s Eric Jackman Institute of Child Study. “But our findings suggest that it’s common for people to see non-existent features because human brains are uniquely wired to recognize faces, so that even when there’s only a slight suggestion of facial features, the brain automatically interprets it as a face.”
Although this phenomenon has been known for centuries, little is understood about the underlying neural mechanisms that cause it. In the first study of its kind, researchers examined brain scans and behavioural responses of individuals seeing faces and letters in different patterns. They discovered that face pareidolia isn’t due to a brain anomaly or imagination but is caused by the combined work of the frontal cortex, which helps generate expectations and sends signals to the posterior visual cortex to enhance the interpretation of stimuli from the outside world.
Researchers also found that people can be led to see different images, such as faces, words, or letters, depending on what they expect to see, which in turn activates specific parts of the brain that process such images. Seeing “Jesus in toast” reflects the brain’s normal functioning and the active role the frontal cortex plays in visual perception. Rather than “seeing is believing,” the results suggest that “believing is seeing.”
(Source: media.utoronto.ca)

Dogs recognize familiar faces from images
Until now, the specialized skill of recognizing facial features holistically has been assumed to be a quality that only humans, and possibly primates, possess. Although it is well known that faces and eye contact play an important role in communication between dogs and humans, this was the first study in which dogs’ facial recognition was investigated with eye-movement tracking.
Main focus on spontaneous behavior of dogs
Typically, animals’ ability to discriminate between individuals has been studied by training them to discriminate photographs of familiar and strange individuals. The researchers, led by Professor Outi Vainio at the University of Helsinki, instead tested dogs’ spontaneous behavior toward images: if dogs are not trained to recognize faces, are they able to see faces in images, and do they naturally look at familiar and strange faces differently?
“Dogs were trained to lie still during the image presentation and to perform the task independently. Dogs seemed to find the task rewarding, because they were very eager to participate,” says Professor Vainio. Dogs’ eye movements were measured while they watched facial images of familiar humans and dogs (e.g., the dog’s owner and another dog from the same family) displayed on a computer screen. As a comparison, the dogs were shown facial images of dogs and humans they had never met.
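Analyses like these typically reduce the raw eye-tracker stream to the time spent inside predefined regions of interest (ROIs), such as the eyes of the displayed face. The sketch below shows a minimal version of that computation; the ROI coordinates and the sampling rate are invented for illustration and are not from the study.

```python
# Minimal sketch: reducing eye-tracker samples to dwell time per
# region of interest (ROI). ROI coordinates and the sampling rate are
# invented for illustration; they are not from the study.

SAMPLE_RATE_HZ = 60  # assumed tracker sampling rate

# Hypothetical ROIs as (x_min, y_min, x_max, y_max) in screen pixels
ROIS = {
    "eyes": (300, 200, 500, 280),
    "whole_face": (250, 150, 550, 500),
}

def dwell_times(gaze_samples, rois=ROIS, rate=SAMPLE_RATE_HZ):
    """Seconds of gaze spent inside each ROI.

    gaze_samples: iterable of (x, y) screen coordinates, one per sample.
    """
    counts = {name: 0 for name in rois}
    for x, y in gaze_samples:
        for name, (x0, y0, x1, y1) in rois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
    return {name: n / rate for name, n in counts.items()}

# Comparing dwell_times(...) across familiar vs. strange and dog vs.
# human images yields the kinds of measures reported below.
print(dwell_times([(400, 240), (410, 250), (100, 100)]))
```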
Dogs preferred faces of familiar conspecifics
The results indicate that dogs were able to perceive faces in the images. Dogs looked at images of dogs longer than at images of humans, regardless of the familiarity of the faces presented. This corresponds to a previous study by Professor Vainio’s research group, which found that dogs prefer viewing conspecific faces over human faces.
Dogs also fixated more often on familiar faces and eyes than on strange ones; in other words, they scanned familiar faces more thoroughly.
In addition, some of the images were presented inverted, i.e., upside down. Inverted faces were used because their physical properties match those of upright facial images (the same colors, contrasts, and shapes). The human brain is known to process upside-down images differently from normal facial images, but how dogs gaze at inverted or familiar faces had not been studied. Dogs viewed upright faces for as long as inverted ones, but they gazed more at the eye area of upright faces, just as humans do.
This study shows that dogs’ gazing behavior is guided not only by the physical properties of an image but also by the information it presents and its semantic meaning. Dogs are able to see faces in images, and they differentiate familiar faces from strange ones. These results indicate that dogs may have face-recognition skills similar to those of humans.
Our sense of touch can contribute to our ability to perceive faces, according to new research published in Psychological Science, a journal of the Association for Psychological Science.
“In daily life, we usually recognize faces through sight and almost never explore them through touch,” says lead researcher Kazumichi Matsumiya of Tohoku University in Japan. “But we use information from multiple sensory modalities in order to perceive many everyday non-face objects and events, such as speech perception or object recognition — these new findings suggest that even face processing is essentially multisensory.”
In a series of studies, Matsumiya took advantage of a phenomenon called the “face aftereffect” to investigate whether our visual system responds to nonvisual signals when processing faces. In the face aftereffect, we adapt to a face with a particular expression — happiness, for example — which causes us to perceive a subsequent neutral face as having the opposite expression (i.e., sadness).
Matsumiya hypothesized that if the visual system really does respond to signals from another modality, then we should see evidence for face aftereffects from one modality to the other. So, adaptation to a face that is explored by touch should produce visual face aftereffects.
To test this, Matsumiya had participants explore face masks concealed below a mirror by touching them. After this adaptation period, the participants were visually presented with a series of faces that had varying expressions and were asked to classify the faces as happy or sad. The visual faces and the masks were created from the same exemplar.
In line with his hypothesis, Matsumiya found that exploring the face masks by touch shifted participants’ perception of the visually presented faces relative to participants who had no adaptation period: the visual faces were perceived as having the opposite facial expression.
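Aftereffects of this kind are commonly quantified by fitting a psychometric function to the happy/sad judgments and measuring how the point of subjective equality (PSE) shifts after adaptation. Below is a minimal sketch of that analysis with fabricated data; the morph levels, response proportions, and fitting choices are illustrative and are not taken from Matsumiya’s paper.

```python
# Minimal sketch: quantifying a face aftereffect as a shift in the
# point of subjective equality (PSE). All numbers are fabricated for
# illustration; this is not the analysis from Matsumiya's paper.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    """Probability of a 'happy' response at morph level x."""
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

# Morph continuum from clearly sad (-1) to clearly happy (+1)
morph = np.linspace(-1, 1, 9)

# Fraction of 'happy' responses per morph level (made-up data)
p_baseline = np.array([.02, .05, .15, .30, .50, .70, .85, .95, .98])
p_adapted  = np.array([.01, .03, .08, .18, .35, .55, .75, .90, .96])

(pse_base, _), _ = curve_fit(logistic, morph, p_baseline, p0=[0.0, 5.0])
(pse_adapt, _), _ = curve_fit(logistic, morph, p_adapted, p0=[0.0, 5.0])

# A positive shift means the neutral point moved toward 'happy':
# after adapting to a happy face, more morphs are judged 'sad'.
print(f"aftereffect (PSE shift): {pse_adapt - pse_base:+.3f}")
```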
Further experiments ruled out other explanations for the results, including the possibility that the face aftereffects emerged because participants were intentionally imagining visual faces during the adaptation period.
And a fourth experiment revealed that the aftereffect also works the other way: Visual stimuli can influence how we perceive a face through touch.
According to Matsumiya, current views on face processing assume that the visual system only receives facial signals from the visual modality — but these experiments suggest that face perception is truly crossmodal.
“These findings suggest that facial information may be coded in a shared representation between vision and haptics in the brain,” notes Matsumiya. He suggests the results may have implications for enhancing vision and telecommunication in the development of aids for the visually impaired.
A new study in Biological Psychiatry explores the influence of oxytocin
Difficulty in registering and responding to the facial expressions of other people is a hallmark of autism spectrum disorder (ASD). Relatedly, functional imaging studies have shown that individuals with ASD display altered brain activations when processing facial images.
The hormone oxytocin plays a vital role in the social interactions of both animals and humans. In fact, multiple studies conducted with healthy volunteers have provided evidence for beneficial effects of oxytocin in terms of increased trust, improved emotion recognition, and preference for social stimuli.
This combination of scientific work led German researchers to hypothesize about the influence of oxytocin in ASD. Dr. Gregor Domes, from the University of Freiburg and first author of the new study, explained: “In the present study, we were interested in the question of whether a single dose of oxytocin would change brain responses to social compared to non-social stimuli in individuals with autism spectrum disorder.”
They found that oxytocin did show an effect on social processing in the individuals with ASD, “suggesting that oxytocin may help to treat a basic brain function that goes awry in autism spectrum disorders,” commented Dr. John Krystal, Editor of Biological Psychiatry.
To conduct this study, they recruited fourteen individuals with ASD and fourteen control volunteers, all of whom completed a face- and house-matching task while undergoing imaging scans. Each participant completed this task and scanning procedure twice, once after receiving a nasal spray containing oxytocin and once after receiving a nasal spray containing placebo. The order of the sprays was randomized, and the tests were administered one week apart.
Using two sets of stimuli in the matching task, one of faces and one of houses, allowed the researchers not only to compare the effects of oxytocin and placebo, but also to distinguish effects specific to social stimuli from non-specific effects on more general brain processing.
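In analysis terms, this is a within-subject interaction: does oxytocin (versus placebo) change responses to faces more than responses to houses? The following sketch illustrates that logic on fabricated amygdala response values; the study’s actual fMRI analysis was considerably more involved.

```python
# Minimal sketch of the design's logic: a within-subject interaction
# contrast testing whether oxytocin (vs. placebo) changes responses to
# faces more than to houses. Response values are fabricated; the
# study's actual fMRI analysis was considerably more involved.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 14  # participants with ASD, as in the study

# Hypothetical amygdala responses (arbitrary units) per condition
faces_oxt = rng.normal(1.2, 0.4, n)
faces_plc = rng.normal(0.8, 0.4, n)
houses_oxt = rng.normal(0.9, 0.4, n)
houses_plc = rng.normal(0.9, 0.4, n)

# Per-subject interaction: the oxytocin effect for faces minus the
# oxytocin effect for houses
interaction = (faces_oxt - faces_plc) - (houses_oxt - houses_plc)
t, p = stats.ttest_1samp(interaction, 0.0)
print(f"interaction contrast: t({n - 1}) = {t:.2f}, p = {p:.3f}")
```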
What they found was intriguing. The data indicate that oxytocin specifically increases responses of the amygdala to social stimuli in individuals with ASD. The amygdala, the authors explain, “has been associated with processing of emotional stimuli, threat-related stimuli, face processing, and vigilance for salient stimuli”.
This finding suggests oxytocin might promote the salience of social stimuli in ASD. Increased salience of social stimuli might support behavioral training of social skills in ASD.
These data support the idea that oxytocin may be a promising approach to the treatment of ASD and could stimulate further research, including clinical trials, exploring oxytocin as an add-on treatment for individuals with autism spectrum disorder.
(Source: alphagalileo.org)
A key brain structure that regulates emotions works differently in preschoolers with depression compared with their healthy peers, according to new research at Washington University School of Medicine in St. Louis.
The differences, measured using functional magnetic resonance imaging (fMRI), provide the earliest evidence yet of changes in brain function in young children with depression. The researchers say the findings could lead to ways to identify and treat depressed children earlier in the course of the illness, potentially preventing problems later in life.

“The findings really hammer home that these kids are suffering from a very real disorder that requires treatment,” said lead author Michael S. Gaffrey, PhD. “We believe this study demonstrates that there are differences in the brains of these very young children and that they may mark the beginnings of a lifelong problem.”
The study is published in the July issue of the Journal of the American Academy of Child & Adolescent Psychiatry.
Depressed preschoolers had elevated activity in the brain’s amygdala, an almond-shaped set of neurons important in processing emotions. Earlier imaging studies identified similar changes in the amygdala region in adults, adolescents and older children with depression, but none had looked at preschoolers with depression.
For the new study, scientists from Washington University’s Early Emotional Development Program studied 54 children ages 4 to 6. Before the study began, 23 of those kids had been diagnosed with depression. The other 31 had not. None of the children in the study had taken antidepressant medication.
Although studies using fMRI to measure brain activity by monitoring blood flow have been used for years, this is the first time that such scans have been attempted in children this young with depression. Movements as small as a few millimeters can ruin fMRI data, so Gaffrey and his colleagues had the children participate in mock scans first. After practicing, the children in this study moved less than a millimeter on average during their actual scans.
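Head motion in fMRI is commonly summarized as framewise displacement, computed from the six rigid-body realignment parameters, with rotations converted to millimeters of arc on an assumed head radius. The sketch below illustrates that standard calculation; it is not necessarily the exact measure Gaffrey’s team used.

```python
# Minimal sketch: framewise displacement (FD), a standard summary of
# head motion computed from the six rigid-body realignment parameters.
# This illustrates the general approach, not necessarily the measure
# used in this study.
import numpy as np

def framewise_displacement(params, head_radius_mm=50.0):
    """FD per volume from a (T, 6) array of realignment parameters.

    Columns: 3 translations (mm), then 3 rotations (radians). Rotations
    are converted to arc length on a sphere of the assumed head radius.
    """
    motion = np.asarray(params, dtype=float).copy()
    motion[:, 3:] *= head_radius_mm            # radians -> mm of arc
    deltas = np.abs(np.diff(motion, axis=0))   # volume-to-volume change
    return deltas.sum(axis=1)                  # FD for volumes 2..T

# Example: simulated sub-millimeter head motion over 100 volumes
rng = np.random.default_rng(0)
trans = np.cumsum(rng.normal(0, 0.02, (100, 3)), axis=0)    # mm
rots = np.cumsum(rng.normal(0, 0.0004, (100, 3)), axis=0)   # radians
print(f"mean FD: {framewise_displacement(np.hstack([trans, rots])).mean():.3f} mm")
```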
While they were in the fMRI scanner during the study, the children looked at pictures of people whose facial expressions conveyed particular emotions. There were faces with happy, sad, fearful and neutral expressions.
“The amygdala region showed elevated activity when the depressed children viewed pictures of people’s faces,” said Gaffrey, an assistant professor of psychiatry. “We saw the same elevated activity, regardless of the type of faces the children were shown. So it wasn’t that they reacted only to sad faces or to happy faces, but every face they saw aroused activity in the amygdala.”
Looking at pictures of faces is often used in studies of adults and older children with depression to measure activity in the amygdala. But the observations in the depressed preschoolers were somewhat different from those previously seen in adults, in whom the amygdala typically responds more to negative expressions of emotion, such as sad or fearful faces, than to faces expressing happiness or no emotion.
In the preschoolers with depression, all facial expressions were associated with greater amygdala activity when compared with their healthy peers.
Gaffrey said it’s possible depression affects the amygdala mainly by exaggerating what, in other children, is a normal amygdala response to both positive and negative facial expressions of emotion. But more research will be needed to prove that. He does believe, however, that the amygdala’s reaction to people’s faces can be seen in a larger context.
“Not only did we find elevated amygdala activity during face viewing in children with depression, but that greater activity in the amygdala also was associated with parents reporting more sadness and emotion regulation difficulties in their children,” Gaffrey said. “Taken together, that suggests we may be seeing an exaggeration of a normal developmental response in the brain and that, hopefully, with proper prevention or treatment, we may be able to get these kids back on track.”
(Source: news.wustl.edu)

Infants process faces long before they recognize other objects
New research from psychology Research Professor Anthony Norcia and postdoctoral fellow Faraz Farzin, both of the Stanford Vision and NeuroDevelopment Lab, suggests a physical basis for infants’ ogling. As early as four months, babies’ brains already process faces at nearly adult levels, even while other images are still being analyzed in lower levels of the visual system.
The results fit, Farzin pointed out, with the prominent role human faces play in a baby’s world.
"If anything’s going to develop earlier it’s going to be face recognition," she said.
The paper appeared online in the Journal of Vision.
The researchers noninvasively measured electrical activity generated in the infants’ brains with a net of sensors placed over the scalp – a sort of electroencephalographic skullcap.
The sensors monitored what are called steady-state visual evoked potentials – spikes in brain activity elicited by visual stimulation. By flashing photographs at infants and adults and measuring their brain activity at the same steady rhythm – a technique Norcia has pioneered for over three decades – the researchers were able to “ask” the participants’ brains what they perceived.
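The analysis behind this “frequency tagging” approach is conceptually simple: because the stimulus flickers at a known rate, the brain’s response can be read out from the EEG spectrum at exactly that frequency. Here is a minimal sketch with assumed parameters (a 6 Hz tag and 250 Hz sampling), not the lab’s actual pipeline.

```python
# Minimal sketch of frequency tagging: the stimulus flickers at a known
# rate, so the brain's steady-state response can be read off the EEG
# spectrum at exactly that frequency. The 6 Hz tag and 250 Hz sampling
# rate are assumptions for illustration, not the lab's parameters.
import numpy as np

FS = 250.0   # assumed EEG sampling rate (Hz)
F_TAG = 6.0  # assumed stimulation (flicker) frequency (Hz)

def ssvep_amplitude(eeg, fs=FS, f_tag=F_TAG):
    """Amplitude of the EEG component at the tagged frequency."""
    spectrum = np.fft.rfft(eeg) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    return 2 * np.abs(spectrum[np.argmin(np.abs(freqs - f_tag))])

# Example: a small 6 Hz "response" is recoverable from heavy noise
t = np.arange(0, 10, 1 / FS)
signal = 0.5 * np.sin(2 * np.pi * F_TAG * t)
noise = np.random.default_rng(1).normal(0, 2, t.size)
print(f"amplitude at {F_TAG} Hz: {ssvep_amplitude(signal + noise):.2f} (true 0.5)")
```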
When the experiment is conducted on adults, faces and objects (like a telephone or an apple) light up similar areas of the temporal lobe – a region of the brain devoted to higher-level visual processing.
Infants’ neural responses to faces were similar to those of adults, showing activity over a part of the temporal lobe researchers think is devoted to face processing.
Pokemon provides rare opening for IU study of face-recognition processes
At a Bloomington, Ind., toy store, kids ages 8 to 12 gather weekly to trade Pokemon cards and share their mutual absorption in the intrigue and adventure of Pokemon.
This may seem an unlikely source of material to test theories in cognitive neuroscience. But that is where Indiana University brain scientists Karin Harman James and Tom James were when an idea took hold.
"We were down at the club with our son, watching the way the kids talked about the cards, and noticed it was bigger than just a trading game," Tom James said.
Pokemon has since provided a rich testing ground for a theory of facial cognition that until now has been difficult to support. With the use of cutting-edge neuroimaging, the study challenges the prevailing account of face recognition by offering new evidence that face recognition depends on a generalized system for recognizing objects, rather than on a special brain area dedicated to this function.
To Get the Best Look at a Person’s Face, Look Just Below the Eyes
They say that the eyes are the windows to the soul. However, to get a real idea of what a person is up to, according to UC Santa Barbara researchers Miguel Eckstein and Matt Peterson, the best place to check is right below the eyes. Their findings are published in the Proceedings of the National Academy of Sciences.
"It’s pretty fast, it’s effortless –– we’re not really aware of what we’re doing," said Miguel Eckstein, professor of psychology in the Department of Psychological & Brain Sciences. Using an eye tracker and more than 100 photos of faces and participants, Eckstein and graduate research assistant Peterson followed the gaze of the experiment’s participants to determine where they look in the first crucial moment of identifying a person’s identity, gender, and emotional state.
"For the majority of people, the first place we look at is somewhere in the middle, just below the eyes," Eckstein said. One possible reason could be that we are trained from youth to look there, because it’s polite in some cultures. Or, because it allows us to figure out where the person’s attention is focused.
However, Peterson and Eckstein hypothesize that, despite the ever-so-brief (250 millisecond) glance, the relatively featureless point of focus, and the fact that we’re usually unaware we’re doing it, the brain is actually using sophisticated computations to plan an eye movement that ensures the highest accuracy in tasks that are evolutionarily important in determining flight, fight, or love at first sight.
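Eckstein’s argument is essentially an optimality claim: among all possible fixation points, one just below the eyes maximizes the information available about the whole face, given that visual resolution falls off with distance from fixation. The toy sketch below illustrates that idea; the feature positions, weights, and falloff constant are invented, and the paper’s foveated ideal observer is far more sophisticated.

```python
# Toy sketch of the optimality idea: choose the fixation point that
# maximizes usable information about the face, given that resolution
# falls off with distance from fixation. Feature positions, weights,
# and the falloff constant are invented; the paper's foveated ideal
# observer is far more sophisticated.
import numpy as np

# Hypothetical feature locations in normalized image coordinates
# (y increases downward, so "below the eyes" means larger y).
FEATURES = {"left_eye": (0.35, 0.40), "right_eye": (0.65, 0.40),
            "nose": (0.50, 0.55), "mouth": (0.50, 0.75)}
WEIGHTS = {"left_eye": 1.0, "right_eye": 1.0, "nose": 0.6, "mouth": 0.8}
FALLOFF = 8.0  # assumed decay of usable information with eccentricity

def information_at(fixation):
    """Total usable feature information for a candidate fixation."""
    fx, fy = fixation
    return sum(
        WEIGHTS[name] * np.exp(-FALLOFF * np.hypot(x - fx, y - fy))
        for name, (x, y) in FEATURES.items()
    )

# Grid search over candidate fixation points
grid = [(x, y) for x in np.linspace(0, 1, 41) for y in np.linspace(0, 1, 41)]
best_x, best_y = max(grid, key=information_at)
print(f"best fixation: x = {best_x:.2f}, y = {best_y:.2f}")  # lands below the eyes
```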