Neuroscience

Articles and news from the latest research reports.

Posts tagged facial recognition

Real or Fake? Research Shows Brain Uses Multiple Clues for Facial Recognition
Faces fascinate. Babies love them. We look for familiar or friendly ones in a crowd. And video game developers and movie animators strive to create faces that look real rather than fake. Determining how our brains decide what makes a face “human” and not artificial is a question Dr. Benjamin Balas of North Dakota State University, Fargo, and of the Center for Visual and Cognitive Neuroscience, studies in his lab. New research by Balas and NDSU graduate Christopher Tonsager, published online in the London-based journal Perception, shows that it takes more than eyes to make a face look human.
Researchers study the brain to learn how its specialized circuits process information in seconds to distinguish whether faces are real or fake. Balas and Tonsager note that people interact with artificial faces and characters in video games, watch them in movies, and see artificial faces used more widely as social agents in other settings. “Whether or not a face looks real determines a lot of things,” said Balas, assistant professor of psychology. “Can it have emotions? Can it have plans and ideas? We wanted to know what information you use to decide if a face is real or artificial, since that first step determines a number of judgments that follow.”
Results of the study show that people combine information across many parts of the face to make decisions about how “alive” it is, and that the appearances of these regions interact with each other. Previous research suggests that eyes are especially important for facial recognition. The NDSU study found, however, that when you’re deciding if a face is real or artificial, the eyes and the skin both matter to about the same degree.
Balas and Tonsager, then an undergraduate researcher in psychology, recruited 45 study participants, who were evaluated while viewing altered facial images. Tonsager cropped images of real faces so only the face and neck showed, without any hair. A program known as FaceGen Modeller was used to transform the images into 3D computer-generated models of faces. Photos were then computer-manipulated into negative images. In two experiments, transformations to real and artificial faces were used to determine whether contrast negation affected the ability to tell a real face from an artificial one, and whether the eyes make a disproportionate contribution to animacy discrimination relative to the rest of the face.
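Contrast negation itself is a simple pixel operation. Here is a minimal Python sketch (not the researchers' actual code) of negating an 8-bit grayscale image stored as rows of pixel values:

```python
def negate_image(img):
    """Contrast-negate an 8-bit grayscale image (a list of pixel rows).

    Every pixel value v becomes 255 - v, so bright regions turn dark and
    dark regions turn bright, while edges and shapes stay in place.
    """
    return [[255 - v for v in row] for row in img]

# A toy 2x2 "image": pure black, pure white, and two mid-grays.
patch = [[0, 255], [100, 200]]
print(negate_image(patch))  # [[255, 0], [155, 55]]
```

Because negation preserves shapes and edges while reversing brightness relationships, it is a handy way to probe which surface cues (such as skin tone) observers rely on.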
“We assumed that the eyes were the key in distinguishing real vs. computer generated, but to our surprise, the results were not significant enough for us to conclude this,” said Tonsager. “However, we did find that when the skin tone is negated, it was more difficult for our participants to determine if it was a real or artificial face. The research leads us to conclude that the entire ‘eye region’ might play a substantial role in the distinction between real or artificial.”
“Beyond telling us more about the distinction your brain makes between a face and a non-face, our results are also relevant to anybody who wants to develop life-like computer graphics,” explained Balas. “Developing artificial faces that look real is a growing industry, and we know that artificial faces that aren’t quite right can look downright creepy. Our work, both in the current paper and ongoing studies in the lab, has the potential to inform how designers create new and better artificial faces for a range of applications.”
Balas and Tonsager also presented their research findings at the Vision Sciences Society 13th Annual Meeting, May 16-21 in St. Petersburg, Florida. http://www.visionsciences.org/meeting.html

Filed under facial recognition artificial face face perception visual perception psychology neuroscience science

Facebook’s facial recognition software is now as accurate as the human brain, but what now?
Facebook’s facial recognition research project, DeepFace (yes really), is now very nearly as accurate as the human brain. DeepFace can look at two photos, and irrespective of lighting or angle, can say with 97.25% accuracy whether the photos contain the same face. Humans can perform the same task with 97.53% accuracy. DeepFace is currently just a research project, but in the future it will likely be used to help with facial recognition on the Facebook website. It would also be irresponsible if we didn’t mention the true power of facial recognition, which Facebook is surely investigating: Tracking your face across the entirety of the web, and in real life, as you move from shop to shop, producing some very lucrative behavioral tracking data indeed.
The DeepFace software, developed by the Facebook AI research group in Menlo Park, California, is underpinned by an advanced deep learning neural network. A neural network, as you may already know, is a piece of software that simulates a (very basic) approximation of how real neurons work. Deep learning is one of many methods of performing machine learning; basically, it looks at a huge body of data (for example, human faces) and tries to develop a high-level abstraction (of a human face) by looking for recurring patterns (cheeks, eyebrows, etc.). In this case, DeepFace consists of a bunch of neurons nine layers deep, and then a learning process that sees the creation of 120 million connections (synapses) between those neurons, based on a corpus of four million photos of faces.
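To get a feel for where a figure like 120 million connections comes from, here is a toy Python calculation for fully connected layers. The layer sizes below are made up, and DeepFace's real architecture mixes convolutional and locally connected layers, so its arithmetic differs; this is only a sketch of the counting principle:

```python
def count_connections(layer_sizes):
    """Number of weights (connections) in a fully connected network.

    Each unit in layer i connects to every unit in layer i+1, so the
    total is the sum of products of adjacent layer sizes.
    """
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

# A hypothetical 4-layer net: a 1000-pixel input, two hidden layers,
# and a single output score for "same face or not".
print(count_connections([1000, 500, 100, 1]))  # 550100
```

Scaling the layer sizes up by a couple of orders of magnitude is how a nine-layer network reaches connection counts in the hundreds of millions.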
Read more

Filed under DeepFace facial recognition AI neural networks deep learning facebook technology neuroscience science

Researchers identify gene that influences the ability to remember faces
New findings suggest the oxytocin receptor, a gene known to influence mother-infant bonding and pair bonding in monogamous species, also plays a special role in the ability to remember faces. This research has important implications for disorders in which social information processing is disrupted, including autism spectrum disorder. In addition, the finding may lead to new strategies for improving social cognition in several psychiatric disorders.
A team of researchers from the Yerkes National Primate Research Center at Emory University in Atlanta, University College London in the United Kingdom and the University of Tampere in Finland made the discovery, which will be published in an online Early Edition of the Proceedings of the National Academy of Sciences.
According to author Larry Young, PhD, of Yerkes, the Department of Psychiatry in Emory’s School of Medicine and Emory’s Center for Translational Social Neuroscience (CTSN), this is the first study to demonstrate that variation in the oxytocin receptor gene influences face recognition skills. He and co-author David Skuse point out the implication that oxytocin plays an important role in promoting our ability to recognize one another, yet about one-third of the population carries only the genetic variant that negatively affects that ability. They say this finding may help explain why a few people remember almost everyone they have met while others have difficulty recognizing members of their own family.
Skuse is with the Institute of Child Health, University College London, and the Great Ormond Street Hospital for Children, NHS Foundation Trust, London.
Young, Skuse and their research team studied 198 families with a single autistic child because these families were known to show a wide range of variability in facial recognition skills; two-thirds of the families were from the United Kingdom, and the remainder from Finland.
The Emory researchers previously found the oxytocin receptor is essential for olfactory-based social recognition in rodents such as mice and voles, and wondered whether the same gene could also be involved in human face recognition. They examined the influence of subtle differences in oxytocin receptor gene structure on face memory competence in the parents, non-autistic siblings and autistic child, and discovered that a single change in the DNA of the oxytocin receptor gene had a big impact on face memory skills in the families. According to Young, this finding implies that oxytocin likely plays an important role more generally in social information processing, which is disrupted in disorders such as autism.
Additionally, this study is remarkable for its evolutionary aspect. Rodents use odors for social recognition while humans use visual facial cues. This suggests an ancient conservation in genetic and neural architectures involved in social information processing that transcends the sensory modalities used from mouse to man.
Skuse credits Young’s previous research that found mice with a mutated oxytocin receptor failed to recognize mice they previously encountered. “This led us to pursue more information about facial recognition and the implications for disorders in which social information processing is disrupted.” Young adds the team will continue working together to pursue strategies for improving social cognition in psychiatric disorders based on the current findings.

Filed under oxytocin facial recognition memory ASD social cognition neuroscience science

Dogs recognize familiar faces from images
So far, the specialized skill of recognizing facial features holistically has been assumed to be a quality that only humans, and possibly other primates, possess. Although it is well known that faces and eye contact play an important role in communication between dogs and humans, this was the first study in which dogs’ facial recognition was investigated with eye-movement tracking.
Main focus on spontaneous behavior of dogs 
Typically, animals’ ability to discriminate between different individuals has been studied by training the animals to discriminate photographs of familiar and strange individuals. The researchers, led by Professor Outi Vainio at the University of Helsinki, tested dogs’ spontaneous behavior toward images: if dogs are not trained to recognize faces, are they able to see faces in images, and do they naturally look at familiar and strange faces differently?
“Dogs were trained to lie still during the image presentation and to perform the task independently. Dogs seemed to find the task rewarding, because they were very eager to participate,” says Professor Vainio. Dogs’ eye movements were measured while they watched facial images of familiar humans and dogs (e.g. the dog’s owner and another dog from the same family) displayed on a computer screen. As a comparison, the dogs were shown facial images of dogs and humans that the dogs had never met.
Dogs preferred faces of familiar conspecifics
The results indicate that dogs were able to perceive faces in the images. Dogs looked at images of dogs longer than images of humans, regardless of the familiarity of the faces presented in the images. This corresponds to a previous study by Professor Vainio’s research group, which found that dogs prefer viewing conspecific faces over human faces.
Dogs fixed their gaze more often on familiar faces and eyes than on strange ones; that is, dogs scanned familiar faces more thoroughly.
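Eye-tracking comparisons like this typically boil down to asking what fraction of total fixation time falls on a region of interest such as the eyes. A rough Python sketch with invented data (the function and the numbers are illustrative, not from the study):

```python
def gaze_proportion(fixations, region):
    """Fraction of total fixation time spent on a given region of interest.

    `fixations` is a list of (region_label, duration_ms) pairs, the kind
    of record an eye tracker yields after fixation detection.
    """
    total = sum(d for _, d in fixations)
    on_region = sum(d for r, d in fixations if r == region)
    return on_region / total if total else 0.0

# Hypothetical trial: a dog viewing a familiar face.
trial = [("eyes", 420), ("muzzle", 180), ("background", 100), ("eyes", 300)]
print(round(gaze_proportion(trial, "eyes"), 2))  # 0.72
```

Comparing such proportions across familiar vs. strange and upright vs. inverted faces is what lets researchers say one condition was scanned "more thoroughly" than another.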
In addition, some of the images were presented in inverted form, i.e. upside-down. The inverted faces were used because their physical properties correspond to those of normal upright facial images (e.g. the same colors, contrasts and shapes). It is known that the human brain processes upside-down images differently than normal facial images. Thus far, it had not been studied how dogs gaze at inverted or familiar faces. Dogs viewed upright faces for as long as inverted faces, but they gazed more at the eye area of upright faces, just like humans.
This study shows that the gazing behavior of dogs is guided not only by the physical properties of images but also by the information the images present and its semantic meaning. Dogs are able to see faces in images, and they differentiate familiar faces from strange ones. These results indicate that dogs may have facial recognition skills similar to those of humans.

Filed under dogs facial recognition eye movements face processing psychology neuroscience science

Do Patients in a Vegetative State Recognize Loved Ones?

TAU researchers find unresponsive patients’ brains may recognize photographs of their family and friends

Patients in a vegetative state are awake, breathe on their own, and seem to go in and out of sleep. But they do not respond to what is happening around them and exhibit no signs of conscious awareness. With communication impossible, friends and family are left wondering if the patients even know they are there.

Now, using functional magnetic resonance imaging (fMRI), Dr. Haggai Sharon and Dr. Yotam Pasternak of Tel Aviv University’s Functional Brain Center and Sackler Faculty of Medicine and the Tel Aviv Sourasky Medical Center have shown that the brains of patients in a vegetative state emotionally react to photographs of people they know personally as though they recognize them.

"We showed that patients in a vegetative state can react differently to different stimuli in the environment depending on their emotional value," said Dr. Sharon. "It’s not a generic thing; it’s personal and autobiographical. We engaged the person, the individual, inside the patient."

The findings, published in PLOS ONE, deepen our understanding of the vegetative state and may offer hope for better care and the development of novel treatments. Researchers from TAU’s School of Psychological Sciences, Department of Neurology, and Sagol School of Neuroscience and the Loewenstein Hospital in Ra'anana contributed to the research.

Talking to the brain

For many years, patients in a vegetative state were believed to have no awareness of self or environment. But in recent years, doctors have made use of fMRI to examine brain activity in such patients. They have found that some patients in a vegetative state can perform complex cognitive tasks on command, like imagining a physical activity such as playing tennis, or, in one case, even answering yes-or-no questions. But these cases are rare and don’t provide any indication as to whether patients are having personal emotional experiences in such a state.

To gain insight into “what it feels like to be in a vegetative state,” the researchers worked with four patients in a persistent (lasting a month or more) or permanent (persisting for more than three months) vegetative state. They showed them photographs of people they did and did not personally know, then gauged the patients’ reactions using fMRI, which measures blood flow in the brain to detect areas of neurological activity in real time. In response to all the photographs, a region specific to facial recognition was activated in the patients’ brains, indicating that their brains had correctly identified that they were looking at faces.

But in response to the photographs of close family members and friends, brain regions involved in emotional significance and autobiographical information were also activated in the patients’ brains. In other words, the patients reacted with activations of brain centers involved in processing emotion, as though they knew the people in the photographs. The results suggest patients in a vegetative state can register and categorize complex visual information and connect it to memories – a groundbreaking finding.

The ghost in the machine

However, the researchers could not be sure if the patients were conscious of their emotions or just reacting spontaneously. So they then verbally asked the patients to imagine their parents’ faces. Surprisingly, one patient, a 60-year-old kindergarten teacher who was hit by a car while crossing the street, exhibited complex brain activity in the face- and emotion-specific brain regions, identical to brain activity seen in healthy people. The researchers say her response is the strongest evidence yet that vegetative-state patients can be “emotionally aware.” A second patient, a 23-year-old woman, exhibited activity just in the emotion-specific brain regions. (Significantly, both patients woke up within two months of the tests. They did not remember being in a vegetative state.)

"This experiment, a first of its kind, demonstrates that some vegetative patients may not only possess emotional awareness of the environment but also experience emotional awareness driven by internal processes, such as images," said Dr. Sharon.

Research focused on the “emotional awareness” of patients in a vegetative state is only a few years old. The researchers hope their work will eventually contribute to improved care and treatment. They have also begun working with patients in a minimally conscious state to better understand how regions of the brain interact in response to familiar cues. Emotions, they say, could help unlock the secrets of consciousness.

(Source: aftau.org)

Filed under vegetative state emotion neuroimaging brain activity facial recognition consciousness neuroscience science

Facial Recognition is More Accurate in Photos Showing Whole Person

Subtle body cues allow people to identify others with surprising accuracy when faces are difficult to differentiate. This skill may help researchers improve person-recognition software and expand their understanding of how humans recognize each other.

A study published in Psychological Science by researchers at The University of Texas at Dallas demonstrates that humans rely on non-facial cues, such as body shape and build, to identify people in challenging viewing conditions, such as poor lighting.

“Psychologists and computer scientists have concentrated almost exclusively on the role of the face in person recognition,” explains lead researcher Allyson Rice. “Our results show that the body can also provide important and sometimes sufficient identity information for person recognition.”

During several experiments, researchers asked college-age participants to look at images of two people side-by-side and identify whether the images showed the same person. Some pairs looked similar despite showing different people, while other image pairs showed the same person with a different appearance. The researchers used computer face recognition systems to find pairs of pictures in which facial characteristics were difficult to use for identification.

Overall, participants accurately discerned whether the images showed the same person when they were provided complete images that showed both the face and body. Participants were just as accurate in identifying people in the image pairs when the faces were blocked out and only the bodies were shown. But, like the computer-based face recognition systems, participants had trouble identifying images of the subjects’ faces without their bodies.
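Automatic person-recognition systems like those used to select the difficult pairs typically reduce each image to an embedding vector and declare a match when two embeddings are similar enough. A minimal Python sketch with made-up vectors and an arbitrary threshold (illustrative only, not any particular system's pipeline):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two identity-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def same_person(emb_a, emb_b, threshold=0.8):
    """Declare a match when the embeddings are similar enough."""
    return cosine_similarity(emb_a, emb_b) >= threshold

# Hypothetical embeddings: two shots of one person, one of a stranger.
anna_1 = [0.9, 0.1, 0.4]
anna_2 = [0.8, 0.2, 0.5]
other = [0.1, 0.9, 0.3]
print(same_person(anna_1, anna_2))  # True
print(same_person(anna_1, other))   # False
```

When face regions alone yield near-identical embeddings for two different people, the pair becomes a hard verification case, which is exactly the situation in which the study's participants fell back on body cues.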

Image: Above are pairs of photographs that face-recognition software failed to identify correctly. The top two photos are of the same person, while the bottom two photos are of different people.

When asked, participants thought they were using primarily facial features to identify the subjects. To unravel the paradox, the researchers used eye-tracking equipment to determine where participants were actually looking. They found participants spent more time looking at the body whenever the face did not provide enough information to identify the subjects.

“People’s recognition strategies were inaccessible to their conscious awareness,” Rice said. “This provides a cautionary tale in ascribing credibility to people’s subjective reports of how they came to an identity decision.”

Dr. Alice O’Toole, Aage and Margareta Møller Professor in the School of Behavioral and Brain Sciences, has worked on facial recognition for over 15 years and supervised the project.

“Given the widespread use of face recognition systems in security settings, it is important for these systems to make use of all potentially helpful information,” O’Toole said. “Our work shows that the body can be surprisingly useful for identification, especially when the face fails to provide the necessary identity information.”

(Source: utdallas.edu)

Filed under facial recognition face perception body cues eye tracking conscious awareness psychology neuroscience science

377 notes

Meet London’s Babylab, where scientists experiment on babies’ brains
In the laboratories of the Henry Wellcome Building at Birkbeck, University of London, children’s squeaky toys lie scattered on the floor. Brightly coloured posters of animals are pasted on the walls and picture books are stacked on the low tables. This is the Babylab — a research centre that experiments on children aged one month to three years, to understand how they learn, develop and think. “The way babies’ brains change is an amazing and mysterious process,” says the lab director, psychologist Mark Johnson. “The brain increases in size by three- to four-fold between birth and teenage years, but we don’t understand how that relates to its function.”
The Birkbeck neuroscientists are interested in finding out how babies recognise faces, how they learn to pay attention to some things and not others, how they perceive emotion and how their language develops. Studies published by the lab have shown that babies prefer to look at faces over objects. They have also found that differences in the dopamine-producing gene can affect babies’ attention span and that at six to eight months of age, there are detectable differences in the brain patterns of babies who were later diagnosed with autism.
The biggest obstacle is designing the right kinds of experiment. “There aren’t many methods for getting inside the mind of an infant or a toddler,” Johnson explains. Graduate students at the Babylab have teamed up with technology companies, using a €1.9 million (£1.7 million) grant from the European Union, to develop tools such as EEG head nets that record electrical brain activity, helmets that use light to measure blood flow in different parts of the brain, and eye-trackers that help study attention. Eventually, they want to create wireless systems so babies can react and play naturally during experiments. But despite the wires, “all our studies are geared towards making sure our babies are contented,” says Johnson. “If we want data, we need happy babies.”

Filed under babies babylab brain research facial recognition attention EEG neuroscience psychology science

73 notes

Difficulty in Recognizing Faces in Autism Linked to Performance in a Group of Neurons
Neuroscientists at Georgetown University Medical Center (GUMC) have discovered a brain anomaly that explains why some people diagnosed with autism cannot easily recognize faces — a deficit linked to the impairments in social interactions considered to be the hallmark of the disorder.
They also say that the novel neuroimaging analysis technique they developed to arrive at this finding is likely to help link behavioral deficits to differences at the neural level in a range of neurological disorders.
In the final manuscript, published March 15 in the online journal NeuroImage: Clinical, the scientists say that in the brains of many individuals with autism, neurons in the brain area that processes faces (the fusiform face area, or FFA) are too broadly “tuned” to finely discriminate between the facial features of different people. They made this discovery using a form of functional magnetic resonance imaging (fMRI) that scans output from the blueberry-sized FFA, located behind the right ear.
“When your brain is processing faces, you want neurons to respond selectively so that each is picking up a different aspect of individual faces. The neurons need to be finely tuned to understand what is dissimilar from one face to another,” says the study’s senior investigator, Maximilian Riesenhuber, PhD, an associate professor of neuroscience at GUMC.
“What we found in our 15 adult participants with autism is that in those with more severe behavioral deficits, the neurons are more broadly tuned, so that one face looks more like another, as compared with the fine tuning seen in the FFA of typical adults,” he says.
“And we found evidence that reduced selectivity in FFA neurons corresponded to greater behavioral deficits in everyday face recognition in our participants. This makes sense. If your neurons cannot tell different faces apart, it makes it more difficult to tell who is talking to you or understand the facial expressions that are conveyed, which limits social interaction.”
Riesenhuber adds that there is huge variation in the ability of individuals diagnosed with autism to discriminate faces, and that some autistic people have no problem with facial recognition.
“But for those that do have this challenge, it can have substantial ramifications — some researchers believe deficits in face processing are at the root of social dysfunction in autism,” he says.
The neural basis for face processing
Neuroscientists have used traditional fMRI studies in the past to probe the neural bases of behavioral differences in people with autism, but these studies have produced conflicting results, says Riesenhuber. “The fundamental problem with traditional fMRI techniques is that they can tell which parts of the brain become active during face processing, but they are poor at directly measuring neuronal selectivity,” he says, “and it is this neuronal selectivity that predicts face processing performance, as shown in our previous studies.”
To test their hypothesis that differences in neuronal selectivity in the FFA are foundational to differences in face processing abilities in autism, Riesenhuber and the study’s lead author, neuroscientist Xiong Jiang, PhD, developed a novel brain imaging analysis technique, termed local regional heterogeneity, to estimate neuronal selectivity.
“Local regional heterogeneity, or Hcorr, as we called it, is based on the idea that neurons that have similar selectivities will on average show similar responses, whereas neurons that like different stimuli will respond differently,” says Jiang. “This means that individuals with face processing deficits should show more homogeneous activity in their FFA than individuals with more typical face recognition abilities.”
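The quote above gives the intuition behind Hcorr but not its formula. One minimal sketch of that intuition, not the authors' actual implementation, is to take the voxel time series inside an ROI and compute the mean pairwise correlation: the more homogeneous (less selective) the region's responses, the higher the index.

```python
import numpy as np

def hcorr(roi_timeseries):
    """roi_timeseries: array of shape (n_voxels, n_timepoints).
    Returns the mean pairwise Pearson correlation across voxels --
    a simple homogeneity index: higher values mean more uniform
    (less selective) responses within the region."""
    r = np.corrcoef(roi_timeseries)   # n_voxels x n_voxels correlation matrix
    n = r.shape[0]
    iu = np.triu_indices(n, k=1)      # off-diagonal upper triangle only
    return float(r[iu].mean())
```

Under this reading, the study's prediction is that a participant's homogeneity index in the right FFA should track their behavioral face-discrimination deficit, which is the correlation the next paragraph reports.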
They tested the method in 15 adults with autism and 15 adults without the disorder. The autistic participants also underwent a standard assessment of social/behavioral functioning.
The researchers found that in each autistic participant, behavioral ability to tell faces apart was tightly linked to levels of tuning specificity in the right FFA as estimated with Hcorr. This finding was confirmed by another advanced imaging technique, fMRI rapid adaptation, shown by the group in previous work to be a good estimator of neuronal selectivity.
“Compared to the more well-established fMRI-rapid adaptation technique, Hcorr has several significant advantages,” says Jiang. “Hcorr is more sensitive and can estimate neuronal selectivity as well as fMRI rapid adaptation, but with much shorter scans, and Hcorr can even estimate neuronal selectivity using data from resting state scans, thus making the technique suitable even for individuals that cannot perform complicated tasks in the scanner, such as low-functioning autistic adults, or young children.”
“The study suggests that, just as in typical adults, the FFA remains the key region responsible for face processing and that changes in neuronal selectivity in this area are foundational to the variability in face processing abilities found in autism. Our study identifies a clear target for intervention,” says Riesenhuber. Indeed, after the study was completed, the researchers succeeded in improving facial recognition skills in an autistic participant. They showed the participant pairs of faces that were very dissimilar at first but became increasingly similar, and found that FFA tuning improved along with the behavioral ability to tell the faces apart. “This suggests high-level brain areas may still be somewhat plastic in adulthood,” says Riesenhuber.

Filed under ASD autism memory fusiform gyrus FFA facial recognition neuroimaging neuroscience science
