Neuroscience

Articles and news from the latest research reports.

Posts tagged psychology

Study shows moving together builds bonds from the time we learn to walk
Whether they march in unison, row in the same boat or dance to the same song, people who move in time with one another are more likely to bond and work together afterward.
It’s a principle established by previous studies, but now researchers at McMaster have shown that moving in time with others even affects the social behaviour of babies who have barely learned to walk.
“Moving in sync with others is an important part of musical activities,” says Laura Cirelli, lead author of a paper now posted online and scheduled to appear in an upcoming issue of the journal Developmental Science. “These effects show that movement is a fundamental part of music that affects social behavior from a very young age.”
Cirelli and her colleagues in the Department of Psychology, Neuroscience & Behaviour showed that 14-month-old babies were much more likely to help another person after the experience of bouncing up and down in time to music with that person.
Cirelli and fellow doctoral student Kate Einarson worked under the supervision of Professor Laurel Trainor, a specialist in child development research.
They tested 68 babies in all, to see if bouncing to music with another person makes a baby more likely to assist that person by handing back “accidentally” dropped objects.
Working in pairs, one researcher held a baby in a forward-facing carrier and stood facing the second researcher. When the music started to play, both researchers would gently bounce up and down, one bouncing the baby with them. Some babies were bounced in sync with the researcher across from them, and others were bounced at a different tempo.
When the song was over, the researcher who had been facing the baby then performed several simple tasks, including drawing a picture with a marker. While drawing the picture, she would pretend to drop the marker to see whether the infant would pick it up and hand it back to her – a classic test of altruism in babies.
The babies who had been bounced in time with the researcher were much more likely to toddle over, pick up the object and pass it back to the researcher, compared to infants who had been bounced at a different tempo than the experimenter.
While babies who had been bounced out of sync with the researcher only picked up and handed back 30 per cent of the dropped objects, in-sync babies came to the researcher’s aid 50 per cent of the time. The in-sync babies also responded more quickly.
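The paper's exact group sizes and statistics aren't given here, but the arithmetic behind such a comparison can be sketched. Assuming, purely for illustration, that the 68 infants split evenly into two groups of 34 and each contributed a single dropped-object trial (17/34 in-sync helpers vs. 10/34 out-of-sync), a standard two-proportion z-test looks like this:

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: returns (z statistic, two-sided p-value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled proportion under the null hypothesis of no group difference
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via erf; two-sided p-value
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Invented counts: 50% of 34 in-sync infants helped vs. 30% of 34 out-of-sync
z, p = two_proportion_z(17, 34, 10, 34)
```

With these invented counts the difference sits near the conventional significance threshold; the actual study collected multiple dropped-object trials per infant, which gives considerably more statistical power than this one-trial-per-baby sketch.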
The findings suggest that when we sing, clap, bounce or dance in time to music with our babies, these shared experiences of synchronous movement help to form social bonds.
It’s a significant finding, Cirelli believes, because it shows that moving together to music encourages the development of altruistic helping behaviour among those in a social group. It suggests that music could be an important part of day care and kindergarten curricula because it helps to build a co-operative social climate.
Cirelli is now researching whether the experience of synchronous movement with one person leads babies to extend their increased helpfulness to other people or whether infants reserve their altruistic behaviour for their dancing partners.

Filed under infants prosocial behavior motor synchrony child development psychology neuroscience science

Exposure to TV Violence Related to Irregular Attention and Brain Structure

Young adult men who watched more violence on television showed indications of less mature brain development and poorer executive functioning, according to the results of an Indiana University School of Medicine study published online in the journal Brain and Cognition.

The researchers used psychological testing and MRI scans to measure mental abilities and the volume of brain regions in 65 healthy men with normal IQ, between the ages of 18 and 29, specifically chosen because they were not frequent video game players.

Lead author Tom A. Hummer, Ph.D., assistant research professor in the IU Department of Psychiatry, said the young men provided estimates of their television viewing over the past year and then kept a detailed diary of their TV viewing for a week. Participants also completed a series of psychological tests measuring inhibitory control, attention and memory. At the conclusion, MRI scans were used to measure brain structure.

Executive function is the broad ability to formulate plans, make decisions, reason and problem-solve, regulate attention, and inhibit behavior in order to achieve goals.

"We found that the more violent TV viewing a participant reported, the worse they performed on tasks of attention and cognitive control," Dr. Hummer said. "On the other hand, the overall amount of TV watched was not related to performance on any executive function tests."

Dr. Hummer noted that these executive functioning abilities can be important for controlling impulsive behaviors, including aggression. “The worry is that more impulsivity does not mix well with the behaviors modeled in violent programming.”

Tests that measured working memory, another subtype of executive functioning, were not found to be related to overall or violent TV viewing.

Comparing TV habits to brain images also produced results that Dr. Hummer and colleagues believe are significant.

"When we looked at the brain scans of young men with higher violent television exposure, there was less volume of white matter connecting the frontal and parietal lobes, which can be a sign of less maturity in brain development," he said.

White matter is tissue in the brain that insulates nerve fibers connecting different brain regions, making functioning more efficient. In typical development, the amount or volume of white matter increases as the brain makes more connections until about age 30, improving communication between regions of the brain. Connections between the frontal and parietal lobes are thought to be especially important for executive functioning.

"The take-home message from this study is the finding of a relationship between how much violent television we watch and important aspects of brain functioning like controlled attention and inhibition," Dr. Hummer said.

Dr. Hummer cautions that more research is needed to better understand the study findings.

"With this study we could not isolate whether people with poor executive function are drawn to programs with more violence or if the content of the TV viewing is responsible for affecting the brain’s development over a period of time," Dr. Hummer said. "Additional longitudinal work is necessary to resolve whether individuals with poor executive function and slower white matter growth are more drawn to violent programming or if exposure to media violence modifies development of cognitive control."

(Source: newswise.com)

Filed under executive function television media violence white matter brain structure psychology neuroscience science

Does the moon affect our sleep?
Beliefs about the moon’s influence on humans are widespread, and many people report sleeplessness around the time of the full moon. In contrast to earlier studies, scientists from the Max Planck Institute of Psychiatry in Munich did not observe any correlation between human sleep and the lunar phases. The researchers analyzed pre-existing sleep data from a large cohort of volunteers. Their further identification of mostly unpublished null findings suggests that the conflicting results of previous studies might be due to a publication bias.
For centuries, people have believed that the moon cycle influences human health, behavior and physiology. Folklore mainly links the full moon with sleeplessness. But what about the scientific background?
Several studies have searched pre-existing datasets on human sleep for a lunar effect, but the results varied widely, and the effects on sleep have rarely been assessed with objective measures such as a sleep EEG. In some studies women appeared more affected by the moon phase, in others men. Two analyses of datasets from 2013 and 2014, each including between 30 and 50 volunteers, agreed on shorter total sleep duration in the nights around the full moon. However, the two studies came to conflicting results on other variables. For example, in one analysis the onset of REM sleep, the phase in which we mainly dream, was most delayed around the new moon, whereas the other study observed the longest delay around the full moon.
To overcome the problem of possible chance findings in small study samples, scientists have now analyzed the sleep data of 1,265 volunteers across 2,097 nights. “Investigating this large cohort of test persons and sleep nights, we were unable to replicate previous findings,” states Martin Dresler, neuroscientist at the Max Planck Institute of Psychiatry in Munich, Germany, and the Donders Institute for Brain, Cognition and Behaviour in Nijmegen, Netherlands. “We could not observe a statistically relevant correlation between human sleep and the lunar phases.” Further, his team identified several unpublished null findings, including cumulative analyses of more than 20,000 sleep nights, which suggest that the conflicting results might be an example of publication bias (i.e. the file drawer problem).
The file drawer problem describes the phenomenon that many studies are conducted but never reported – they remain in the file drawer. One much-discussed publication bias in science, medicine and pharmacology is the tendency to report experimental results that are positive or significant and to omit results that are negative or inconclusive.
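The file drawer effect is easy to demonstrate with a small simulation (a toy sketch, not taken from the study): generate many studies of a non-existent effect, "publish" only the significant ones, and the published literature ends up reporting a sizeable effect that isn't there.

```python
import random
import statistics

def simulate_file_drawer(n_studies=2000, n=40, seed=1):
    """Simulate studies of a true-null effect; 'publish' only significant ones.

    Each study compares two groups of size n drawn from the SAME distribution,
    so any apparent effect is pure noise. Studies that clear p < .05 nonetheless
    report inflated effect sizes: the file drawer problem.
    """
    random.seed(seed)
    published = []
    for _ in range(n_studies):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        diff = statistics.mean(a) - statistics.mean(b)
        se = ((statistics.variance(a) + statistics.variance(b)) / n) ** 0.5
        if abs(diff / se) > 1.96:  # crude two-sided p < .05 cutoff
            published.append(abs(diff))
    return published

pub = simulate_file_drawer()
# Only ~5% of these null studies reach "publication", yet every published
# one reports a substantial (entirely spurious) group difference.
```

The true effect is exactly zero, but the average published effect is well above zero, which is why pooling only the published moon-and-sleep studies can be misleading.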
Until now, the influence of the lunar cycle on human sleep had been investigated only in re-analyses of earlier studies that were originally designed for other purposes. “To overcome the obvious limitations of retrospective data analysis, carefully controlled studies specifically designed for the test of lunar cycle effects on sleep in large samples are required for a definite answer,” comments Dresler.

Filed under sleep lunar phases EEG moon cycle psychology neuroscience science

New Study Shows Limited Motor Skills In Early Infancy May Be Trait of Autism
Researchers from Kennedy Krieger Institute in Baltimore, Md., announced findings that provide evidence of reduced grasping and fine motor activity among six-month-old infants at increased familial risk for autism spectrum disorders (ASD). The research, published in Child Development, has important implications for our overall understanding of ASDs. Furthermore, the results suggest that subtle lags in object exploration-related motor skills in early infancy may represent an ASD endophenotype (a heritable characteristic that may be genetically related to ASD without predicting a full diagnosis) and further our understanding of the genes involved in the disorder.
“Among the infants with familial history of ASD, many were shown to have reduced fine motor skills regardless of eventual ASD diagnosis,” says Dr. Rebecca Landa, lead author and director of Kennedy Krieger’s Center for Autism and Related Disorders. “This means that reduced fine motor skills could be an ASD endophenotype without predicting full diagnosis. Identifying potential endophenotypes has important implications for future research and may improve our understanding of the neurobiology and genetics of ASDs.”
Researchers conducted two experiments examining the relationship between early motor development and object exploration in children at low risk (LR) or high risk (HR) of developing an ASD. They measured key early learning skills, such as object manipulation and grasping activity, in infants at six months of age and again at 10 months. While all infants scored within the expected range and showed no difference in object manipulation, there were subtle signs of reduced grasping activity in HR infants compared with their LR age-peers. These findings suggest that regardless of developmental outcome, early motor skill differences in HR infants may represent an endophenotype linked to ASD.
About Experiment 1
In Experiment 1, participants included 129 infants, largely infant siblings of children with confirmed ASD diagnoses. At the time of testing, most participants were six months old; they were then followed longitudinally to the age of 36 months. Infants completed the Mullen Scales of Early Learning (MSEL), a standardized assessment providing scores in five categories: Gross Motor (GM); Fine Motor (FM); Visual Reception (VR); Receptive Language (RL); and Expressive Language (EL). Based on the results of this assessment, infants were divided into four groups: low-risk (LR) infants without ASD; high-risk (HR) infants without ASD, language, or social delays; HR infants showing language or social delays but not ASD; and HR infants with an autism or ASD diagnosis. All children in the HR ASD group met DSM-IV diagnostic criteria for the disorder.
All four groups in Experiment 1 scored within the typical range on the MSEL subtests, meaning that none exhibited a clinical delay in their overall fine motor development at age six months. Subtle differences between HR and LR infants emerged even in HR infants who did not receive a diagnosis of ASD or other delays by age 36 months, which suggests that lower fine motor scores on the MSEL are characteristic of infants at high familial risk for ASD. In order to examine whether the HR infants would catch up to the LR infants in time, researchers conducted a second experiment with new participants.
About Experiment 2
Experiment 2 focused on a new group of six-month-old infants in both LR and HR categories and examined only their grasping behaviors in a naturalistic, free-play context, which was an important factor that emerged in Experiment 1. Participants included 42 infants who were siblings of children with ASD. The infants were observed in an unstructured play session.
The results of Experiment 2 showed reduced grasping and object exploration activity in six-month-old infants at HR for ASD. Overall, the MSEL FM T-score results in Experiment 2 showed a similar pattern to Experiment 1, though the statistical results were somewhat weakened by an effect of gender in the LR sample. Unique to Experiment 2 was its sole focus on object manipulation-related items of the MSEL, which offered a consistent measure for identifying differences between HR and LR infants. Reduced grasping activity in HR infants at age six months was also observed during the unstructured free-play task, providing additional evidence for the findings of Experiment 1. However, the HR infants caught up to the LR group in grasping, as measured in this study, by 10 months of age.
Future studies are needed to examine these preliminary findings more closely and to specifically assess grasping ability in infants who receive an ASD diagnosis later in life.
(Image: Bigstock)

Filed under ASD autism motor control motor activity infants psychology neuroscience science

Distracted minds still see blurred lines

From animated ads on Main Street to downtown intersections packed with pedestrians, the eyes of urban drivers have much to see.

But while city streets have become increasingly crowded with distractions, our ability to process visual information has remained unchanged for millions of years. Can modern eyes keep up?

Encouragingly, a new study suggests that even as we’re processing a million things at once, we are still sensitive to certain kinds of changes in our visual environment — even while performing a difficult task.

In a paper published in Visual Cognition, researchers from Concordia University, Kansas State University, the University of Findlay, the University of Central Florida and the University of Illinois show that we can automatically detect changes in blur across our field of view.

To investigate, the research team focused on the common problem of blurred sight, which can be caused by factors like changes in distance between objects, as well as vision disorders like near-sightedness, far-sightedness and astigmatism.

“Blur is normally compensated for by adjusting the lens of the eye to bring the image back into focus,” says study co-author Aaron Johnson, a professor in the Department of Psychology at Concordia.

“We wanted to know if the detection of this blur by the brain happens automatically, because previous research had resulted in two conflicting views.”

Those views suggest:

  1. Blur-detection requires mental effort: By focusing your attention on a blurry object in your peripheral vision, you can bring the object into focus — as though you were focusing a camera manually.
  2. Blur-detection is automatic: When the brain encounters blurred vision, it automatically compensates — as though you were using a camera with a permanent autofocus function.

“If blur is detected automatically and doesn’t require attention, then performing another cognitive task — driving, say — at the same time shouldn’t change our ability to detect the blur,” Johnson says.

To determine which of these two theories was correct, he and his colleagues used a new technique that presented different amounts of blur to various regions of the eye.

The researchers showed study participants (individuals with normal, or corrected-to-normal, vision) 1,296 distinct images — pictures of things ranging from forests to building interiors — and used a window that moved based on their eye movements to give the pictures two levels of resolution.
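The gaze-contingent moving-window idea can be illustrated with a toy sketch (not the authors' actual software; the box-blur kernel, circular window, and all sizes here are invented): keep the image sharp inside a window around the current gaze point and blur everything outside it.

```python
import numpy as np

def box_blur(img, k=5):
    """Naive box blur: average each pixel over a k x k neighborhood."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def gaze_contingent(img, gaze_yx, radius):
    """Blur everything outside a circular window centred on the gaze point."""
    yy, xx = np.indices(img.shape)
    inside = (yy - gaze_yx[0]) ** 2 + (xx - gaze_yx[1]) ** 2 <= radius ** 2
    blurred = box_blur(img)
    return np.where(inside, img, blurred)

# Toy 64x64 "image"; in the real setup the gaze point would come from an
# eye tracker and the display would be re-rendered as the eyes move.
img = np.random.default_rng(0).random((64, 64))
display = gaze_contingent(img, gaze_yx=(32, 32), radius=10)
```

In the experiment the window follows the eyes in real time, so the researchers can control exactly which retinal regions receive the blurred versus sharp version of each picture.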

As they changed the resolution from blurry to sharp, the researchers gave participants mental tasks of varying degrees of difficulty. Regardless of the difficulty level, though, the subjects’ ability to detect blur in these pictures was unchanged.

“Our study proves that, much like other simple visual features such as colour and size, blur in an image doesn’t seem to require mental effort to detect,” Johnson says.

“The process may be what we call ‘pre-attentive’ — that is, little or no attention is required to detect it. As such, this research provides insight into a key task, compensating for blur, that the visual system must perform on a daily basis. In the future, I hope to study how blur detection changes with age.”

(Source: concordia.ca)

Filed under object recognition visual system categorization blurred vision psychology neuroscience science

Anxious Children have Bigger “Fear Centers” in the Brain

The amygdala is a key “fear center” in the brain. Alterations in the development of the amygdala during childhood may have an important influence on the development of anxiety problems, reports a new study in the current issue of Biological Psychiatry.

Researchers at the Stanford University School of Medicine recruited 76 children, 7 to 9 years of age, a period when anxiety-related traits and symptoms can first be reliably identified. The children’s parents completed assessments designed to measure the anxiety levels of the children, and the children then underwent non-invasive magnetic resonance imaging (MRI) scans of brain structure and function.

The researchers found that children with high levels of anxiety had enlarged amygdala volume and increased connectivity with other brain regions responsible for attention, emotion perception, and regulation, compared to children with low levels of anxiety. They also developed an equation that reliably predicted the children’s anxiety level from the MRI measurements of amygdala volume and amygdala functional connectivity.
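The paper's actual predictive model isn't specified here, but the general approach, fitting an equation that maps brain measures onto anxiety scores, can be sketched with ordinary least squares on synthetic stand-in data (all numbers below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the study's predictors in 76 children:
# amygdala volume and a functional-connectivity score (values invented).
n = 76
volume = rng.normal(1.7, 0.2, n)        # arbitrary units
connectivity = rng.normal(0.5, 0.1, n)
# Invented "true" relationship plus noise, just to give lstsq something to fit
anxiety = 2.0 * volume + 3.0 * connectivity + rng.normal(0, 0.3, n)

# Fit anxiety ~ intercept + volume + connectivity by ordinary least squares
X = np.column_stack([np.ones(n), volume, connectivity])
coef, *_ = np.linalg.lstsq(X, anxiety, rcond=None)

# How well the fitted equation predicts the anxiety scores
predicted = X @ coef
r = np.corrcoef(predicted, anxiety)[0, 1]
```

A real predictive claim would additionally require validating the fitted equation on children held out of the fitting step, since an equation evaluated only on its own training data will overstate how "reliably" it predicts.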

The most affected region was the basolateral portion of the amygdala, a subregion of the amygdala implicated in fear learning and the processing of emotion-related information.

“It is a bit surprising that alterations to the structure and connectivity of the amygdala were so significant in children with higher levels of anxiety, given both the young age of the children and the fact that their anxiety levels were too low to be observed clinically,” commented Dr. Shaozheng Qin, first author on this study.

Dr. John Krystal, Editor of Biological Psychiatry, commented, “It is critical that we move from these interesting cross-sectional observations to longitudinal studies, so that we can separate the extent to which larger and better connected amygdalae are risk factors or consequences of increased childhood anxiety.”

“However, our study represents an important step in characterizing altered brain systems and developing predictive biomarkers for the identification of young children at risk for anxiety disorders,” Qin added. “Understanding the influence of childhood anxiety on specific amygdala circuits, as identified in our study, will provide important new insights into the neurodevelopmental origins of anxiety in humans.”

(Source: elsevier.com)

Filed under amygdala anxiety fear children emotion psychology neuroscience science

980 notes

When good people do bad things
When people get together in groups, unusual things can happen — both good and bad. Groups create important social institutions that an individual could not achieve alone, but there can be a darker side to such alliances: Belonging to a group makes people more likely to harm others outside the group.
“Although humans exhibit strong preferences for equity and moral prohibitions against harm in many contexts, people’s priorities change when there is an ‘us’ and a ‘them,’” says Rebecca Saxe, an associate professor of cognitive neuroscience at MIT. “A group of people will often engage in actions that are contrary to the private moral standards of each individual in that group, sweeping otherwise decent individuals into ‘mobs’ that commit looting, vandalism, even physical brutality.”
Several factors play into this transformation. When people are in a group, they feel more anonymous, and less likely to be caught doing something wrong. They may also feel a diminished sense of personal responsibility for collective actions.
Saxe and colleagues recently studied a third factor that cognitive scientists believe may be involved in this group dynamic: the hypothesis that when people are in groups, they “lose touch” with their own morals and beliefs, and become more likely to do things that they would normally believe are wrong.
In a study that recently went online in the journal NeuroImage, the researchers measured brain activity in a part of the brain involved in thinking about oneself. They found that in some people, this activity was reduced when the subjects participated in a competition as part of a group, compared with when they competed as individuals. Those people were more likely to harm their competitors than people who did not exhibit this decreased brain activity.
“This process alone does not account for intergroup conflict: Groups also promote anonymity, diminish personal responsibility, and encourage reframing harmful actions as ‘necessary for the greater good.’ Still, these results suggest that at least in some cases, explicitly reflecting on one’s own personal moral standards may help to attenuate the influence of ‘mob mentality,’” says Mina Cikara, a former MIT postdoc and lead author of the NeuroImage paper.
Group dynamics
Cikara, who is now an assistant professor at Carnegie Mellon University, started this research project after experiencing the consequences of a “mob mentality”: During a visit to Yankee Stadium, her husband was ceaselessly heckled by Yankees fans for wearing a Red Sox cap. “What I decided to do was take the hat from him, thinking I would be a lesser target by virtue of the fact that I was a woman,” Cikara says. “I was so wrong. I have never been called names like that in my entire life.”
The harassment, which continued throughout the trip back to Manhattan, provoked a strong reaction in Cikara, who isn’t even a Red Sox fan.
“It was a really amazing experience because what I realized was I had gone from being an individual to being seen as a member of ‘Red Sox Nation.’ And the way that people responded to me, and the way I felt myself responding back, had changed, by virtue of this visual cue — the baseball hat,” she says. “Once you start feeling attacked on behalf of your group, however arbitrary, it changes your psychology.”
Cikara, then a third-year graduate student at Princeton University, started to investigate the neural mechanisms behind the group dynamics that produce bad behavior. In the new study, done at MIT, Cikara, Saxe (who is also an associate member of MIT’s McGovern Institute for Brain Research), former Harvard University graduate student Anna Jenkins, and former MIT lab manager Nicholas Dufour focused on a part of the brain called the medial prefrontal cortex. When someone is reflecting on himself or herself, this part of the brain lights up in functional magnetic resonance imaging (fMRI) brain scans.
A couple of weeks before the study participants came in for the experiment, the researchers surveyed each of them about their social-media habits, as well as their moral beliefs and behavior. This allowed the researchers to create individualized statements for each subject that were true for that person — for example, “I have stolen food from shared refrigerators” or “I always apologize after bumping into someone.”
When the subjects arrived at the lab, their brains were scanned as they played a game once on their own and once as part of a team. The purpose of the game was to press a button if they saw a statement related to social media, such as “I have more than 600 Facebook friends.”
The subjects also saw their personalized moral statements mixed in with sentences about social media. Brain scans revealed that when subjects were playing for themselves, the medial prefrontal cortex lit up much more when they read moral statements about themselves than statements about others, consistent with previous findings. However, during the team competition, some people showed a much smaller difference in medial prefrontal cortex activation when they saw the moral statements about themselves compared to those about other people.
Those people also turned out to be much more likely to harm members of the competing group during a task performed after the game. Each subject was asked to select photos that would appear with the published study, from a set of four photos apiece of two teammates and two members of the opposing team. The subjects with suppressed medial prefrontal cortex activity chose the least flattering photos of the opposing team members, but not of their own teammates.
“This is a nice way of using neuroimaging to try to get insight into something that behaviorally has been really hard to explore,” says David Rand, an assistant professor of psychology at Yale University who was not involved in the research. “It’s been hard to get a direct handle on the extent to which people within a group are tapping into their own understanding of things versus the group’s understanding.”
Getting lost
The researchers also found that after the game, people with reduced medial prefrontal cortex activity had more difficulty remembering the moral statements they had heard during the game.
“If you need to encode something with regard to the self and that ability is somehow undermined when you’re competing with a group, then you should have poor memory associated with that reduction in medial prefrontal cortex signal, and that’s exactly what we see,” Cikara says.
Cikara hopes to follow up on these findings to investigate what makes some people more likely to become “lost” in a group than others. She is also interested in studying whether people are slower to recognize themselves or pick themselves out of a photo lineup after being absorbed in a group activity.

Filed under prefrontal cortex social cognition intergroup competition psychology neuroscience science

120 notes

Neural reward response may demonstrate why quitting smoking is harder for some
For some cigarette smokers, strategies to aid quitting work well, while for many others no method seems to work. Researchers have now identified an aspect of brain activity that helps to predict the effectiveness of a reward-based strategy as motivation to quit smoking.
The researchers observed the brains of nicotine-deprived smokers with functional magnetic resonance imaging (fMRI) and found that those who exhibited the weakest response to rewards were also the least willing to refrain from smoking, even when offered money to do so.
"We believe that our findings may help to explain why some smokers find it so difficult to quit smoking," said Stephen J. Wilson, assistant professor of psychology, Penn State. "Namely, potential sources of reinforcement for giving up smoking — for example, the prospect of saving money or improving health — may hold less value for some individuals and, accordingly, have less impact on their behavior."
The researchers recruited 44 smokers to examine striatal response to monetary reward in those expecting to smoke and in those who were not, and the subsequent willingness of the smokers to forgo a cigarette in an effort to earn more money.
"The striatum is part of the so-called reward system in the brain," said Wilson. "It is the area of the brain that is important for motivation and goal-directed behavior — functions highly relevant to addiction."
The participants, who were between the ages of 18 and 45, all reported that they had smoked at least 10 cigarettes per day for the past 12 months. They were instructed to abstain from smoking and from using any products containing nicotine for 12 hours prior to arriving for the experiment.
Each participant spent time in an fMRI scanner while playing a card-guessing game with the potential to win money. The participants were informed that they would have to wait approximately two hours, until the experiment was over, to smoke a cigarette. Partway through the card-guessing task, half of the participants were informed that there had been a mistake, and they would be allowed to smoke during a 50-minute break that would occur in another 16 minutes.
However, when the time came for the cigarette break, the participant was told that for every 5 minutes he or she did not smoke, he or she would receive $1 — with the potential to earn up to $10.
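The incentive schedule is simple arithmetic: $1 for each full five minutes abstained during the 50-minute break, capping at $10. A minimal sketch (the function name is ours, not the study's):

```python
def payout(minutes_abstained: int) -> int:
    """Dollars earned under the study's scheme: $1 per full 5 minutes
    without smoking, over a 50-minute break (maximum $10)."""
    return min(minutes_abstained, 50) // 5

print(payout(50))  # holding out the full break earns the $10 maximum
print(payout(23))  # abstaining 23 minutes earns $4
```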
Wilson and his colleagues reported in a recent issue of Cognitive, Affective and Behavioral Neuroscience that they found that smokers who could not resist the temptation to smoke also showed weaker responses in the ventral striatum when offered monetary rewards while in the fMRI.
"Our results suggest that it may be possible to identify individuals prospectively by measuring how their brains respond to rewards, an observation that has significant conceptual and clinical implications," said Wilson. "For example, particularly ‘at-risk’ smokers could potentially be identified prior to a quit attempt and be provided with special interventions designed to increase their chances for success."

Filed under smoking neuroimaging striatum individual differences reward system psychology neuroscience science

233 notes

From contemporary syntax to human language’s deep origins

On the island of Java, in Indonesia, the silvery gibbon, an endangered primate, lives in the rainforests. In a behavior that’s unusual for a primate, the silvery gibbon sings: It can vocalize long, complicated songs, using 14 different note types, that signal territory and send messages to potential mates and family.
Far from being a mere curiosity, the silvery gibbon may hold clues to the development of language in humans. In a newly published paper, two MIT professors assert that by re-examining contemporary human language, we can see indications of how human communication could have evolved from the systems underlying the older communication modes of birds and other primates.
From birds, the researchers say, we derived the melodic part of our language, and from other primates, the pragmatic, content-carrying parts of speech. Sometime within the last 100,000 years, those capacities fused into roughly the form of human language that we know today.
But how? Other animals, it appears, have finite sets of things they can express; human language is unique in allowing for an infinite set of new meanings. What allowed unbounded human language to evolve from bounded language systems?
“How did human language arise? It’s far enough in the past that we can’t just go back and figure it out directly,” says linguist Shigeru Miyagawa, the Kochi-Manjiro Professor of Japanese Language and Culture at MIT. “The best we can do is come up with a theory that is broadly compatible with what we know about human language and other similar systems in nature.”
Specifically, Miyagawa and his co-authors think that some apparently infinite qualities of modern human language, when reanalyzed, actually display the finite qualities of languages of other animals — meaning that human communication is more similar to that of other animals than we generally realized.
“Yes, human language is unique, but if you take it apart in the right way, the two parts we identify are in fact of a finite state,” Miyagawa says. “Those two components have antecedents in the animal world. According to our hypothesis, they came together uniquely in human language.”
Introducing the ‘integration hypothesis’
The current paper, “The Integration Hypothesis of Human Language Evolution and the Nature of Contemporary Languages,” is published this week in Frontiers in Psychology. The authors are Miyagawa; Robert Berwick, a professor of computational linguistics and computer science and engineering in MIT’s Laboratory for Information and Decision Systems; and Shiro Ojima and Kazuo Okanoya, scholars at the University of Tokyo.
The paper’s conclusions build on past work by Miyagawa, which holds that human language consists of two distinct layers: the expressive layer, which relates to the mutable structure of sentences, and the lexical layer, where the core content of a sentence resides. That idea, in turn, is based on previous work by linguistics scholars including Noam Chomsky, Kenneth Hale, and Samuel Jay Keyser.
The expressive layer and lexical layer have antecedents, the researchers believe, in the languages of birds and other mammals, respectively. For instance, in another paper published last year, Miyagawa, Berwick, and Okanoya presented a broader case for the connection between the expressive layer of human language and birdsong, including similarities in melody and range of beat patterns.
Birds, however, have a limited number of melodies they can sing or recombine, and nonhuman primates have a limited number of sounds they make with particular meanings. That would seem to present a challenge to the idea that human language could have derived from those modes of communication, given the seemingly infinite expression possibilities of humans.
But the researchers think certain parts of human language actually reveal finite-state operations that may be linked to our ancestral past. Consider a linguistic phenomenon known as “discontiguous word formation,” which involves sequences formed using the prefix “anti,” such as “antimissile missile,” or “anti-antimissile missile missile,” and so on. Some linguists have argued that this kind of construction reveals the infinite nature of human language, since the term “antimissile” can continually be embedded in the middle of the phrase.
However, as the researchers state in the new paper, “This is not the correct analysis.” The word “antimissile” is actually a modifier, meaning that as the phrase grows larger, “each successive expansion forms via strict adjacency.” That means the construction consists of discrete units of language. In this case and others, Miyagawa says, humans use “finite-state” components to build out their communications.
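To make the strict-adjacency point concrete, here is a small generator (our illustration, not taken from the paper) in which each successive phrase forms from the previous one by attaching “anti” at the front and one more “missile” at the end, both by simple adjacent attachment rather than center-embedding:

```python
def anti_phrase(n: int) -> str:
    """Build the nth 'antimissile' construction. Each step prefixes 'anti'
    (hyphenated when the phrase already begins with 'anti') and appends
    one more 'missile' by strict adjacency."""
    phrase = "missile"
    for _ in range(n):
        sep = "-" if phrase.startswith("anti") else ""
        phrase = "anti" + sep + phrase + " missile"
    return phrase

for n in range(3):
    print(anti_phrase(n))
# missile
# antimissile missile
# anti-antimissile missile missile
```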
The complexity of such language formations, Berwick observes, “doesn’t occur in birdsong, and doesn’t occur anywhere else, as far as we can tell, in the rest of the animal kingdom.” Indeed, he adds, “As we find more evidence that other animals don’t seem to possess this kind of system, it bolsters our case for saying these two elements were brought together in humans.”
An inherent capacity
To be sure, the researchers acknowledge, their hypothesis is a work in progress. After all, Charles Darwin and others have explored the connection between birdsong and human language. Now, Miyagawa says, the researchers think that “the relationship is between birdsong and the expression system,” with the lexical component of language having come from primates. Indeed, as the paper notes, the most recent common ancestor between birds and humans appears to have existed about 300 million years ago, so there would almost have to be an indirect connection via older primates — even possibly the silvery gibbon.
As Berwick notes, researchers are still exploring how these two modes could have merged in humans, but the general concept of new functions developing from existing building blocks is a familiar one in evolution.
“You have these two pieces,” Berwick says. “You put them together and something novel emerges. We can’t go back with a time machine and see what happened, but we think that’s the basic story we’re seeing with language.”
Andrea Moro, a linguist at the Institute for Advanced Study IUSS, in Pavia, Italy, says the current paper provides a useful way of thinking about how human language may be a synthesis of other communication forms.
“It must be the case that this integration or synthesis [developed] from some evolutionary and functional processes that are still beyond our understanding,” says Moro, who edited the article. “The authors of the paper, though, provide an extremely interesting clue at the formal level.”
Indeed, Moro adds, he thinks the researchers are “essentially correct” about the existence of finite elements in human language, adding, “Interestingly, many of them involve the morphological level — that is, the level of composition of words from morphemes, rather than the sentence level.”
Miyagawa acknowledges that research and discussion in the field will continue, but says he hopes colleagues will engage with the integration hypothesis.
“It’s worthy of being considered, and then potentially challenged,” Miyagawa says.

From contemporary syntax to human language’s deep origins

On the island of Java, in Indonesia, the silvery gibbon, an endangered primate, lives in the rainforests. In a behavior that’s unusual for a primate, the silvery gibbon sings: It can vocalize long, complicated songs, using 14 different note types, that signal territory and send messages to potential mates and family.

Far from being a mere curiosity, the silvery gibbon may hold clues to the development of language in humans. In a newly published paper, two MIT professors assert that by re-examining contemporary human language, we can see indications of how human communication could have evolved from the systems underlying the older communication modes of birds and other primates.

From birds, the researchers say, we derived the melodic part of our language, and from other primates, the pragmatic, content-carrying parts of speech. Sometime within the last 100,000 years, those capacities fused into roughly the form of human language that we know today.

But how? Other animals, it appears, have finite sets of things they can express; human language is unique in allowing for an infinite set of new meanings. What allowed unbounded human language to evolve from bounded language systems?

“How did human language arise? It’s far enough in the past that we can’t just go back and figure it out directly,” says linguist Shigeru Miyagawa, the Kochi-Manjiro Professor of Japanese Language and Culture at MIT. “The best we can do is come up with a theory that is broadly compatible with what we know about human language and other similar systems in nature.”

Specifically, Miyagawa and his co-authors think that some apparently infinite qualities of modern human language, when reanalyzed, actually display the finite qualities of languages of other animals — meaning that human communication is more similar to that of other animals than we generally realized.

“Yes, human language is unique, but if you take it apart in the right way, the two parts we identify are in fact of a finite state,” Miyagawa says. “Those two components have antecedents in the animal world. According to our hypothesis, they came together uniquely in human language.”

Introducing the ‘integration hypothesis’

The current paper, “The Integration Hypothesis of Human Language Evolution and the Nature of Contemporary Languages,” is published this week in Frontiers in Psychology. The authors are Miyagawa; Robert Berwick, a professor of computational linguistics and computer science and engineering in MIT’s Laboratory for Information and Decision Systems; and Shiro Ojima and Kazuo Okanoya, scholars at the University of Tokyo.

The paper’s conclusions build on past work by Miyagawa, which holds that human language consists of two distinct layers: the expressive layer, which relates to the mutable structure of sentences, and the lexical layer, where the core content of a sentence resides. That idea, in turn, is based on previous work by linguistics scholars including Noam Chomsky, Kenneth Hale, and Samuel Jay Keyser.

The expressive layer and lexical layer have antecedents, the researchers believe, in the languages of birds and other mammals, respectively. For instance, in another paper published last year, Miyagawa, Berwick, and Okanoya presented a broader case for the connection between the expressive layer of human language and birdsong, including similarities in melody and range of beat patterns.

Birds, however, have a limited number of melodies they can sing or recombine, and nonhuman primates have a limited number of sounds they make with particular meanings. That would seem to present a challenge to the idea that human language could have derived from those modes of communication, given the seemingly infinite expression possibilities of humans.

But the researchers think certain parts of human language actually reveal finite-state operations that may be linked to our ancestral past. Consider a linguistic phenomenon known as “discontiguous word formation,” which involve sequences formed using the prefix “anti,” such as “antimissile missile,” or “anti-antimissile missile missile,” and so on. Some linguists have argued that this kind of construction reveals the infinite nature of human language, since the term “antimissile” can continually be embedded in the middle of the phrase.

However, as the researchers state in the new paper, “This is not the correct analysis.” The word “antimissile” is actually a modifier, meaning that as the phrase grows larger, “each successive expansion forms via strict adjacency.” That means the construction consists of discrete units of language. In this case and others, Miyagawa says, humans use “finite-state” components to build out their communications.

The complexity of such language formations, Berwick observes, “doesn’t occur in birdsong, and doesn’t occur anywhere else, as far as we can tell, in the rest of the animal kingdom.” Indeed, he adds, “As we find more evidence that other animals don’t seem to possess this kind of system, it bolsters our case for saying these two elements were brought together in humans.”

An inherent capacity

To be sure, the researchers acknowledge, their hypothesis is a work in progress. After all, Charles Darwin and others have explored the connection between birdsong and human language. Now, Miyagawa says, the researchers think that “the relationship is between birdsong and the expression system,” with the lexical component of language having come from primates. Indeed, as the paper notes, the most recent common ancestor between birds and humans appears to have existed about 300 million years ago, so there would almost have to be an indirect connection via older primates — even possibly the silvery gibbon.

As Berwick notes, researchers are still exploring how these two modes could have merged in humans, but the general concept of new functions developing from existing building blocks is a familiar one in evolution.

“You have these two pieces,” Berwick says. “You put them together and something novel emerges. We can’t go back with a time machine and see what happened, but we think that’s the basic story we’re seeing with language.”

Andrea Moro, a linguist at the Institute for Advanced Study (IUSS) in Pavia, Italy, says the current paper provides a useful way of thinking about how human language may be a synthesis of other communication forms.

“It must be the case that this integration or synthesis [developed] from some evolutionary and functional processes that are still beyond our understanding,” says Moro, who edited the article. “The authors of the paper, though, provide an extremely interesting clue at the formal level.”

Indeed, Moro adds, he thinks the researchers are “essentially correct” about the existence of finite elements in human language, adding, “Interestingly, many of them involve the morphological level — that is, the level of composition of words from morphemes, rather than the sentence level.”

Miyagawa acknowledges that research and discussion in the field will continue, but says he hopes colleagues will engage with the integration hypothesis.

“It’s worthy of being considered, and then potentially challenged,” Miyagawa says.

Filed under language birdsong evolution linguistics psychology neuroscience science

209 notes

Real or Fake? Research Shows Brain Uses Multiple Clues for Facial Recognition
Faces fascinate. Babies love them. We look for familiar or friendly ones in a crowd. And video game developers and movie animators strive to create faces that look real rather than fake. Determining how our brains decide what makes a face “human” and not artificial is a question Dr. Benjamin Balas of North Dakota State University, Fargo, and of the Center for Visual and Cognitive Neuroscience, studies in his lab. New research by Balas and NDSU graduate Christopher Tonsager, published online in the London-based journal Perception, shows that it takes more than eyes to make a face look human.
Researchers study the brain to learn how its specialized circuits process information in seconds to distinguish whether faces are real or fake. Balas and Tonsager note that people interact with artificial faces and characters in video games, watch them in movies, and see artificial faces used more widely as social agents in other settings. “Whether or not a face looks real determines a lot of things,” said Balas, assistant professor of psychology. “Can it have emotions? Can it have plans and ideas? We wanted to know what information you use to decide if a face is real or artificial, since that first step determines a number of judgments that follow.”
Results of the study show that people combine information across many parts of the face to make decisions about how “alive” it is, and that the appearances of these regions interact with each other. Previous research suggests that eyes are especially important for facial recognition. The NDSU study found, however, that when you’re deciding if a face is real or artificial, the eyes and the skin both matter to about the same degree.
Balas and Tonsager, then an undergraduate researcher in psychology, recruited 45 study participants, who were evaluated while viewing altered facial images. Tonsager cropped images of real faces so that only the face and neck showed, without any hair. A program known as FaceGen Modeller was used to transform the images into 3D computer-generated models of faces. Photos were then digitally manipulated into negative images. In two experiments, transformations of real and artificial faces were used to determine whether contrast negation affected participants’ ability to judge whether a face was real or artificial, and whether the eyes make a disproportionate contribution to animacy discrimination relative to the rest of the face.
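Contrast negation, the image manipulation described above, simply inverts every pixel intensity. A minimal sketch (illustrative only; the function name and the toy grid are our own, and the actual stimuli were full photographs rather than tiny arrays):

```python
def negate_contrast(image, max_value=255):
    """Contrast-negate a grayscale image (rows of pixel intensities):
    each pixel p becomes max_value - p, so light regions become dark
    and vice versa, while spatial structure is preserved."""
    return [[max_value - p for p in row] for row in image]

# Toy 2x3 "image": 0 = black, 255 = white
face_patch = [[0, 128, 255],
              [64, 200, 32]]
print(negate_contrast(face_patch))  # [[255, 127, 0], [191, 55, 223]]
```

Because negation preserves edges and shapes while scrambling pigmentation cues, it is a standard way to test how much a judgment depends on surface properties such as skin tone.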
“We assumed that the eyes were the key in distinguishing real vs. computer generated, but to our surprise, the results were not significant enough for us to conclude this,” said Tonsager. “However, we did find that when the skin tone is negated, it was more difficult for our participants to determine if it was a real or artificial face. The research leads us to conclude that the entire ‘eye region’ might play a substantial role in the distinction between real or artificial.”
“Beyond telling us more about the distinction your brain makes between a face and a non-face, our results are also relevant to anybody who wants to develop life-like computer graphics,” explained Balas. “Developing artificial faces that look real is a growing industry, and we know that artificial faces that aren’t quite right can look downright creepy. Our work, both in the current paper and ongoing studies in the lab, has the potential to inform how designers create new and better artificial faces for a range of applications.”
Balas and Tonsager also presented their research findings at the Vision Sciences Society 13th Annual Meeting, May 16-21, in St. Petersburg, Florida. http://www.visionsciences.org/meeting.html

Filed under facial recognition artificial face face perception visual perception psychology neuroscience science
