Neuroscience

Articles and news from the latest research reports.

Posts tagged psychology

405 notes

Our brains judge a face’s trustworthiness - Even when we can’t see it
Our brains are able to judge the trustworthiness of a face even when we cannot consciously see it, a team of scientists has found. Their findings, which appear in the Journal of Neuroscience, shed new light on how we form snap judgments of others.
“Our findings suggest that the brain automatically responds to a face’s trustworthiness before it is even consciously perceived,” explains Jonathan Freeman, an assistant professor in New York University’s Department of Psychology and the study’s senior author.
“The results are consistent with an extensive body of research suggesting that we form spontaneous judgments of other people that can be largely outside awareness,” adds Freeman, who conducted the study as a faculty member at Dartmouth College.
The study’s other authors included Ryan Stolier, an NYU doctoral candidate, Zachary Ingbretsen, a research scientist who previously worked with Freeman and is now at Harvard University, and Eric Hehman, a post-doctoral researcher at NYU.
The researchers focused on the workings of the brain’s amygdala, a structure that is important for humans’ social and emotional behavior. Previous studies have shown this structure to be active in judging the trustworthiness of faces. However, it had not been known if the amygdala is capable of responding to a complex social signal like a face’s trustworthiness without that signal reaching perceptual awareness.
To gauge this part of the brain’s role in making such assessments, the study’s authors conducted a pair of experiments in which they monitored the activity of subjects’ amygdala while the subjects were exposed to a series of facial images.
These images included both standardized photographs of actual strangers’ faces as well as artificially generated faces whose trustworthiness cues could be manipulated while all other facial cues were controlled. The artificially generated faces were computer synthesized based on previous research showing that cues such as higher inner eyebrows and pronounced cheekbones are seen as trustworthy and lower inner eyebrows and shallower cheekbones are seen as untrustworthy.
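The cue manipulation described above can be sketched as a simple scoring function. The [-1, 1] cue scale and equal weights below are illustrative assumptions, not the researchers' actual face-synthesis model:

```python
# Illustrative sketch: mapping two facial cues to a perceived-trust score.
# Cue scale and equal weights are assumptions for demonstration only.
def trust_score(inner_brow_height, cheekbone_prominence):
    """Both cues range over [-1, 1]; higher values read as more trustworthy."""
    return 0.5 * inner_brow_height + 0.5 * cheekbone_prominence

print(trust_score(1.0, 1.0))    # trustworthy-cued face
print(trust_score(-1.0, -1.0))  # untrustworthy-cued face
```

Holding every other facial feature fixed while varying only these two cues is what lets the experiment isolate the trustworthiness signal.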
Prior to the start of these experiments, a separate group of subjects examined all the real and computer-generated faces and rated how trustworthy or untrustworthy they appeared. As previous studies have shown, subjects strongly agreed on the level of trustworthiness conveyed by each given face.
In the experiments, a new set of subjects viewed these same faces inside a brain scanner, but were exposed to the faces very briefly—for only a matter of milliseconds. This rapid exposure, together with another feature known as “backward masking,” prevented subjects from consciously seeing the faces. Backward masking works by presenting subjects with an irrelevant “mask” image that immediately follows an extremely brief exposure to a face, which is thought to terminate the brain’s ability to further process the face and prevent it from reaching awareness. In the first experiment, the researchers examined amygdala activity in response to three levels of a face’s trustworthiness: low, medium, and high. In the second experiment, they assessed amygdala activity in response to a fully continuous spectrum of trustworthiness.
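The trial structure described above can be sketched as a simple event timeline. The specific durations here are hypothetical, plausible values for a masking paradigm; the article does not report the exact timings used:

```python
# Sketch of a backward-masking trial timeline. Durations are hypothetical
# illustrations (the study's exact values are not given in this article).
FACE_MS = 33    # assumed brief face exposure (~2 frames at 60 Hz)
MASK_MS = 167   # assumed mask duration that immediately follows

def trial_timeline(face_ms=FACE_MS, mask_ms=MASK_MS):
    """Return the sequence of (stimulus, duration_ms) events in one trial."""
    return [
        ("fixation", 500),    # fixation cross before the stimulus
        ("face", face_ms),    # target face, too brief to reach awareness
        ("mask", mask_ms),    # irrelevant mask interrupts further processing
    ]

for stim, dur in trial_timeline():
    print(f"{stim:>8}: {dur} ms")
```

The key design point is that the mask follows the face with no gap, which is what is thought to cut off conscious processing.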
Across the two experiments, the researchers found that specific regions inside the amygdala exhibited activity tracking how untrustworthy a face appeared, and other regions inside the amygdala exhibited activity tracking the overall strength of the trustworthiness signal (whether untrustworthy or trustworthy)—even though subjects could not consciously see any of the faces.
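The two response profiles can be illustrated with toy functions; the [-1, 1] trust scale and the exact functional forms are assumptions for demonstration, not the measured amygdala responses:

```python
# Toy illustration of the two amygdala response profiles described above.
# trust runs from -1 (very untrustworthy) to +1 (very trustworthy); the
# scale and functional forms are assumptions, not measured data.
def untrustworthiness_tracking(trust):
    """Activity rises as a face looks less trustworthy."""
    return -trust

def signal_strength_tracking(trust):
    """Activity rises with the strength of the signal in either direction."""
    return abs(trust)

# A very trustworthy and a very untrustworthy face drive the strength
# profile equally, but only the untrustworthy face drives the first profile.
```

The contrast between the two profiles is what lets the researchers distinguish regions that encode direction (how untrustworthy) from regions that encode magnitude (how strong the signal is).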
“These findings provide evidence that the amygdala’s processing of social cues in the absence of awareness may be more extensive than previously understood,” observes Freeman. “The amygdala is able to assess how trustworthy another person’s face appears without it being consciously perceived.”

Filed under amygdala trustworthiness face perception brain activity psychology neuroscience science

95 notes

Declining intelligence in old age linked to visual processing

Researchers have uncovered one of the basic processes that may help to explain why some people’s thinking skills decline in old age. Age-related declines in intelligence are strongly related to declines on a very simple task of visual perception speed, the researchers report in the Cell Press journal Current Biology on August 4.

The evidence comes from experiments in which researchers showed 600 healthy older people very brief flashes of one of two shapes on a screen and measured the time it took each of them to reliably tell one from the other. Participants repeated the test at ages 70, 73, and 76. The longitudinal study is among the first to test the hypothesis that changes in this measure, known as “inspection time,” are related to changes in intelligence in old age.
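The idea of an inspection-time measure can be sketched as finding the shortest exposure at which discrimination becomes reliable. The accuracy curve and the 87.5% criterion below are assumptions for illustration, not the study's actual procedure:

```python
# Toy model of an inspection-time measurement. The accuracy curve and
# criterion are illustrative assumptions, not the study's procedure.
def p_correct(exposure_ms, threshold_ms=50):
    """Assumed accuracy curve: chance (0.5) at 0 ms, saturating toward 1.0."""
    return 1.0 - 0.5 * threshold_ms / (threshold_ms + exposure_ms)

def inspection_time(threshold_ms=50, criterion=0.875):
    """Shortest exposure (ms) at which accuracy meets the criterion."""
    ms = 1
    while p_correct(ms, threshold_ms) < criterion:
        ms += 1
    return ms

# A slower observer (larger internal threshold) needs a longer exposure:
print(inspection_time(threshold_ms=50))   # 150
print(inspection_time(threshold_ms=80))   # 240
```

Tracking how this minimum exposure grows across test waves is the longitudinal decline the researchers relate to intelligence.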

"The results suggest that the brain’s ability to make correct decisions based on brief visual impressions limits the efficiency of more complex mental functions," says Stuart Ritchie of the University of Edinburgh. "As this basic ability declines with age, so too does intelligence. The typical person who has better-preserved complex thinking skills in older age tends to be someone who can accumulate information quickly from a fleeting glance."

Previous studies had shown that smarter people, as measured by standard IQ tests, tend to be better at discerning the difference between two briefly presented shapes, the researchers explain. But before now no one had looked to see how those two measures might change over time as people grow older. The findings were rather unexpected.

"What surprised us was the strength of the relation between the declines," Ritchie says. "Because inspection time and the intelligence tests are so very different from one another, we wouldn’t have expected their declines to be so strongly connected."

The results provide evidence that the slowing of simple, visual decision-making processes might be part of what underlies declines in the complex decision making that we recognize as general intelligence. The results might also find practical use given the simplicity of the inspection time measure, Ritchie says, noting that the test can be taken very simply on a computer and has been used with children, adults, and even patients with dementia or other medical disorders.

"Since the declines are so strongly related, it might be easier under some circumstances to use inspection time to chart a participant’s cognitive decline than it would be to sit them down and give them a full, complicated battery of IQ tests," he says.
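The analysis idea behind the finding can be sketched as correlating each participant's inspection-time change with their IQ change. All numbers below are invented toy data purely to show the computation:

```python
# Sketch of correlating two change scores across participants. The data
# are invented toy numbers, not the study's measurements.
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

it_change = [10, 25, 5, 40, 15]   # inspection-time change, ms (bigger = slower)
iq_change = [-2, -6, -1, -9, -4]  # IQ change, points (more negative = bigger drop)
r = pearson(it_change, iq_change)
print(f"r = {r:.2f}")  # strongly negative: slower inspection, larger IQ drop
```

A strong correlation between the two declines is what would justify using the quick inspection-time test as a stand-in for a full IQ battery.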

(Source: eurekalert.org)

Filed under visual perception intelligence thinking aging cognition psychology neuroscience science

103 notes

(Image caption: A schematic of the interactions that occur between the saccade and reach brain systems when deciding where to look and reach. Credit: Bijan Pesaran, New York University)
Complexity of eye-hand coordination
People not only use their eyes to see, but also to move. It takes just a fraction of a second to execute the loop that travels from the brain to the eyes, and then to the hands and arms. Bijan Pesaran is trying to figure out what occurs in the brain during this process.
"Eye-hand coordination is the result of a complex interplay between two systems of the brain, but there are many regions where this interaction takes place," says Pesaran, an associate professor of neural science at New York University. "One of the things about the current state of knowledge is that it is focused on the different pieces of the brain and how each works individually. Relatively little work has been done to link how they work together at the cellular level."
The thrust of his research involves studying how neurons in these parts of the brain communicate with one another.
"The cerebral cortex contains a mosaic of brain areas that are connected to form distributed networks," says the National Science Foundation (NSF)-funded scientist. "In the frontal and parietal cortex, these networks are specialized for movements such as saccadic (voluntary) eye movements and reaches, that is, hand and arm movements. Before each movement we decide to make, these areas contain specific patterns of neural activity which can be used to predict what we will do."
A more sophisticated understanding of the brain’s role in eye-hand coordination can be an important model for discovering how brain systems interact to carry out cognitive processes in general, he says. Such insights could lead to new neural technologies that translate thoughts into actions, for example, to control a robotic arm or prompt speech.
"There is a whole new set of technologies called neural prostheses," Pesaran says. "In the future, there could be devices in the brain that will help people remember, to think more clearly, and to help them move."
Using eye movements to prompt hand and arm movements involves building a spatial representation, “which is improved by moving our eyes,” he says. “The command that is sent to the eyes moves the eyes, which effectively measure space when they move, and that is used to improve the accuracy of the reach. We move our eyes to improve our movement, not just to see better.”
He often describes the behavior of high-level ping pong players to explain how it works.
"You keep your eye on the ball so you know where it is, so you can hit it," he says. "But right up until the minute you hit the ball, something important is happening, which is that your brain is sending a command to your arm to hit the ball. But the visual signals are delayed. At the time you hit the ball, the vision of the ball won’t enter your brain for another fraction of a second, so there is no point in looking at the ball. You can look all you want, but your arm already has moved.
"When ping pong players are playing at a high level, they look at the ball up to the point where they hit it. As soon as the paddle makes contact with the ball, you can see their eyes and head turn to now look at their opponent. They think they are looking at their opponent when they are hitting the ball, but they are looking at the ball. Their eyes are tracking the ball, even though they are aware of their opponent.
"This helps the brain keep a very high resolution of space to make the stroke more accurate," he continues. "It’s not about seeing the ball, because by then it’s too late. It’s about moving the eyes with the ball so that the stroke is more accurate. And the brain orchestrates this complicated pattern of behavior."
Visual signals always are delayed. They enter the brain, are converted into a movement, and then leave the brain for the arm muscles. “It’s a loop that takes about 200 milliseconds, about one-fifth of a second, and in that time the ball has moved,” he says.
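A quick worked example makes the consequence of that 200 ms loop concrete. The ball speed here is an assumed, plausible rally speed for illustration:

```python
# How far does the ball travel during the ~200 ms visual-to-motor loop?
# The ball speed is an assumed, plausible rally value for illustration.
LOOP_S = 0.2           # ~200 ms sensorimotor loop, in seconds
BALL_SPEED_MPS = 10.0  # assumed rally speed, roughly 36 km/h

distance_m = BALL_SPEED_MPS * LOOP_S  # metres travelled during one loop
print(f"The ball travels {distance_m:.1f} m before vision can update the stroke")
```

Two metres is most of a table's length, which is why the stroke must be committed before the latest visual information can arrive.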
Pesaran is conducting his research under an NSF Faculty Early Career Development (CAREER) award, which he received in 2010. The award supports junior faculty who exemplify the role of teacher-scholars through outstanding research, excellent education and the integration of education and research within the context of the mission of their organization.
To test his hypothesis that two regions in the brain (the parietal reach region and the parietal eye field, both in the parietal cortex) must talk to each other to prompt movement, Pesaran and his team are recording the activity of neurons, brain cells that communicate by sending electrical signals called “spikes” to one another. They do so by placing micro-electrodes into the brains of animals that look and reach much as humans do, and studying the correlations and patterns in those signals.
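One simple way to look for coordinated activity between two recorded areas is to bin each area's spikes over time and correlate the counts. The spike counts below are toy data; real analyses in this field use more sophisticated measures:

```python
# Sketch of correlating binned spike counts from two simultaneously
# recorded areas. Spike counts are toy data for illustration only.
def binned_correlation(counts_a, counts_b):
    """Pearson correlation between two equal-length binned spike-count series."""
    n = len(counts_a)
    ma = sum(counts_a) / n
    mb = sum(counts_b) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(counts_a, counts_b))
    va = sum((a - ma) ** 2 for a in counts_a)
    vb = sum((b - mb) ** 2 for b in counts_b)
    return cov / (va * vb) ** 0.5

area1 = [3, 5, 2, 8, 6, 1]  # toy spike counts per 50 ms bin, area A
area2 = [2, 6, 1, 7, 5, 2]  # simultaneously recorded counts, area B
print(f"correlation: {binned_correlation(area1, area2):.2f}")
```

A high correlation that appears specifically when the task requires coordinating eye and arm movements is the kind of signature the experiments are designed to detect.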
"We think we can measure these signals when they are leaving one area, and coming into another," he says. "How does this show that this reflects communication between those two areas? Because something happens, something changes. We set up these movements in a particular way that requires communication between the eye and the arm centers, and we then made measurements in the brain from those centers. Then we linked the changes in the activity between the two areas to the changes in how the eyes and arm move."
As part of the grant’s educational component, Pesaran is trying to show youngsters how far neuroscience has come, and encourage them to learn about it. He and his colleagues are working with middle school children in Brooklyn, and have presented demonstrations at the American Museum of Natural History about the field of brain science.
"We go into schools and teach children about what we know about the brain," he says. "We had a brain computer interface, where they had the chance to control the cursor on the screen with their minds. We placed an EEG sensor on their heads, which measures brain activity. When they concentrate, it changes the position of the ball, and moves it up or down."
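The classroom demo can be sketched as a simple threshold rule: a single "concentration" reading from the EEG sensor moves the cursor up or down. The measure, threshold, and readings below are invented for illustration:

```python
# Minimal sketch of the brain-computer-interface classroom demo.
# The concentration measure, threshold, and readings are invented.
def cursor_step(position, concentration, threshold=0.5, step=1):
    """Move up when concentration exceeds the threshold, otherwise down."""
    return position + step if concentration > threshold else position - step

pos = 0
for level in [0.8, 0.9, 0.3, 0.7]:  # simulated concentration readings
    pos = cursor_step(pos, level)
print(pos)  # 2: three steps up, one step down
```

Real BCI demos of this kind typically derive the concentration measure from the power in particular EEG frequency bands, but the control rule is essentially this simple.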
School children typically are unaware of neuroscience as an emerging field “that involves medicine, biology, engineering, a whole range of disciplines that come together,” he says. “Increasing their sophistication and tools in this discipline early will be a hallmark of the next generation of brain scientists.”

Filed under eye-hand coordination eye movements parietal cortex prosthetics neural activity psychology neuroscience science

218 notes

A little video gaming ‘produces well-adjusted children’
Playing video games for a short period each day could have a small but positive impact on child development, a study by Oxford University suggests.
Scientists found young people who spent less than an hour a day engaged in video games were better adjusted than those who did not play at all.
But children who used consoles for more than three hours reported lower satisfaction with their lives overall.
The research is published in the journal Pediatrics.

Filed under video games children psychosocial adjustment social interaction psychology neuroscience science

189 notes

(Image caption: Brain image showing activity in the amygdala, the area of the brain involved with emotion. The amygdala was more active during the graphic scenarios only when the harm being described was intentional. Credit: Marois Lab / Vanderbilt)
Fault trumps gruesome evidence when it comes to meting out punishment
Issues of crime and punishment, vengeance and justice date back to the dawn of human history, but it is only in the last few years that scientists have begun exploring the basic nature of the complex neural processes in the brain that underlie these fundamental behaviors.
Now a new brain imaging study – published online Aug. 3 by the journal Nature Neuroscience – has identified the brain mechanisms that underlie our judgment of how severely a person who has harmed another should be punished. Specifically, the study determined how the area of the brain that determines whether such an act was intentional or unintentional trumps the emotional urge to punish the person, however gruesome the harm may be.
“A fundamental aspect of the human experience is the desire to punish harmful acts, even when the victim is a perfect stranger. Equally important, however, is our ability to put the brakes on this impulse when we realize the harm was done unintentionally,” said Rene Marois, the Vanderbilt University professor of psychology who headed the research team. “This study helps us begin to elucidate the neural circuitry that permits this type of regulation.”
The study
In the experiment, the brains of 30 volunteers (20 male, 10 female, average age 23 years) were imaged using functional MRI (fMRI) while they read a series of brief scenarios that described how the actions of a protagonist named John brought harm to either Steve or Mary. The scenarios depicted four different levels of harm: death, maiming, physical assault and property damage. In half of them, the harm was clearly identified as intentional and in half it was clearly identified as unintentional.
Two versions of each scenario were created: one with a factual description of the harm and the other with a graphic description. For example, in a mountain climbing scenario where John cuts Steve’s rope, the factual version states, “Steve falls 100 feet to the ground below. Steve experiences significant bodily harm from the fall and he dies from his injuries shortly after impact.” And the graphic version reads, “Steve plummets to the rocks below. Nearly every bone in his body is broken upon impact. Steve’s screams are muffled by thick, foamy blood flowing from his mouth as he bleeds to death.”
After reading each scenario, the participants were asked to rate how much punishment John deserved on a scale from zero (no punishment) to nine (most severe punishment the subject endorsed).
Analysis of the responses
When the responses were analyzed, the researchers found that the manner in which the harmful consequences of an action are described significantly influences the level of punishment that people consider appropriate: When the harm was described in a graphic or lurid fashion then people set the punishment level higher than when it was described matter-of-factly. However, this higher punishment level only applied when the participants considered the resulting harm to be intentional. When they considered it to be unintentional, the way it was described didn’t have any effect.
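The interaction pattern can be illustrated with condition means. The numbers below are invented to mirror the reported pattern, not the study's data:

```python
# Toy numbers mirroring the reported pattern (not the study's data):
# graphic language raises mean punishment only when harm was intentional.
ratings = {
    ("intentional", "graphic"):   [8, 9, 8, 7],
    ("intentional", "factual"):   [6, 7, 6, 7],
    ("unintentional", "graphic"): [2, 3, 2, 2],
    ("unintentional", "factual"): [2, 2, 3, 2],
}

def mean(xs):
    return sum(xs) / len(xs)

boost_intentional = (mean(ratings[("intentional", "graphic")])
                     - mean(ratings[("intentional", "factual")]))
boost_unintentional = (mean(ratings[("unintentional", "graphic")])
                       - mean(ratings[("unintentional", "factual")]))
print(boost_intentional, boost_unintentional)  # 1.5 0.0
```

A graphic-language boost that appears only in the intentional condition is exactly the intent-by-language interaction the researchers describe.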
“What we’ve shown is that manipulations of gruesome language leads to harsher punishment, but only in cases where the harm was intentional. Language had no effect when the harm was caused unintentionally,” summarized Michael Treadway, a post-doctoral fellow at Harvard Medical School and lead author of the study.
According to the researchers, the fact that the mere presence of graphic language could cause participants to ratchet up the severity of the punishments suggests that photographs, video and other graphic materials sampled from a crime scene are likely to have an even stronger impact on an individual’s desire to punish.
“Although the underlying scientific basis of this effect wasn’t known until now, the legal system recognized it a long time ago and made provisions to counteract it,” said Treadway. “Judges are permitted to exclude relevant evidence from a trial if they decide that its probative value is substantially outweighed by its prejudicial nature.”
Underlying neuroanatomy
The fMRI scans revealed the areas of the brain that are involved in this complex process. They found that the amygdala, an almond-shaped set of neurons that plays a key role in processing emotions, responded most strongly to the graphic language condition. Like the punishment ratings themselves, however, this effect in the amygdala was only present when harm was done intentionally. Moreover, in this situation the researchers found that the amygdala showed stronger communication with the dorsolateral prefrontal cortex (dlPFC), an area that is critical for punishment decision-making. When the harm was done unintentionally, however, a different regulatory network – one involved in decoding the mental states of other people – became more active and appeared to suppress amygdala responses to the graphic language, thereby preventing the amygdala from affecting decision-making areas in dlPFC.
“This is basically a reassuring finding,” said Marois. “It indicates that, when the harm is not intended, we don’t simply shunt aside the emotional impulse to punish. Instead, it appears that the brain down-regulates the impulse so we don’t feel it as strongly. That is preferable because the urge to punish is less likely to resurface at a future date.”

(Image caption: Brain image showing activity in the amygdala, the area of the brain involved with emotion. The amydgala was more active during the graphic scenarios only when the harm being described was intentional. Credit: Marois Lab / Vanderbilt)

Fault trumps gruesome evidence when it comes to meting out punishment

Issues of crime and punishment, vengeance and justice date back to the dawn of human history, but it is only in the last few years that scientists have begun exploring the basic nature of the complex neural processes in the brain that underlie these fundamental behaviors.

Now a new brain imaging study – published online Aug. 3 by the journal Nature Neurosciencehas identified the brain mechanisms that underlie our judgment of how severely a person who has harmed another should be punished. Specifically, the study determined how the area of the brain that determines whether such an act was intentional or unintentional trumps the emotional urge to punish the person, however gruesome the harm may be.

A fundamental aspect of the human experience is the desire to punish harmful acts, even when the victim is a perfect stranger. Equally important, however, is our ability to put the brakes on this impulse when we realize the harm was done unintentionally,” said Rene Marois, the Vanderbilt University professor of psychology who headed the research team. “This study helps us begin to elucidate the neural circuitry that permits this type of regulation.”

The study

In the experiment, the brains of 30 volunteers (20 male, 10 female, average age 23 years) were imaged using functional MRI (fMRI) while they read a series of brief scenarios that described how the actions of a protagonist named John brought harm to either Steve or Mary. The scenarios depicted four different levels of harm: death, maiming, physical assault and property damage. In half of them, the harm was clearly identified as intentional and in half it was clearly identified as unintentional.

Two versions of each scenario were created: one with a factual description of the harm and the other with a graphic description. For example, in a mountain climbing scenario where John cuts Steve’s rope, the factual version states, “Steve falls 100 feet to the ground below. Steve experiences significant bodily harm from the fall and he dies from his injuries shortly after impact.” And the graphic version reads, “Steve plummets to the rocks below. Nearly every bone in his body is broken upon impact. Steve’s screams are muffled by thick, foamy blood flowing from his mouth as he bleeds to death.”

After reading each scenario, the participants were asked to rate how much punishment John deserved on a scale from zero (no punishment) to nine (the most severe punishment the subject endorsed).

Analysis of the responses

When the responses were analyzed, the researchers found that the manner in which the harmful consequences of an action are described significantly influences the level of punishment that people consider appropriate: when the harm was described in a graphic or lurid fashion, people set the punishment level higher than when it was described matter-of-factly. However, this higher punishment level only applied when the participants considered the resulting harm to be intentional. When they considered it to be unintentional, the way it was described didn’t have any effect.
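The interaction pattern described above – gruesome wording raising punishment ratings only when the harm was intentional – can be sketched with toy numbers. The ratings below are invented for illustration; they are not the study’s data:

```python
# Toy illustration of the 2 (intent) x 2 (language) punishment-rating
# pattern described in the article. All numbers are invented.
from statistics import mean

# (intent, language) -> hypothetical punishment ratings on the 0-9 scale
ratings = {
    ("intentional",   "factual"): [6, 5, 6, 7],
    ("intentional",   "graphic"): [8, 7, 8, 9],   # graphic wording raises ratings...
    ("unintentional", "factual"): [2, 1, 2, 2],
    ("unintentional", "graphic"): [2, 2, 1, 2],   # ...but only for intentional harm
}

cell_means = {cond: mean(vals) for cond, vals in ratings.items()}

# "Gruesomeness effect" = graphic mean minus factual mean, per intent level
effect = {
    intent: cell_means[(intent, "graphic")] - cell_means[(intent, "factual")]
    for intent in ("intentional", "unintentional")
}
print(effect)  # a clear effect for intentional harm, essentially none otherwise
```

In analysis-of-variance terms, this is an intent-by-language interaction: the language effect exists at one level of intent and vanishes at the other.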

“What we’ve shown is that manipulations of gruesome language lead to harsher punishment, but only in cases where the harm was intentional. Language had no effect when the harm was caused unintentionally,” summarized Michael Treadway, a post-doctoral fellow at Harvard Medical School and lead author of the study.

According to the researchers, the fact that the mere presence of graphic language could cause participants to ratchet up the severity of the punishments suggests that photographs, video and other graphic materials sampled from a crime scene are likely to have an even stronger impact on an individual’s desire to punish.

“Although the underlying scientific basis of this effect wasn’t known until now, the legal system recognized it a long time ago and made provisions to counteract it,” said Treadway. “Judges are permitted to exclude relevant evidence from a trial if they decide that its probative value is substantially outweighed by its prejudicial nature.”

Underlying neuroanatomy

The fMRI scans revealed the areas of the brain that are involved in this complex process. They found that the amygdala, an almond-shaped set of neurons that plays a key role in processing emotions, responded most strongly to the graphic language condition. Like the punishment ratings themselves, however, this effect in the amygdala was only present when harm was done intentionally. Moreover, in this situation the researchers found that the amygdala showed stronger communication with the dorsolateral prefrontal cortex (dlPFC), an area that is critical for punishment decision-making. When the harm was done unintentionally, however, a different regulatory network – one involved in decoding the mental states of other people – became more active and appeared to suppress amygdala responses to the graphic language, thereby preventing the amygdala from affecting decision-making areas in dlPFC.

“This is basically a reassuring finding,” said Marois. “It indicates that, when the harm is not intended, we don’t simply shunt aside the emotional impulse to punish. Instead, it appears that the brain down-regulates the impulse so we don’t feel it as strongly. That is preferable because the urge to punish is less likely to resurface at a future date.”

Filed under brain imaging amygdala prefrontal cortex punishment psychology neuroscience science

679 notes

Do we really only use 10% of our brain?

As the new film Lucy, starring Scarlett Johansson and Morgan Freeman, is set to be released in cinemas this week, I feel I should attempt to dispel the unfounded premise of the film – that we only use 10% of our brains. Let me state that there is no scientific evidence that supports this statement; it is simply a myth.
The concept behind the film is that through the administration of a new cognitive enhancing drug, our female lead character, Lucy, becomes able to harness powerful mental capabilities and enhanced physical abilities. These include telekinesis, mental time travel and being able to absorb information instantaneously. On this premise, the human brain should be essentially capable of these feats; we just fail to use our full capacity. So if we could unlock the “unused” 90% of the brain, we too could be geniuses with super powers?


Filed under 10% of brain brain function Lucy psychology neuroscience science

122 notes

Not too early for maths

Bad maths grades, poor participation in class, no interest in arithmetic. Preterm children often suffer from dyscalculia – at least according to some scientific studies. A misunderstanding, claims developmental psychologist Dr Julia Jäkel, who has been studying the performance of preterm children.

Thanks to modern medicine, the percentage of preterm survivors is constantly increasing. On the cognitive level, these children frequently have long-term problems such as poor arithmetic skills and difficulty concentrating. For a long time, research focused on high-risk children, born before 32 weeks gestational age or weighing less than 1,500 grams. More recent studies, however, show that this approach is too short-sighted.

Dr Julia Jäkel from the Department of Developmental Psychology has analysed cognitive abilities of children born between 23 and 41 weeks gestation. In doing so, she covered the entire spectrum, ranging from extremely preterm to healthy term born infants. For this purpose, she used data of the Bavarian Longitudinal Study, which has been following a birth cohort from the late 80s until today. “Having access to such a comprehensive long-term study is a dream come true for every developmental psychologist,” says the Bochum researcher. Over the course of the study, all children underwent a whole battery of tests that assessed their cognitive and educational abilities, and their parents were interviewed in depth.

The RUB researcher has so far mainly focused on data collected at preschool and early school age. For different test tasks, she assessed their cognitive workload, a criterion for the complexity of a given task. The data showed that preterm children had greater difficulties with tasks that demanded higher working memory resources. Moreover, results revealed that not only high-risk children had significant difficulties. On average, the more preterm a child had been born, the poorer were his or her abilities to solve complex tasks.

But what exactly is the nature of these difficulties? It has been frequently suggested that preterm children suffer from dyscalculia – a phenomenon that Julia Jäkel examined more closely. “Mathematical deficiencies, maths learning disorder, dyscalculia, innumeracy – these terms’ definitions vary slightly,” she explains, but there are no standardised, internationally consistent diagnostic criteria. In order to assess specific maths deficiencies, children in Germany are assessed with a number of tests. If their results fall below a certain cut-off value in maths while their cognitive skills (IQ) are in the normal range, they are diagnosed with “maths learning disorder” or “dyscalculia”.
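The discrepancy rule described above can be sketched as a small function. The cut-off values here are hypothetical placeholders, not the actual criteria used by the German tests:

```python
# Sketch of the discrepancy-based diagnostic rule described in the article.
# The cut-offs (maths score below 85, IQ of 85 or above) are hypothetical
# placeholders, not the real test criteria.

def diagnosed_with_dyscalculia(maths_score: float, iq: float,
                               maths_cutoff: float = 85.0,
                               iq_normal_min: float = 85.0) -> bool:
    """Diagnose only when maths is poor while general IQ is in the normal range."""
    return maths_score < maths_cutoff and iq >= iq_normal_min

# A term-born child with an isolated maths weakness can be diagnosed:
print(diagnosed_with_dyscalculia(maths_score=78, iq=102))   # True

# A preterm child with the same maths weakness but a general cognitive
# deficit falls through the criteria:
print(diagnosed_with_dyscalculia(maths_score=78, iq=80))    # False
```

The second case is exactly the gap the article describes: the IQ condition excludes many preterm children from a diagnosis even when their maths difficulties are just as severe.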

“The problem with preterm children, however, is that they often have general cognitive deficits,” Julia Jäkel points out. “According to current criteria, these children can’t be diagnosed.” Together with Dieter Wolke from the University of Warwick, UK, she compared different diagnostic criteria for dyscalculia in her analysis. The aim of the study was to identify specific maths deficiencies in preterm children that were independent of general cognitive impairments. The results were surprising: “There is no specific maths deficit in preterm children if their general IQ is factored in,” says the researcher.

This means that preterm children do not suffer from dyscalculia more often than term children. However, they often have maths difficulties and these may not be recognized. This is because the current criteria make it impossible to diagnose dyscalculia if a child also has general cognitive deficits. Thus, these children do not receive specific help in maths although they may be in urgent need. “We need reliable and consistent diagnostic criteria,” demands Julia Jäkel. “And we’ve got to find ways to actually deliver support in schools.”

Together with her British team, the psychologist compared the results of the Bavarian Longitudinal Study with “EPICure” data, a similar study that commenced in the UK in the 1990s, following a cohort of extremely preterm children. The researchers focused on mathematical and educational performance. British preterm children had cognitive and basic numerical skills similar to those of German preterm children. In terms of maths achievement, however, they showed significantly better results. “We explain this with the fact that, unlike in Germany, in the UK it has not been possible for children to delay school entry,” explains Julia Jäkel. “In addition, special schools are attended by only a small percentage of extremely disabled children. All other children are integrated into normal classes in regular schools and receive targeted support there.”

The developmental psychologist has already demonstrated that assistance at primary-school age can really make a difference. Parents who support their preterm children with sensitive scaffolding can compensate for the negative cognitive effects of preterm birth. It is helpful, for example, if parents give their children appropriate feedback on homework tasks and suggest potential solutions, rather than solving the tasks for the child. However, Julia Jäkel believes that a lot of research is yet to be done as far as intervention is concerned: “A large percentage of parents is very dedicated and has resources to help their children,” she says. “But research has not yet produced anything that would ensure successful results in the long-term.” Together with colleagues from the university hospital in Essen, the RUB researcher plans to investigate the benefits of computer-aided working memory training – already applied successfully at an international level – for preterm children’s school success.

It would also be helpful if findings from related disciplines, such as developmental psychology, educational research, and neonatal medicine were better integrated. This is, for example, because neonatal medical treatment can significantly affect later cognitive performance. Together with her interdisciplinary team, Julia Jäkel used a comprehensive model to analyse to what extent different neonatal medical indicators affect cognitive development at age 20 months, attention abilities at age six, and maths abilities at age eight years. In her analyses, she factored in child sex and socio-economic status.

Results showed that neonatal medical variables, e.g., the duration of mechanical ventilation, predicted cognitive abilities at age 20 months. These neonatal variables, together with cognitive abilities at 20 months, predicted attention regulation at age six years. And all of these precursors, in turn, affected long-term general maths abilities.

Subsequently, Julia Jäkel analysed the data once again from a different perspective, in order to predict specific maths skills that were independent of the child’s IQ. In that model, only two variables had a direct impact: the duration of mechanical ventilation and hospitalisation after birth. In the 1980s, when the children participating in the Bavarian Longitudinal Study were born, German doctors often used invasive ventilation methods. Today, less invasive methods are available, but to what extent they may affect long-term cognitive performance has not yet been investigated.

“Both too high and too low oxygen concentrations are harmful to brain development,” explains Julia Jäkel. “The neonatologist in charge is faced with the great challenge of determining the right dose for each infant, depending on individually changing situations.” This is why it is so important to integrate psychological models with neonatal intensive care research. The joint objective is to offer preterm children the chance of a successful school career, high quality of life and social participation.

Filed under dyscalculia mathematics cognitive development brain development children psychology neuroscience science

130 notes

Experiences at every stage of life contribute to cognitive abilities in old age

Early life experiences, such as childhood socioeconomic status and literacy, may have greater influence on the risk of cognitive impairment late in life than such demographic characteristics as race and ethnicity, a large study by researchers with the UC Davis Alzheimer’s Disease Center and the University of Victoria, Canada, has found.


“Declining cognitive function in older adults is a major personal and public health concern,” said Bruce Reed, professor of neurology and associate director of the UC Davis Alzheimer’s Disease Center.

“But not all people lose cognitive function, and understanding the remarkable variability in cognitive trajectories as people age is of critical importance for prevention, treatment and planning to promote successful cognitive aging and minimize problems associated with cognitive decline.”

The study, “Life Experiences and Demographic Influences on Cognitive Function in Older Adults,” is published online in Neuropsychology, a journal of the American Psychological Association. It is one of the first comprehensive examinations of the multiple influences of varied demographic factors early in life and their relationship to cognitive aging.

The research was conducted in a group of over 300 diverse men and women who spoke either English or Spanish. They were recruited from senior citizen social, recreational and residential centers, as well as churches and health-care settings. At the time of recruitment, all study participants were 60 or older, and had no major psychiatric illnesses or life threatening medical illnesses. Participants were Caucasian, African-American or Hispanic.

The extensive testing included multidisciplinary diagnostic evaluations through the UC Davis Alzheimer’s Disease Center in either English or Spanish, which permitted comparisons across a diverse cohort of participants.

Consistent with previous research, the study found that non-Latino Caucasians scored 20 to 25 percent higher on tests of semantic memory (general knowledge) and 13 to 15 percent higher on tests of executive functioning compared to the other ethnic groups. However, ethnic differences in executive functioning disappeared and differences in semantic memory were reduced by 20 to 30 percent when group differences in childhood socioeconomic status, adult literacy and extent of physical activity during adulthood were considered. 
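The kind of adjustment described above – an apparent group gap shrinking once life-experience covariates are taken into account – can be illustrated with invented scores, where the gap is driven mostly by the groups’ differing childhood-SES composition:

```python
# Toy illustration of covariate adjustment shrinking an apparent group gap.
# All scores and group compositions below are invented.
from statistics import mean

# (group, childhood_SES) -> hypothetical semantic-memory scores
scores = {
    ("A", "low"):  [50, 52, 51],
    ("A", "high"): [60, 61, 62, 59, 60, 61],   # group A is mostly high-SES here
    ("B", "low"):  [49, 51, 50, 50, 52, 50],   # group B is mostly low-SES here
    ("B", "high"): [59, 60, 61],
}

def group_scores(g):
    """All scores for one group, pooled across SES strata."""
    return scores[(g, "low")] + scores[(g, "high")]

# Unadjusted comparison: pool everyone, compare group means
raw_gap = mean(group_scores("A")) - mean(group_scores("B"))

# Adjusted comparison: compare within each SES stratum, then average the gaps
adjusted_gap = mean(
    mean(scores[("A", ses)]) - mean(scores[("B", ses)])
    for ses in ("low", "high")
)

# The raw gap is several points; within matched SES strata it is under one
print(round(raw_gap, 2), round(adjusted_gap, 2))
```

This is the same logic, in miniature, as controlling for childhood socioeconomic status in the study: the residual group difference after adjustment is what cannot be explained by the covariate.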

“This study is unusual in that it examines how many different life experiences affect cognitive decline in late life,” said Dan Mungas, professor of neurology and associate director of the UC Davis Alzheimer’s Disease Research Center. 

“It shows that variables like ethnicity and years of education that influence cognitive test scores in a single evaluation are not associated with rate of cognitive decline, but that specific life experiences like level of reading attainment and intellectually stimulating activities are predictive of the rate of late-life cognitive decline. This suggests that intellectual stimulation throughout the life span can reduce cognitive decline in old age.”

Regardless of ethnicity, advanced age and apolipoprotein E (APOE) genotype were associated with increased cognitive decline over the average of four years that participants were followed. APOE is the largest known genetic risk factor for late-onset Alzheimer’s. Less decline was experienced by persons who reported more engagement in recreational activities in late life and who maintained their levels of activity engagement from middle age to old age. Single-word reading – the ability to decode a word on sight, which often is considered an indication of the quality of educational experience – was also associated with less cognitive decline, a finding that was true for both English and Spanish readers, irrespective of their race or ethnicity. These findings suggest that early life experiences affect late-life cognition indirectly, through literacy and late-life recreational pursuits, the authors said.

“These findings are important,” explained Paul Brewster, lead author of the study, a doctoral student at the University of Victoria, Canada, and a pre-doctoral psychology intern at the UC San Diego Department of Psychiatry, “because it challenges earlier research that suggests associations between race and ethnicity, particularly among Latinos, and an increased risk of late-life cognitive impairment and dementia.

“Our findings suggest that the influences of demographic factors on late-life cognition may be reflective of broader socioeconomic factors, such as educational opportunity and related differences in physical and mental activity across the life span.”

(Source: ucdmc.ucdavis.edu)

Filed under alzheimer's disease cognitive impairment life experience apoE4 psychology neuroscience science

313 notes


Missing sleep may hurt your memory

Lack of sleep, already considered a public health epidemic, can also lead to errors in memory, finds a new study by researchers at Michigan State University and the University of California, Irvine.

The study, published online in the journal Psychological Science, found participants deprived of a night’s sleep were more likely to flub the details of a simulated burglary they were shown in a series of images.

Distorted memory can have serious consequences in areas such as criminal justice, where eyewitness misidentifications are thought to be the leading cause of wrongful convictions in the United States.

“We found memory distortion is greater after sleep deprivation,” said Kimberly Fenn, MSU associate professor of psychology and co-investigator on the study. “And people are getting less sleep each night than they ever have.”

The Centers for Disease Control and Prevention calls insufficient sleep an epidemic and said it’s linked to vehicle crashes, industrial disasters and chronic diseases such as hypertension and diabetes.

The researchers conducted experiments at MSU and UC-Irvine to gauge the effect of insufficient sleep on memory. The results: Participants who were kept awake for 24 hours – and even those who got five or fewer hours of sleep – were more likely to mix up event details than participants who were well rested.

“People who repeatedly get low amounts of sleep every night could be more prone in the long run to develop these forms of memory distortion,” Fenn said. “It’s not just a full night of sleep deprivation that puts them at risk.”

Filed under sleep sleep deprivation memory false memory psychology neuroscience science

246 notes

Children as young as three recognise ‘cuteness’ in faces of people and animals
Children as young as three are able to recognise, in both humans and animals, the same ‘cute’ infantile facial features that encourage caregiving behaviour in adults, new research has shown.
A study investigating whether youngsters can identify baby-like characteristics – a set of traits known as the ‘baby schema’ – across different species has revealed for the first time that even pre-school children rate puppies, kittens and babies as cuter than their adult counterparts.
The discovery that young children are influenced by the baby schema – a round face, high forehead, big eyes and a small nose and mouth – is a significant step towards understanding why humans are more attracted to infantile features, the study authors believe.
The baby schema has been shown to engender protective, caregiving behaviour in adults and to decrease the likelihood of aggression toward infants.
The research was carried out by PhD student Marta Borgi and Professor Kerstin Meints, members of the Evolution and Development Research Group in the School of Psychology at the University of Lincoln, UK.
Marta said: “This study is important for several reasons. We already knew that adults experience this baby schema effect, finding babies with more infantile features cuter.
“Our results provide the first rigorous demonstration that a visual preference for these traits emerges very early during development. Independently of the species viewed, children in our study spent more time looking at images with a higher degree of these baby-like features.
“Interestingly, while participants gave different cuteness scores to dogs, cats and humans, they all found the images of adult dog faces cuter than both adult cats and human faces.”
The researchers carried out two experiments with children aged between three and six years old: one to track eye movements to see which facial areas the children were drawn to, and a second to assess how cute the children rated animals and humans with infantile traits.
Pictures of human adults and babies, dogs, puppies, cats and kittens were digitally manipulated to appear ‘cuter’ by applying baby schema characteristics. The same source images were also made less cute by giving the subjects more adult-like features: a narrow face, low forehead, small eyes, and large nose and mouth – making this study more rigorous than previous work.
The children rated how cute they thought each image was and their eye movements were analysed using specialist eye-tracking software developed by the University of Lincoln.
The research could also help improve how children are taught safe behaviour around dogs.
Professor Kerstin Meints, Professor in Developmental Psychology at Lincoln’s School of Psychology, supervised the research.
She said: “We have also demonstrated that children are highly attracted to dogs and puppies, and we now need to find out if that attractiveness may override children’s ability to recognise stress signalling in dogs.”
“This study will also lead to further research with an impact on real life, namely whether the ‘cuteness’ of an animal in rescue centres makes them more or less likely to be adopted.”
This research was published in the scientific journal Frontiers in Psychology.
