Neuroscience

Articles and news from the latest research reports.

Posts tagged psychology

86 notes

Kelly the Robot Helps Kids Tackle Autism

Using a kid-friendly robot during behavioral therapy sessions may help some children with autism gain better social skills, a preliminary study suggests.

The study, of 19 children with autism spectrum disorders (ASDs), found that kids tended to do better when their visit with a therapist included a robot “co-therapist.” On average, they made bigger gains in social skills such as asking “appropriate” questions, answering questions and making conversational comments.

So-called humanoid robots are already being marketed for this purpose, but there has been little research to back up their effectiveness.

"Going into this study, we were skeptical," said lead researcher Joshua Diehl, an assistant professor of psychology at the University of Notre Dame in Indiana, who said he has no financial interest in the technology.

"We found that, to our surprise, the kids did better when the robot was added," he said.

There are still plenty of caveats, however, said Diehl, who is presenting his team’s findings Saturday at the International Meeting for Autism Research (IMFAR) in San Sebastian, Spain.

For one, the study was small. And it’s not clear that the results seen in a controlled research setting would be the same in the real world of therapists’ offices, according to Diehl.

"I’d say this is not yet ready for prime time," he said.

ASDs are a group of developmental disorders that affect a person’s ability to communicate and interact socially. The severity of those effects ranges widely: Some people have mild problems socializing, but have normal to above-normal intelligence; some people have profound difficulties relating to others, and may have intellectual impairment as well.

Experts have become interested in using technology — from robots to iPads — along with standard ASD therapies because it may help bridge some of the communication issues kids have.

Human communication is complex and unpredictable, with body language, facial expressions and other subtle cues coming into the mix, explained Geraldine Dawson, chief science officer for the advocacy group Autism Speaks.

A robot or a computer game, on the other hand, can be programmed to be simple and predictable, and that may help kids with ASDs better process the information they are being given, Dawson said.

"Broadly speaking," she said, "we are very excited about the potential role for technology in diagnosing and treating ASDs." But she also agreed with Diehl that the findings are "very preliminary," and that researchers have a lot more to learn about how technology — robots or otherwise — fits into ASD therapies.

For the study, Diehl’s team used a humanoid robot manufactured by Aldebaran Robotics, which markets the NAO robot for use in education, including special education for kids with ASDs. The robot, which stands about 2 feet tall, looks like a toy but is priced more like a small car, Diehl noted.

The NAO H25 “Academic Edition” rings up at about $16,000. (Diehl said the study was funded by government and private grants, not the manufacturer.)

The researchers had 19 kids aged 6 to 13 complete 12 behavioral therapy sessions, where a therapist worked with the child on social skills. Half of the sessions involved the robot, named Kelly, which was wheeled out so the child could practice conversing with her, while the therapist stood by.

"So the child might say, ‘Hi Kelly, how are you?’" Diehl explained. "Then Kelly would say, ‘Fine. What did you do today?’" During the non-Kelly sessions, another person entered the room and carried on the same conversation with the child that the robot would have.

On average, Diehl’s team found, kids made bigger gains from the sessions that included Kelly — based on both their interactions with their therapists, and their parents’ reports.

"There was one child who, when his dad came home from work, asked him how his day was," Diehl said. "He’d never done that before."

Still, he stressed that while the robot sessions seemed more successful on average, the children varied widely in their responses to Kelly. Going forward, Diehl said, it will be important to figure out whether there are certain kids with ASDs more likely to benefit from a robot co-therapist.

Dawson agreed that there is no one-size-fits-all ASD therapy. “Any therapy for a person with an ASD has to be individualized,” she said. The idea with any technology, she added, is to give therapists and doctors extra “tools” to work with.

A separate study presented at the same meeting looked at another type of tool. Researchers had 60 “minimally verbal” children with ASDs attend two “play-based” sessions per week, aimed at boosting their ability to speak and gesture. Half of the kids were also given a “speech-generating device,” like an iPad.

Three and six months later, children who worked with the devices were able to say more words and were quicker to take up conversational skills.

Dawson said the robot and iPad studies are just part of the growing body of research into how technology can not only aid in ASD therapies, but also help doctors diagnose the disorders or help parents manage at home.

But both Diehl and Dawson stressed that no robot or iPad is intended to stand in for human connection. The idea, after all, is to enhance kids’ ability to communicate and have relationships, Dawson noted. “Technology will never take the place of people,” she said.

The data and conclusions of research presented at meetings should be viewed as preliminary until published in a peer-reviewed journal.

(Source: webmd.com)

Filed under ASD autism humanoid robots robots robotics communication social skills neuroscience psychology science

278 notes

PTSD research: distinct gene activity patterns from childhood abuse

Abuse during childhood is different.

A study of adult civilians with PTSD (post-traumatic stress disorder) has shown that individuals with a history of childhood abuse have distinct, profound changes in gene activity patterns, compared to adults with PTSD but without a history of child abuse.

A team of researchers from Atlanta and Munich probed blood samples from 169 participants in the Grady Trauma Project, a study of more than 5000 Atlanta residents with high levels of exposure to violence, physical and sexual abuse and with high risk for civilian PTSD.

The results were published Monday, April 29 in Proceedings of the National Academy of Sciences, Early Edition.

“These are some of the most robust findings to date showing that different biological pathways may describe different subtypes of a psychiatric disorder, which appear similar at the level of symptoms but may be very different at the level of underlying biology,” says Kerry Ressler, MD, PhD, professor of psychiatry and behavioral sciences at Emory University School of Medicine and Yerkes National Primate Research Center.

“As these pathways become better understood, we expect that distinctly different biological treatments would be implicated for therapy and recovery from PTSD based on the presence or absence of past child abuse.”

Ressler, a Howard Hughes Medical Institute Investigator, is co-director of the Grady Trauma Project, along with co-author Bekh Bradley, PhD, assistant professor of psychiatry and behavioral sciences at Emory and director of the Trauma Recovery Program at the Atlanta Veterans Affairs Medical Center.

The first author of the paper is Divya Mehta, PhD, a postdoctoral fellow in Munich. The senior author is Elisabeth Binder, MD, PhD, associate professor of psychiatry and behavioral sciences at Emory and group leader at the Max-Planck Institute of Psychiatry in Munich, Germany.

Mehta and her colleagues examined changes in the patterns of which genes were turned on and off in blood cells from patients. They also looked at patterns of methylation, a DNA modification on top of the four letters of the genetic code that causes genes to be ‘silenced’ or made inactive.

Study participants were divided into three groups: people who experienced trauma without developing PTSD, people with PTSD who were exposed to child abuse, and people with PTSD who were not exposed to child abuse.

The researchers were surprised to find that although hundreds of genes had significant changes in activity in the PTSD with and without child abuse groups, there was very little overlap in patterns between these groups. The two groups shared similar symptoms of PTSD, which include intrusive thoughts such as nightmares and flashbacks, avoidance of trauma reminders, and symptoms of hyperarousal and hypervigilance.

The PTSD with child abuse group displayed more changes in genes linked with development of the nervous system and regulation of the immune system, while the PTSD minus child abuse group displayed more changes in genes linked with apoptosis (cell death) and growth rate regulation. In addition, changes in methylation were more frequent in the PTSD with child abuse group. The authors believe that these biological pathways may lead to different mechanisms of PTSD symptom formation within the brain.

The Max Planck/Emory scientists were probing gene activity in blood cells, rather than brain tissue. Similar results have been obtained by researchers studying the influence of child abuse on the brains of people who had committed suicide.

“Traumatic events that happen in childhood are embedded in the cells for a long time,” Binder says. “Not only the disease itself, but the individual’s life experience is important in the biology of PTSD, and this should be reflected in the way we treat these disorders.”

(Source: news.emory.edu)

Filed under child abuse PTSD gene activity dna methylation blood cells psychology neuroscience science

171 notes

Researchers Successfully Treat Autism in Infants

Most infants respond to a game of peek-a-boo with smiles at the very least, and, for those who find the activity particularly entertaining, gales of laughter. For infants with autism spectrum disorders (ASD), however, the game can be distressing rather than pleasant, and they’ll do their best to tune out all aspects of it –– and that includes the people playing with them.

That disengagement is a hallmark of ASD, and one of the characteristics that amplifies the disorder as infants develop into children and then adults.

A study conducted by researchers at the Koegel Autism Center at UC Santa Barbara has found that replacing such games with those the infant prefers can actually lessen the severity of the infants’ ASD symptoms, and, perhaps, alleviate the condition altogether. Their work is highlighted in the current issue of the Journal of Positive Behavior Interventions.

Lynn Koegel, clinical director of the center and the study’s lead author, described the game-playing protocol as a modified Pivotal Response Treatment (PRT). Developed at UCSB, PRT is based on principles of positive motivation.

The researchers identified the activities that seemed to be more enjoyable to the infants and taught the respective parents to focus on those rather than on the typical games they might otherwise choose. “We had them play with their infants for short periods, and then give them some kind of social reward,” Koegel said. “Over time, we conditioned the infants to enjoy all the activities that were presented by pairing the less desired activities with the highly desired ones.” The social reward is preferable to, say, a toy, Koegel noted, because it maintains the ever-crucial personal interaction.

"The idea is to get them more interested in people," she continued, "to focus on their socialization. If they’re avoiding people and avoiding interacting, that creates a whole host of other issues. They don’t form friendships, and then they don’t get the social feedback that comes from interacting with friends."

According to Koegel, by the end of the relatively short one- to three-month intervention period, which included teaching the parents how to implement the procedures, all the infants in the study had normal reactions to stimuli. “Two of the three have no disabilities at all, and the third is very social,” she said. “The third does have a language delay, but that’s more manageable than some of the other issues.”

On a large scale, Koegel hopes to establish some benchmark for identifying social deficits in infants so parents and health care providers can intervene sooner rather than later. “We have a grant from the Autism Science Foundation to look at lots of babies and try to really figure out which signs are red flags, and which aren’t,” she said. “A number of the infants who show signs of autism will turn out to be perfectly fine; but we’re saying, let’s not take the risk if we can put an intervention in play that really works. Then we don’t have to worry about whether or not these kids would develop the full-blown symptoms of autism.”

Historically, ASD is diagnosed in children 18 months or older, and treatment generally begins around 4 years. “You can pretty reliably diagnose kids at 18 months, especially the more severe cases,” said Koegel. “The mild cases might be a little harder, especially if the child has some verbal communication. There are a few measures –– like the ones we used in our study –– that can diagnose kids pre-language, even as young as six months. But ours was the first that worked with children under 12 months and found an effective intervention.”

Given the increasing number of children being diagnosed with ASD, Koegel’s findings could be life altering –– literally. “When you consider that the recommended intervention for preschoolers with autism is 30 to 40 hours per week of one-on-one therapy, this is a fairly easy fix,” she said. “We did a single one-hour session per week for four to 12 weeks until the symptoms improved, and some of these infants were only a few months old. We saw a lot of positive change.”

(Source: ia.ucsb.edu)

Filed under ASD autism infants socialization social interaction psychology neuroscience science

165 notes

Decoding ‘noisy’ language in daily life

Suppose you hear someone say, “The man gave the ice cream the child.” Does that sentence seem plausible? Or do you assume it is missing a word? Such as: “The man gave the ice cream to the child.”

A new study by MIT researchers indicates that when we process language, we often make these kinds of mental edits. Moreover, it suggests that we seem to use specific strategies for making sense of confusing information — the “noise” interfering with the signal conveyed in language, as researchers think of it.

“Even at the sentence level of language, there is a potential loss of information over a noisy channel,” says Edward Gibson, a professor in MIT’s Department of Brain and Cognitive Sciences (BCS) and Department of Linguistics and Philosophy.

Gibson and two co-authors detail the strategies at work in a new paper, “Rational integration of noisy evidence and prior semantic expectations in sentence interpretation,” published today in the Proceedings of the National Academy of Sciences.

“As people are perceiving language in everyday life, they’re proofreading, or proof-hearing, what they’re getting,” says Leon Bergen, a PhD student in BCS and a co-author of the study. “What we’re getting is quantitative evidence about how exactly people are doing this proofreading. It’s a well-calibrated process.”

Asymmetrical strategies

The paper is based on a series of experiments the researchers conducted, using the Amazon Mechanical Turk survey system, in which subjects were presented with a series of sentences — some evidently sensible, and others less so — and asked to judge what those sentences meant.

A key finding is that given a sentence with only one apparent problem, people are more likely to think something is amiss than when presented with a sentence where two edits may be needed. In the latter case, people seem to assume instead that the sentence is not more thoroughly flawed, but has an alternate meaning entirely.

“The more deletions and the more insertions you make, the less likely it will be you infer that they meant something else,” Gibson says. When readers have to make one such change to a sentence, as in the ice cream example above, they think the original version was correct about 50 percent of the time. But when people have to make two changes, they think the sentence is correct even more often, about 97 percent of the time.

Thus the sentence, “Onto the cat jumped a table,” which might seem to make no sense, can be made plausible with two changes — one deletion and one insertion — so that it reads, “The cat jumped onto a table.” And yet, almost all the time, people will not infer that those changes are needed, and assume the literal, surreal meaning is the one intended.

This finding interacts with another one from the study, that there is a systematic asymmetry between insertions and deletions on the part of listeners.

“People are much more likely to infer an alternative meaning based on a possible deletion than on a possible insertion,” Gibson says.

Suppose you hear or read a sentence that says, “The businessman benefitted the tax law.” Most people, it seems, will assume that sentence has a word missing from it — “from,” in this case — and fix the sentence so that it now reads, “The businessman benefitted from the tax law.” But people will less often think sentences containing an extra word, such as “The tax law benefitted from the businessman,” are incorrect, implausible as they may seem.
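
The trade-off described above can be sketched as a toy Bayesian noisy-channel calculation. This is purely an illustration, not the authors’ actual model: the probabilities below are made up, and only the qualitative pattern (deletions assumed likelier than insertions; more required edits making a correction less believable) mirrors the findings.

```python
# Toy noisy-channel sketch (illustrative only; not the study's model).
# The listener weighs a prior favoring plausible meanings against the
# chance that the sentence was corrupted on its way to them.

P_DELETION = 0.05   # assumed chance a word was accidentally dropped
P_INSERTION = 0.01  # assumed chance a word was accidentally added

def corruption_prob(n_deletions, n_insertions):
    """Probability the observed sentence arose from the intended one via these edits."""
    return (P_DELETION ** n_deletions) * (P_INSERTION ** n_insertions)

def p_correction_intended(n_del, n_ins, prior_plausible=0.999):
    """Posterior that the plausible 'fixed' sentence, not the literal one, was meant."""
    p_fixed = prior_plausible * corruption_prob(n_del, n_ins)
    p_literal = (1 - prior_plausible) * corruption_prob(0, 0)  # literal needs no edits
    return p_fixed / (p_fixed + p_literal)

one_edit = p_correction_intended(n_del=1, n_ins=0)   # "...ice cream [to] the child"
two_edits = p_correction_intended(n_del=1, n_ins=1)  # "Onto the cat jumped a table"

# More required edits -> lower posterior that a correction was intended,
# qualitatively matching the asymmetry reported in the study.
assert two_edits < one_edit
```

With these made-up parameters the one-edit correction comes out highly probable while the two-edit correction drops below even odds, so the listener would take the surreal two-edit sentence at face value.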

Another strategy people use, the researchers found, is that when presented with an increasing proportion of seemingly nonsensical sentences, they actually infer lower amounts of “noise” in the language. That means people adapt when processing language: If every sentence in a longer sequence seems silly, people are reluctant to think all the statements must be wrong, and hunt for a meaning in those sentences. By contrast, they perceive greater amounts of noise when only the occasional sentence seems obviously wrong, because the mistakes so clearly stand out.

“People seem to be taking into account statistical information about the input that they’re receiving to figure out what kinds of mistakes are most likely in different environments,” Bergen says.

Reverse-engineering the message

Other scholars say the work helps illuminate the strategies people may use when they interpret language.

“I’m excited about the paper,” says Roger Levy, a professor of linguistics at the University of California at San Diego who has done his own studies in the area of noise and language.

According to Levy, the paper posits “an elegant set of principles” explaining how humans edit the language they receive. “People are trying to reverse-engineer what the message is, to make sense of what they’ve heard or read,” Levy says.

“Our sentence-comprehension mechanism is always involved in error correction, and most of the time we don’t even notice it,” he adds. “Otherwise, we wouldn’t be able to operate effectively in the world. We’d get messed up every time anybody makes a mistake.”

Filed under language speech speech perception language processing linguistics psychology neuroscience science

78 notes

Size, wiring of brain structures in kids predict benefit from math tutoring

Why do some children learn math more easily than others? Research from the Stanford University School of Medicine has yielded an unexpected new answer.

In a study of third-graders’ responses to math tutoring, Stanford scientists found that the size and wiring of specific brain structures predicted how much an individual child would benefit from math tutoring. However, traditional intelligence measures, such as children’s IQs and their scores on tests of mathematical ability, did not predict improvements from tutoring.

The research is the first to use brain scans to look for a link between math-learning abilities and brain structure or function, and also the first to compare neural and cognitive predictors of kids’ responses to tutoring. In addition, it provides information on the differences between how children and adults learn math, and could help researchers understand the origins of math-learning disabilities.

The study was published online April 29 in Proceedings of the National Academy of Sciences.

"What was really surprising was that intrinsic brain measures can predict change - we can actually predict how much a child is going to learn during eight weeks of math tutoring based on measures of brain structure and connectivity," said Vinod Menon, PhD, the study’s senior author and a professor of psychiatry and behavioral sciences. Menon is also a member of the Child Health Research Institute at Lucile Packard Children’s Hospital.

"The results are a significant step toward the development of targeted learning programs based on a child’s current as well as predicted learning trajectory," said the study’s lead author, Kaustubh Supekar, PhD, postdoctoral scholar in psychiatry and behavioral sciences.

Menon’s team focused on third-grade students ages 8 and 9 because these children are at a critical stage for acquiring basic arithmetic skills. The study included 24 third-graders who participated in a well-validated program of 15 to 20 hours of individualized math tutoring over eight weeks. The tutors explained new concepts to children and also got them to practice math skills with an emphasis on speed, and the sessions were tailored to each child’s level of understanding.

Before tutoring began, the children were given several standard neuropsychological assessments, including tests of IQ, working memory, reading and math-problem-solving abilities. Both before and after the eight-week tutoring period, children’s arithmetic performance was tested, and all children had structural and functional magnetic resonance imaging scans performed on their brains. To control for the effects of math instruction the children received at school (rather than during tutoring), a comparison group of 16 third-grade children who did not receive tutoring, but who had the same testing and brain scans before and after an eight-week interval, was also included in the study.

All 24 children receiving tutoring improved their arithmetic performance. Their performance efficiency, a composite measure of accuracy and speed of problem solving, improved an average of 67 percent after tutoring. But individual gains varied widely, ranging from 8 percent to 198 percent improvement. The children who did not receive tutoring did not show any change in arithmetic performance during the study.

When the researchers analyzed the children’s structural brain scans, they found that larger gray matter volume in three brain structures predicted greater ability to benefit from math tutoring. (The predictions were generated with a machine learning algorithm, the same type of data-analysis tool used to create movie recommendations for users of websites like Netflix, for example.) Of the three structures, the best predictor of improvement with tutoring was a larger hippocampus, a structure traditionally considered one of the brain’s most important memory centers. Functional connections between the hippocampus and several other brain regions, especially the prefrontal cortex and basal ganglia, also predicted ability to benefit from tutoring. These regions are important for forming long-term memories.
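
As a rough illustration of this kind of analysis, the sketch below cross-validates a prediction of tutoring gains from a single structural brain measure. The data are entirely synthetic and the simple linear model is a stand-in, not the study’s actual algorithm or measurements.

```python
import numpy as np

# Synthetic sketch: predict each child's tutoring gain from a structural
# brain measure (here, a made-up "hippocampal volume") using leave-one-out
# cross-validated linear regression. All numbers are illustrative.

rng = np.random.default_rng(0)
n = 24
hippocampal_volume = rng.normal(4.0, 0.4, n)  # arbitrary units, synthetic
gains = 50 + 30 * (hippocampal_volume - 4.0) + rng.normal(0, 5, n)  # % improvement

def loo_predictions(x, y):
    """Leave one child out, fit y = a*x + b on the rest, predict the held-out child."""
    preds = np.empty_like(y)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        a, b = np.polyfit(x[mask], y[mask], 1)
        preds[i] = a * x[i] + b
    return preds

preds = loo_predictions(hippocampal_volume, gains)
r = np.corrcoef(preds, gains)[0, 1]  # how well the brain measure predicts gains
```

Because the synthetic data have the effect built in, the cross-validated correlation `r` comes out high here; in the actual study the predictors were richer (multiple structures plus connectivity) and the algorithm more sophisticated.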

"The part of the brain that is recruited in memories for places and events also plays a pivotal role in determining how much and how well a child learns math," Supekar said.

None of the neuropsychological assessment scores, such as IQ or tests of working memory, could predict how much an individual child would benefit from tutoring.

The brain systems highlighted by this study - including the hippocampus, basal ganglia and prefrontal cortex - are different from those previously implicated for math learning in adults, the researchers noted. When solving math problems, adults rely on brain regions that are specialized for representing complex visual objects and processing spatial information.

And the findings suggest that the tutoring approach used, which was tailored to each child’s level of understanding and included lots of repetitive, high-speed arithmetic practice to help cement facts in children’s heads, works because it is compatible with the way their brains encode facts. “Memory resources provided by the hippocampal system create a scaffold for learning math in the developing brain,” Menon said. “Our findings suggest that, while conceptual knowledge about numbers is necessary for math learning, repeated, speeded practice and testing of simple number combinations is also needed to encode facts and encourage children’s reliance on retrieval - the most efficient strategy for answering simple arithmetic problems.” Once kids are able to pull up answers to basic arithmetic problems automatically from memory, their brains can tackle more complex problems.

The researchers’ next steps will include comparing brain structure and wiring in children with and without math learning disabilities, analyzing how the wiring of the brain changes in response to tutoring and examining whether lower-performing children’s brains can be exercised to help them learn math. “We’re pushing a very ecologically relevant model of learning,” Menon said. “Academic instruction should rely on validated instructional principles while incorporating individualized training to provide feedback on whether students are right or wrong, how they’re wrong and how they can improve their math skills.”

(Source: med.stanford.edu)

Filed under children math tutoring brain connections brain scans psychology neuroscience science

69 notes

Ear-witness precision: Congenitally blind people have more accurate memories

Distortions and illusions within human memory are well documented in scientific and forensic work and appear to be a basic feature of memory functioning.

Yet several studies suggest that blind individuals, especially those without any visual experience, possess superior verbal and memory skills.

The researchers, from the University of Bath’s Department of Psychology, ran memory tests on groups of congenitally blind people, people with late-onset blindness and sighted people, in collaboration with a research assistant at Queen Mary, University of London.

Each participant was asked to listen to a series of word lists and then recall the words they heard. Past research has found that such word lists normally cause people to falsely “remember” words that are related to those heard, but that were never actually experienced. For example, hearing ‘chimney’, ‘cigar’, and ‘fire’ can prompt some to produce a false memory of the word ‘smoke’ when asked to remember the list of words.
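
The scoring logic of such a list-recall task can be sketched as follows (the words are the example from the text; the helper function is a hypothetical illustration):

```python
# Minimal sketch of scoring recall in this paradigm: correct recalls are
# words actually presented; a "false memory" is recall of the related
# critical lure that was never on the list.

studied = {"chimney", "cigar", "fire"}  # words actually heard (example)
critical_lure = "smoke"                 # related word never presented

def score_recall(recalled):
    """Return (number of correct recalls, whether the lure was falsely recalled)."""
    correct = [w for w in recalled if w in studied]
    false_memory = critical_lure in recalled
    return len(correct), false_memory

n_correct, lured = score_recall(["fire", "cigar", "smoke"])
# n_correct == 2 and lured is True: two genuine memories plus one false memory
```

On this scoring, the congenitally blind participants in the study would show higher `n_correct` and fewer `lured` trials than the other groups.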

The researchers found that the congenitally blind participants not only remembered more words but were also less likely to create false memories of the related words. In contrast, the sighted and late-blind participants remembered fewer words and were much more likely to falsely remember the related words that were never read to them.

Dr Achille Pasqualotto, postdoctoral researcher and first author of the study, said: “We found that congenitally blind participants reported significantly more correct words than both late onset blind and sighted people. Most of the congenitally blind participants avoided unrelated words, therefore congenitally blind participants can store more items and with a higher fidelity.”

Dr Michael Proulx who led the study added: “Our results show that visual experience has a significant negative impact on both the number of items remembered and the accuracy of semantic memory and also demonstrate the importance of adaptive neural plasticity in the congenitally blind brain for enhanced memory retrieval mechanisms.

“There is an old Hebrew proverb that believes the blind were the most trustworthy sources for quotations and that certainly seems true in this case. It will be interesting to see whether congenitally blind individuals would also be better witnesses in forensic studies.”

The research is from the paper “Congenital blindness improves semantic and episodic memory”, published in the journal Behavioural Brain Research.

(Source: bath.ac.uk)

Filed under congenital blindness false memories memory visual experience psychology neuroscience science

241 notes

Psychopaths are not neurally equipped to have concern for others

Prisoners who are psychopaths lack the basic neurophysiological “hardwiring” that enables them to care for others, according to a new study by neuroscientists at the University of Chicago and the University of New Mexico.

“A marked lack of empathy is a hallmark characteristic of individuals with psychopathy,” said the lead author of the study, Jean Decety, the Irving B. Harris Professor in Psychology and Psychiatry at UChicago. Psychopathy affects approximately 1 percent of the United States general population and 20 percent to 30 percent of the male and female U.S. prison population. Relative to non-psychopathic criminals, psychopaths are responsible for a disproportionate amount of repetitive crime and violence in society.

“This is the first time that neural processes associated with empathic processing have been directly examined in individuals with psychopathy, especially in response to the perception of other people in pain or distress,” he added. 

The results of the study, which could help clinical psychologists design better treatment programs for psychopaths, are published in the article, “Brain Responses to Empathy-Eliciting Scenarios Involving Pain in Incarcerated Individuals with Psychopathy,” which appears online April 24 in the journal JAMA Psychiatry.

Joining Decety in the study were Laurie Skelly, a graduate student at UChicago; and Kent Kiehl, professor of psychology at the University of New Mexico.

For the study, the research team tested 80 prisoners between ages 18 and 50 at a correctional facility. The men volunteered for the test and were tested for levels of psychopathy using standard measures.

They were then studied with functional MRI technology, to determine their responses to a series of scenarios depicting people being intentionally hurt. They were also tested on their responses to seeing short videos of facial expressions showing pain.

The participants in the high psychopathy group exhibited significantly less activation in the ventromedial prefrontal cortex, lateral orbitofrontal cortex, amygdala and periaqueductal gray parts of the brain, but more activity in the striatum and the insula when compared to control participants, the study found. 

The high response in the insula in psychopaths was an unexpected finding, as this region is critically involved in emotion and somatic resonance. Conversely, the diminished response in the ventromedial prefrontal cortex and amygdala is consistent with the affective neuroscience literature on psychopathy. This latter region is important for monitoring ongoing behavior, estimating consequences and incorporating emotional learning into moral decision-making, and plays a fundamental role in empathic concern and valuing the well-being of others.

“The neural response to distress of others such as pain is thought to reflect an aversive response in the observer that may act as a trigger to inhibit aggression or prompt motivation to help,” the authors write in the paper.

“Hence, examining the neural response of individuals with psychopathy as they view others being harmed or expressing pain is an effective probe into the neural processes underlying affective and empathy deficits in psychopathy,” the authors wrote.

Decety is one of the world’s leading experts on the biological underpinnings of empathy. His work also focuses on the development of empathy and morality in children.

Filed under psychopaths empathy fMRI brain activity ventromedial prefrontal cortex striatum amygdala psychology neuroscience science

196 notes

Anti-Smoking Ads with Strong Arguments, Not Flashy Editing, Trigger Part of Brain That Changes Behavior

Researchers from the Perelman School of Medicine at the University of Pennsylvania have shown that an area of the brain that initiates behavioral changes had greater activation in smokers who watched anti-smoking ads with strong arguments versus those with weaker ones, and irrespective of flashy elements, like bright and rapidly changing scenes, loud sounds and unexpected scenario twists. Those smokers also had significantly less nicotine metabolites in their urine when tested a month after viewing those ads, the team reports in a new study published online April 23 in the Journal of Neuroscience.

This is the first time research has shown an association between cognitive and neural responses to the content and format of televised ads and subsequent smoking behavior.

In a study of 71 non-treatment-seeking smokers recruited from the Philadelphia area, the team, led by Daniel D. Langleben, M.D., a psychiatrist in the Center for Studies of Addiction at Penn Medicine, identified key brain regions engaged in the processing of persuasive communications using fMRI, or functional magnetic resonance imaging. They found that a part of the brain involved in future behavioral changes—known as the dorsomedial prefrontal cortex (dMPFC)—had greater activation when smokers watched an anti-smoking ad with a strong argument versus a weak one.

One month after subjects watched the ads, the researchers sampled smokers’ urine cotinine levels (metabolite of nicotine) and found that those who watched the strong ads had significantly less cotinine in their urine compared to their baseline versus those who watched weaker ads.
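The comparison described here, each group's follow-up cotinine measured against its own baseline, is a baseline-adjusted contrast. A minimal sketch with hypothetical numbers (not data from the study):

```python
# Baseline-adjusted comparison of urinary cotinine by ad group.
# All values below are hypothetical, for illustration only.
baseline = {"strong_ads": [1200, 1100, 1300], "weak_ads": [1150, 1250, 1200]}
followup = {"strong_ads": [900, 850, 1000], "weak_ads": [1100, 1240, 1180]}

def mean(xs):
    return sum(xs) / len(xs)

def mean_change(group):
    """Average follow-up minus baseline; more negative = bigger reduction."""
    return mean(followup[group]) - mean(baseline[group])

for g in baseline:
    print(g, round(mean_change(g), 1))
```

Comparing each group to its own baseline, rather than comparing raw follow-up levels between groups, controls for pre-existing differences in how much the participants smoked.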

Even ads riddled with attention-grabbing tactics, the research suggests, are not effective at reducing tobacco intake unless their arguments are strong. However, ads with flashy editing and strong arguments, for example, produced better recognition.

 “We investigated the two major dimensions of any piece of media, content and format, which are both important here,” said Dr. Langleben, who is also an associate professor in the department of Psychiatry. “If you give someone an unconvincing ad, it doesn’t matter what format you do on top of that. You can make it sensational. But in terms of effectiveness, content is more important. You’re better off adding in more sophisticated editing and other special effects only if it is persuasive.”

The paper may enable improved methods of designing and evaluating public health advertising, according to the authors, including first author An-Li Wang, PhD, of the Annenberg Public Policy Center at the University of Pennsylvania. It could ultimately influence how producers construct ads and how production budgets are allocated, given that special effects are expensive compared with hiring screenwriters.

A 2009 study by Dr. Langleben and colleagues that looked solely at format found people were more likely to remember low-key, anti-smoking messages versus attention-grabbing messages. This was the first research to show that low-key versus attention-grabbing ads stimulated different patterns of activity, particularly in the frontal cortex and temporal cortex. But it did not address content strength or behavioral changes.

This new study is the first longitudinal investigation of the cognitive, behavioral, and neurophysiological response to the content and format of televised anti-smoking ads, according to the authors.

“This sets the stage for science-based evaluation and design of persuasive public health advertising,” said Dr. Langleben. “An ad is only as strong as its central argument, which matters more than its audiovisual presentation. Future work should consider supplementing focus groups with more technology-heavy assessments, such as brain responses to these ads, in advance of even putting the ad together in its entirety.”

(Image credit)

Filed under anti-smoking ads behavioral changes brain activity fMRI neuroscience psychology science

52 notes

'Clean' your memory to pick a winner

Predicting the winner of a sporting event with accuracy close to that of a statistical computer programme could be possible with proper training, according to researchers.

In a study published today, experiment participants who had been trained on statistically idealised data vastly improved their ability to predict the outcome of a baseball game.

In normal situations, the brain selects a limited number of memories to use as evidence to guide decisions. As real-world events do not always have the most likely outcome, retrieved memories can provide misleading information at the time of a decision.

Now, researchers at UCL and the University of Montreal have found a way to train the brain to accurately predict the outcome of an event, for example a baseball game, by giving subjects idealised scenarios that always conform to statistical probability.

Dr Bradley Love (UCL Department of Cognition, Perception and Brain Sciences), lead author of study, said: “Providing people with idealized situations, as opposed to actual outcomes, ‘cleans’ their memory and provides a stock of good quality evidence for the brain to use.”

In the study, published in Proceedings of the National Academy of Sciences, researchers programmed computers to use all available statistics to form a decision - making them more likely to predict the correct outcome. By using all data from previous sports leagues, the computer’s predictions always reflected the most likely outcome.

Next, researchers ‘trained’ the brains of participants by giving them a scenario which they had to predict the outcome of. Two groups of subjects, those given actual outcomes to situations and those given ideal outcomes were trained and then tested to compare their progress.

The scenarios consisted of games between two Major League baseball teams. Participants had to predict which team would win and were told if their prediction was correct. Those in the ‘actual’ group were told the true outcome of the game and those in the ‘ideal’ group were given fictional results.

Prior to participants’ predictions, the teams had been ranked in order based on their number of wins. For the ideal group, researchers changed the results of the match so the highest ranking team won regardless of the true outcome. This created ideal outcomes for the subjects as the best team always won, which of course does not happen in reality.
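The relabelling rule described above, reporting the higher-ranked team as the winner regardless of the real result, can be sketched as follows. The team names and win totals are invented for illustration:

```python
# Generate "idealised" training feedback for game predictions:
# the team ranked higher by season wins is always reported as the winner.
wins = {"A": 95, "B": 88, "C": 70}  # hypothetical win totals

def idealised_winner(team1, team2):
    """Report the team with more wins as the winner, regardless of
    the game's true outcome (ties go to team1 for simplicity)."""
    return team1 if wins[team1] >= wins[team2] else team2

# A game where the underdog C actually beat A:
actual_result = "C"
print(idealised_winner("A", "C"))  # the ideal group is told A won
```

Training on these statistically tidy labels gives the brain a "clean" stock of memories in which rank reliably predicts outcome, which is what the optimised computer models exploit.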

Participants in the experiment were tested by being asked to predict the outcomes for the rest of the matches played in the league, but they were not given feedback on their performance. Even though the ‘ideal’ group had been given incorrect data during training, they were significantly better at predicting the winner.

Dr Love explained: “Unlike machine systems, people’s decisions are messy because they rely on whatever memories are retrieved by chance. One consequence is that people perform better when the training situation is idealised – a useful fiction that fits our cognitive limitations.”

Participants’ prediction abilities were compared to computer models that were either optimised for prediction or modelled on human brains. After ideal outcome training, the study showed that ‘ideal’ subjects had greatly enhanced their skills and were comparable with the optimised model when predicting baseball game outcomes.

Authors suggest that idealised real world situations could be used to train professionals who rely on the ability to analyse and classify information. Doctors making diagnoses from x-rays, financial analysts and even those wanting to predict the weather could all benefit from the research.

Filed under brain statistical probability decision-making prediction psychology neuroscience science

74 notes

Red Light Increases Alertness During “Post-Lunch Dip”

Acute or chronic sleep deprivation resulting in increased feelings of fatigue is one of the leading causes of workplace incidents and related injuries. More incidents and performance failures, such as automobile accidents, occur in the mid-afternoon hours known as the “post-lunch dip.” The post-lunch dip typically occurs from 2-4 p.m., or about 16-18 hours after an individual’s bedtime from the previous night.

A new study from the Lighting Research Center (LRC) at Rensselaer Polytechnic Institute shows that exposure to certain wavelengths and levels of light has the potential to increase alertness during the post-lunch dip. The research was a collaboration between Mariana Figueiro, LRC Light and Health Program director and associate professor at Rensselaer, and LRC doctoral student Levent Sahin. Results of the study titled “Alerting effects of short-wavelength (blue) and long-wavelength (red) lights in the afternoon,” were recently published in Physiology & Behavior journal.

The collaboration between Figueiro and Sahin lays the groundwork for the possible use of tailored light exposures as a non-pharmacological intervention to increase alertness during the daytime. Figueiro has previously conducted studies that show that light has the potential to increase alertness at night. Exposure to more than 2500 lux of white light at night increases performance, elevates core body temperature, and increases heart rate.

In most studies to date, the alerting effects of light have been linked to its ability to suppress melatonin. However, results from another study led by Figueiro demonstrate that acute melatonin suppression is not needed for light to affect alertness during the nighttime. They showed that both short-wavelength (blue) and long-wavelength (red) lights increased measures of alertness but only short-wavelength light suppressed melatonin. Melatonin levels are typically lower during the daytime, and higher at night.

Figueiro and Sahin hypothesized that if light can impact alertness via pathways other than melatonin suppression, then certain wavelengths and levels of light might also increase alertness during the middle of the afternoon, close to the post-lunch dip hours.

During the study conducted at the LRC, participants experienced two experimental lighting conditions in addition to darkness. Long-wavelength “red” light (λmax = 630 nanometers) and short-wavelength “blue” light (λmax = 470 nanometers) were delivered to the corneas of each participant by arrays of light emitting diodes (LEDs) placed in 60 × 60 × 60 cm light boxes. Participant alertness was measured by electroencephalogram (EEG) and subjective sleepiness (KSS scale).

The team found that, compared with remaining in darkness, exposure to red light in the middle of the afternoon significantly reduced power in the alpha, alpha-theta and theta ranges. Because high power in these frequency ranges has been associated with sleepiness, the results suggest that red light positively affects measures of alertness not only at night but also during the day. Red light also appeared to be a more potent stimulus than blue light for modulating brain activity associated with daytime alertness, although the researchers did not find any significant differences in measures of alertness after exposure to red versus blue light. This suggests that blue light, especially at higher levels, could still increase alertness in the afternoon. It appears that melatonin suppression is not needed for light to affect objective measures of alertness.
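Band power of the kind measured here, for example alpha at roughly 8-12 Hz and theta at roughly 4-8 Hz, can be estimated from an EEG trace with a discrete Fourier transform. A minimal pure-Python sketch on a synthetic signal (the sampling rate and signal are illustrative, not the study's recordings):

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Sum squared DFT magnitudes whose frequencies fall in [f_lo, f_hi].
    A naive O(n^2) DFT; fine for short illustrative signals."""
    n = len(signal)
    total = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(signal))
            im = sum(x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(signal))
            total += (re * re + im * im) / n
    return total

# Synthetic one-second trace: a 10 Hz (alpha-range) oscillation.
fs = 128
trace = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
alpha = band_power(trace, fs, 8, 12)
theta = band_power(trace, fs, 4, 8)
print(alpha > theta)  # alpha-band power dominates for this signal
```

In practice EEG band power is computed with windowed spectral estimates (e.g. Welch's method) rather than a raw DFT, but the principle, summing spectral energy within a frequency band, is the same.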

“Our study suggests that photoreceptors other than the intrinsically photosensitive retinal ganglion cells respond to light for the arousal system,” said Figueiro. “Future research should look into the spectral sensitivity of alertness and how it changes over the course of 24 hours.”

Sahin, who has more than 10 years of experience in railway engineering, was interested in this study from a transportation safety perspective, and what the results could mean to the transportation industry. “Safety is a prerequisite and one of the most important quality indicators in the transportation industry,” said Sahin. “Our recent findings provided the scientifically valid underpinnings in approaching fatigue related safety problems in 24 hour transportation operations.”

From the present results, it is not possible to determine the underlying mechanisms contributing to light-induced changes in alertness because the optical radiation incident on the retina has multiple effects on brain activity through parallel neural pathways. According to Figueiro, that is an area that she would like to explore in future research.

Filed under alertness sleepiness sleep deprivation melatonin post-lunch dip wavelength lights fatigue neuroscience psychology science
