Posts tagged psychology

'Out-of-body' virtual experience could help social anxiety
New virtual imaging technology could be used as part of therapy to help people overcome social anxiety, according to new research from the University of East Anglia (UEA).
Research published today investigated for the first time whether people with social anxiety could benefit from seeing themselves interacting in social situations via video capture.
The experiment gave participants the chance to experience social interaction in the safety of a virtual environment by seeing their own life-size image projected into specially scripted real-time video scenes.
UEA researchers, led by Dr Lina Gega from UEA’s Norwich Medical School and MHCO’s Northumberland Talking Therapies, worked with Xenodu Virtual Environments to create more than 100 different social scenarios – such as using public transport, buying a drink at a bar, socialising at a party, shopping, and talking to a stranger in an art gallery.
The researchers tested whether this sort of experience could become a valuable part of Cognitive Behavioural Therapy (CBT) by including an hour-long session midway through a 12-week CBT course.
Dr Gega said: “People with social anxiety are afraid that they will draw attention to themselves and be negatively judged by others in social situations. Many will either avoid public places and social gatherings altogether, or use safety behaviours to cope – such as not making eye contact and being guarded or hyper-vigilant towards others.
“Paradoxically, this sort of behaviour draws attention to people with social anxiety and feeds into their beliefs that they don’t fit in.
“We wanted to see whether practising social situations in a virtual environment could help.”
Paul Strickland from Xenodu, the company behind the virtual environment system, said: “Our system uses video capture to project a user’s life-size image on screen so that they can watch themselves interacting with custom-scripted and digitally edited video clips.
“It isn’t a head-mounted display – which anxious people may find uncomfortable,” he added. “Instead, the user observes from an out-of-body perspective. They can then simultaneously view themselves and interact with the characters of the film.”
Dr Gega’s project focused on six young men recovering from psychosis who also had debilitating social anxiety. The participants engaged with a range of scenarios, some of which were designed to feature rude and hostile people. The virtual environments encouraged participants to practise small talk, maintain eye contact, test beliefs that they wouldn’t know what to say, and resist safety behaviours such as looking at the floor or being hyper-vigilant.
The main benefit of using these virtual environments in therapy was that they helped participants notice and change anxious behaviours in a safe, controlled environment, where situations could be rehearsed over and over again. Participants were found to drop safety behaviours and take greater social risks. And while realistic to an extent, the ‘fake’ feeling of staged scenarios proved to be a virtue in itself.
“It helped the participants question their interpretation of social cues,” said Dr Gega. “For example, if they thought that one of the characters was looking at them ‘funny’ they could immediately see that there must be an alternative explanation because the scenarios were artificial.
“Another useful aspect of the system is that it can be tailored to address specific fears in social situations - for example a fear of performance, intimacy, or crowds,” she added.
“Two of the patients said that the system felt ‘weird and surreal’, so the element of having an out-of-body experience is something to study further in future – particularly because psychosis itself is defined by a distorted perception of reality.
“This research explored the feasibility and potential added value of using virtual environments as part of CBT. The next stage would be to carry out a randomised, controlled comparison of CBT with and without the virtual environment system to test whether using the system as a therapy tool leads to greater or quicker symptom improvement.”
Mr Strickland added: “I hope our technology can help make a difference to the lives of people experiencing social anxiety and other specific anxiety conditions for which controlled exposure to feared situations is part of therapy. It is particularly versatile because it doesn’t need technical expertise to set up and use. And the library of scenarios can be built on to capture different types of exposure environments needed in day-to-day clinical practice.”
‘Virtual Environments Using Video Capture for Social Phobia with Psychosis’ is published by the journal Cyberpsychology, Behaviour and Social Networking.
Trying to Learn a Foreign Language? Avoid Reminders of Home
Something odd happened when Shu Zhang was giving a presentation to her classmates at the Columbia Business School in New York City. Zhang, a Chinese native, spoke fluent English, yet in the middle of her talk, she glanced over at her Chinese professor and suddenly blurted out a word in Mandarin. “I meant to say a transition word like ‘however,’ but used the Chinese version instead,” she says. “It really shocked me.”
Shortly afterward, Zhang teamed up with Columbia social psychologist Michael Morris and colleagues to figure out what had happened. In a new study, they show that reminders of one’s homeland can hinder the ability to speak a new language. The findings could help explain why cultural immersion is the most effective way to learn a foreign tongue and why immigrants who settle within an ethnic enclave acculturate more slowly than those who surround themselves with friends from their new country.
Previous studies have shown that cultural icons such as landmarks and celebrities act like “magnets of meaning,” instantly activating a web of cultural associations in the mind and influencing our judgments and behavior, Morris says. In an earlier study, for example, he asked Chinese Americans to explain what was happening in a photograph of several fish, in which one fish swam slightly ahead of the others. Subjects first shown Chinese symbols, such as the Great Wall or a dragon, interpreted the fish as being chased. But individuals primed with American images of Marilyn Monroe or Superman, in contrast, tended to interpret the outlying fish as leading the others. This internally driven motivation is more typical of individualistic American values, some social psychologists say, whereas the more externally driven explanation of being pursued is more typical of Chinese culture.
To determine whether these cultural icons can also interfere with speaking a second language, Zhang, Morris, and their colleagues recruited male and female Chinese students who had lived in the United States for less than a year and had them sit opposite a computer monitor that displayed the face of either a Chinese or Caucasian male called “Michael Lee.” As microphones recorded their speech, the volunteers conversed with Lee, who spoke to them in English with an American accent about campus life.
Next, the team compared the fluency of the volunteers’ speech when they were talking to a Chinese versus a Caucasian face. Although participants reported a more positive experience chatting with the Chinese version of “Michael Lee,” they were significantly less fluent, producing 11% fewer words per minute on average, the authors report online today in the Proceedings of the National Academy of Sciences. “It’s ironic” that the more comfortable volunteers were with their conversational partner, the less fluent they became, Zhang says. “That’s something we did not expect.”
To rule out the possibility that the volunteers were deliberately speaking more fluently to the Caucasian face, thus explaining the performance gap, Zhang and colleagues asked the participants to invent a story, such as one about a boy swimming in the ocean, while being exposed to either Chinese or American icons rather than faces. Seeing Chinese icons such as the Great Wall also interfered with the volunteers’ English fluency, causing a 16% drop in words produced per minute. The icons also made the volunteers 85% more likely to use a literal translation of the Chinese word for an object rather than the English term, Zhang says. Rather than saying “pistachio,” for example, volunteers used the Chinese version, “happy nuts.”
Understanding how these subtle cultural cues affect language fluency could help employers design better job interviews, Morris says. For example, taking a Japanese job candidate out for sushi, although a well-meaning gesture, might not be the best way to help them shine.
"It’s quite striking that these effects were so robust," says Mary Helen Immordino-Yang, a developmental psychologist at the University of Southern California in Los Angeles. They show that "we’re exquisitely attuned to cultural context," she says, and that "even subtle cues like the ethnicity of the person we’re talking to" can affect language processing. The take-home message? "If one wants to acculturate rapidly, don’t move to an ethnic enclave neighborhood where you’ll be surrounded by people like yourself," Morris says. Sometimes, a familiar face is the last thing you need to see.
Online games offer trove of brain data
Study of 35 million users of brain-training software finds alcohol and sleep linked to cognitive performance.
By trawling through data from 35 million users of online ‘brain-training’ tools, researchers have conducted a survey of what they say is the world’s largest data set of human cognitive performance. Their preliminary results show that drinking moderately correlates with better cognitive performance and that sleeping too little or too much has a negative association.
The study, published this week in Frontiers in Human Neuroscience, analysed user data from Lumosity, a collection of web-based games made by Lumos Labs, based in San Francisco, California. Researchers at Lumos conducted the study in collaboration with scientists at two US universities as part of the Human Cognition Project, which the authors describe as “a collaborative research effort to describe the human mind”.
The authors examined results from more than 600 million completed tasks — which measured players’ speed, memory capacity and cognitive flexibility — to get a snapshot of how lifestyle factors can affect cognition and how learning ability changes with age.
Users who enjoyed one or two alcoholic drinks a day tended to perform better on cognitive tasks than teetotallers and heavier drinkers, whose scores dropped as the number of daily drinks increased. The optimal sleep time was seven hours, with performance worsening for every hour of sleep lost or added.
The study authors also looked at performance over time for users who returned to the same brain-training tasks at least 25 times. Performance decreased with age, but the ability to learn new tasks that relied on ‘crystallized knowledge’ (such as vocabulary) did not decline as quickly as it did for those that measured ‘fluid intelligence’ (such as the ability to memorize new sets of information).
Daniel Sternberg, a data scientist at Lumos who led the study, and his colleagues say that their study sample is much broader than those of most psychological studies, which tend to draw from pools of university students.
Buzzwords and biased samples?
But Frederick Unverzagt, a neuropsychologist at Indiana University in Indianapolis who has studied other cognitive-training tools, such as courses in verbal reasoning or processing speed for patients with dementia, says that the sample in this study is also biased. The users of brain-training tools are younger than typical dementia patients, most of them live in the United States or Europe and, most importantly, they are likely to already be interested in cognitive-training tasks. And although Lumosity has a pool of 35 million users, when the researchers looked at changes in performance over time, they focused on groups of about 22,000 people.
“From a trials perspective, this is very selective,” says Fred Wolinsky, a public-health researcher at the University of Iowa in Iowa City, who has also studied the efficacy of brain-training techniques. “The lower performance scores they saw in older individuals,” he says, “could be attributable to the fact that the older adults were the ones who stuck with it for a long time because they were the ones who needed the training the most.”
And the findings are not controversial or particularly surprising. “But what is interesting and important is this idea that we can have a new paradigm for doing this kind of research: looking at large data sets in order to look at many different kinds of people, to tease out the demographic and lifestyle factors that influence cognition,” says Sternberg. “There are many other interesting questions that other researchers could answer by using this data set — this is just the tip of the iceberg.”
Time perception altered by mindfulness meditation
New published research from psychologists at the universities of Kent and Witten/Herdecke has shown that mindfulness meditation has the ability to temporarily alter practitioners’ perceptions of time – a finding that has wider implications for the use of mindfulness both as an everyday practice, and in clinical treatments and interventions.
Led by Dr Robin Kramer from Kent’s School of Psychology, the research team hypothesised that, given mindfulness’ emphasis on moment-to-moment awareness, mindfulness meditation would slow down time and produce the feeling that short periods of time lasted longer.
To test this hypothesis, they used a temporal bisection task, which allows researchers to measure where each individual subjectively splits a period of time in half. Participants completed the task twice, once before and once after a listening task. Participants were divided into two groups: one listened for ten minutes to an audiobook, the other to a meditation exercise designed to focus their attention on the movement of breath in the body. The results showed that the control (audiobook) group’s responses did not change after the listening task. Meditation, however, led to a relative overestimation of durations; that is, time periods felt longer than they had before.
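The analysis step of a temporal bisection task can be sketched as follows. This is a minimal illustration, not the authors’ actual analysis: the function name and the sample durations are hypothetical, and the subjective midpoint is estimated by simple linear interpolation of the proportion of “long” responses rather than by fitting a full psychometric function.

```python
def bisection_point(durations, p_long):
    """Estimate the duration (ms) at which P('long' response) crosses 0.5.

    durations: probe durations in ascending order
    p_long: proportion of 'long' responses at each duration
    Returns the interpolated crossing point, or None if 0.5 is never crossed.
    """
    pairs = list(zip(durations, p_long))
    for (d0, p0), (d1, p1) in zip(pairs, pairs[1:]):
        if p0 <= 0.5 <= p1:
            # Linear interpolation between the two bracketing points
            return d0 + (0.5 - p0) * (d1 - d0) / (p1 - p0)
    return None
```

An overestimation of durations, as reported after meditation, would show up as a lower bisection point: shorter probes already feel “long”, so the 0.5 crossing shifts toward shorter durations.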
Dr Kramer and his team interpret this as the result of attentional changes: either improved attentional resources that allow increased attention to the processing of time, or a shift to internally oriented attention that would have the same effect.
Dr Kramer said: ‘Our findings represent some of the first to demonstrate how mindfulness meditation can alter the perception of time. Given the increasing popularity of mindfulness in everyday practice, its relationship with time perception may provide an important step in our understanding of this pervasive, ancient practice in our modern world.’
Dr Kramer also explained that the benefits of mindfulness and mindfulness-based therapies in a variety of domains are now being identified. These include decreases in rumination, improvements in cognitive flexibility, working memory capacity and sustained attention, and reductions in reactivity, anxiety and depressive symptoms. Mindfulness-based treatments also appear to provide broad antidepressant and antianxiety effects, as well as decreases in general psychological distress. As such, these interventions have been applied with a variety of patients, including those suffering from fibromyalgia, psoriasis, cancer, binge eating and chronic pain.
Dr Dinkar Sharma, Senior Lecturer in Psychology at Kent, commented: ‘Demonstrating that mindfulness has an effect on time perception is important because it opens up the opportunity that mindfulness could be used to alter psychological disorders that are associated with a range of distortions in the perception of time - such as disorders of memory, emotion and addiction.’
Dr Ulrich Weger, of Witten/Herdecke’s Department of Psychology and Psychotherapy, concluded by stating that ‘the impact of a brief mindfulness exercise on elementary processes such as time perception is remarkable’.
No matter how we jump, roll, sit, or lie down, our brain manages to maintain a visual representation of the world that stays upright relative to the pull of gravity. But a new study of rider experiences on the Hong Kong Peak Tram, a popular tourist attraction, shows that specific features of the environment can dominate our perception of verticality, making skyscrapers appear to fall.

The study is published in Psychological Science, a journal of the Association for Psychological Science.
The Hong Kong Peak Tram to Victoria Peak is a popular way to survey the Hong Kong skyline and millions of people ride the tram every year.
“On one trip, I noticed that the city’s skyscrapers next to the tram started to appear very tilted, as if they were falling, which anyone with common sense knows is impossible,” says lead researcher Chia-huei Tseng of the University of Hong Kong. “The gasps of the other passengers told me I wasn’t the only one seeing it.”
The illusion was perplexing because, in contrast with most illusions studied in the laboratory, observers have complete access to visual cues from the outside world through the tram’s open windows.
Exploring the illusion under various conditions, Tseng and colleagues found that the perceived, or illusory, tilt was greatest on night-time rides, perhaps a result of the relative absence of visual-orientation cues or a heightened sense of enclosure at night. Enhancing the tilted frame of reference within the tram car — indicated by features like oblique window frames, beams, floor, and lighting fixtures — makes the true vertical of the high rises seem to tilt in the opposite direction.
The illusion was significantly reduced by obscuring the window frame and other reference cues inside the tram car, by using wedges to adjust observers’ position, and by having them stand during the tram ride.
But no single modification was sufficient to eliminate the illusion.
“Our findings demonstrate that signals from all the senses must be consonant with each other to abolish the tilt illusion,” the researchers write. “On the tram, it seems that vision dominates verticality perception over other sensory modalities that also mediate earth gravity, such as the vestibular and tactile systems.”
The robustness of the tram illusion took the researchers by surprise:
“We took the same tram up and down for hundreds of trips, and the illusion did not reduce a bit,” says Tseng. “This suggests that our experiences and our learned knowledge about the world — that buildings should be vertical — are not enough to cancel our brain’s wrong conclusion.”
People can plan strategic movements to several different targets at the same time, even when they see far fewer targets than are actually present, according to a new study published in Psychological Science, a journal of the Association for Psychological Science.

A team of researchers at the Brain and Mind Institute at the University of Western Ontario took advantage of a pictorial illusion — known as the “connectedness illusion” — that causes people to underestimate the number of targets they see.
When people act on these targets, however, they can rapidly plan accurate and strategic reaches that reflect the actual number of targets.
Using sophisticated statistical techniques to analyze participants’ responses to multiple potential targets, the researchers found that participants’ reaches to the targets were unaffected by the presence of the connecting lines.
Thus, the “connectedness illusion” seemed to influence the number of targets they perceived but did not impact their ability to plan actions related to the targets.
These findings indicate that the processes in the brain that plan visually guided actions are distinct from those that allow us to perceive the world.
“The design of the experiments allowed us to separate these two processes, even though they normally unfold at the same time,” explained lead researcher Jennifer Milne, a PhD student at the University of Western Ontario.
“It’s as though we have a semi-autonomous robot in our brain that plans and executes actions on our behalf with only the broadest of instructions from us!”
According to Mel Goodale, professor at the University of Western Ontario and senior author on the paper, these findings “not only reveal just how sophisticated the visuomotor systems in the brain are, but could also have important implications for the design and implementation of robotic systems and efficient human-machine interfaces.”
One in four people who survive a stroke or transient ischemic attack (TIA) suffer from symptoms of post-traumatic stress disorder (PTSD) within the first year post-event, and one in nine experience chronic PTSD more than a year later. The data suggest that each year nearly 300,000 stroke/TIA survivors will develop PTSD symptoms as a result of their health scare. The study, led by Columbia University Medical Center researchers, was published today in the online edition of PLOS ONE.

“This work builds on recent findings of ours that PTSD is common among heart attack survivors and that it contributes to a doubled risk of a future cardiac event or of dying within one to three years. Our current results show that PTSD in stroke and TIA survivors may increase their risk for recurrent stroke and other cardiovascular events,” said first author Donald Edmondson, PhD, MPH, assistant professor of behavioral medicine (Center for Behavioral Cardiovascular Health) at CUMC. “Given that each event is life-threatening and that strokes/TIAs add hundreds of millions of dollars to annual health expenditures, these findings are important to both the long-term survival and health costs of these patient populations.”
“PTSD is not just a disorder of combat veterans and sexual assault survivors, but strongly affects survivors of stroke and other potentially traumatic acute cardiovascular events as well,” said Ian M. Kronish, MD, MPH, assistant professor of medicine (Center for Behavioral Cardiovascular Health) and the study’s senior author. “Surviving a life-threatening health scare can have a debilitating psychological impact, and health care providers should make it a priority to screen for symptoms of depression, anxiety, and PTSD among these patient populations.”
Stroke is the fourth-leading cause of death and the top cause of disability in the United States. According to data from the American Stroke Association, nearly 795,000 Americans each year suffer a new or recurrent stroke, and up to an additional 500,000 suffer a TIA.
PTSD is an anxiety disorder triggered by exposure to a traumatic event. Common symptoms include nightmares, avoidance of reminders of the event, and elevated heart rate and blood pressure. Chronic PTSD is defined (per the DSM-IV) as these symptoms persisting for three months or longer.
Since only a few studies have assessed PTSD due to stroke, Drs. Edmondson and Kronish and their colleagues performed the first meta-analysis of clinical studies of stroke- or TIA-induced PTSD. The nine studies in the meta-analysis included a total of 1,138 stroke or TIA survivors.
The study found that 23 percent, or roughly one in four, of the patients developed PTSD symptoms within the first year after their stroke or TIA, with 11 percent, or roughly one in nine, experiencing chronic PTSD more than a year later.
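The headline figures above come from pooling prevalence estimates across the nine studies. A greatly simplified version of such pooling is an inverse-variance fixed-effect average of the raw proportions, sketched below. This is an illustration only: the function name and example numbers are hypothetical, and real meta-analyses (including, presumably, this one) typically transform proportions (e.g. logit or Freeman-Tukey) and may use random-effects weighting.

```python
def pooled_proportion(events, totals):
    """Fixed-effect (inverse-variance) pooled proportion across studies.

    events: number of cases in each study
    totals: sample size of each study
    Assumes 0 < events < totals for every study (binomial variance > 0).
    """
    weights, props = [], []
    for e, n in zip(events, totals):
        p = e / n
        var = p * (1 - p) / n      # binomial variance of the raw proportion
        weights.append(1.0 / var)  # larger, more precise studies weigh more
        props.append(p)
    return sum(w * p for w, p in zip(weights, props)) / sum(weights)
```

With a single study the function simply returns that study’s proportion; with several, the estimate is pulled toward the larger and more precise studies.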
“PTSD and other psychological disorders in stroke and TIA patients appear to be an under-recognized and undertreated problem,” said Dr. Kronish.
“Fortunately, there are good treatments for PTSD,” said Dr. Edmondson. “But first, physicians and patients have to be aware that this is a problem. Family members can also help. We know that social support is a good protective factor against PTSD due to any type of traumatic event.”
“The next step is further research to assess whether mental health treatment can reduce stroke- and TIA-induced PTSD symptoms and help these patients regain a feeling of normalcy and calm as soon as possible after their health scare,” said Dr. Edmondson.
(Source: newsroom.cumc.columbia.edu)

Researchers Identify Emotions Based on Brain Activity
For the first time, scientists at Carnegie Mellon University have identified which emotion a person is experiencing based on brain activity.
The study, published in the June 19 issue of PLOS ONE, combines functional magnetic resonance imaging (fMRI) and machine learning to measure brain signals and accurately read emotions in individuals. Led by researchers in CMU’s Dietrich College of Humanities and Social Sciences, the findings illustrate how the brain categorizes feelings, giving researchers the first reliable process to analyze emotions. Until now, research on emotions had long been stymied by the lack of reliable methods to evaluate them, mostly because people are often reluctant to report their feelings honestly. Further complicating matters, many emotional responses may not be consciously experienced.
Identifying emotions based on neural activity builds on previous discoveries by CMU’s Marcel Just and Tom M. Mitchell, which used similar techniques to create a computational model that identifies individuals’ thoughts of concrete objects, often dubbed “mind reading.”
“This research introduces a new method with potential to identify emotions without relying on people’s ability to self-report,” said Karim Kassam, assistant professor of social and decision sciences and lead author of the study. “It could be used to assess an individual’s emotional response to almost any kind of stimulus, for example, a flag, a brand name or a political candidate.”
One challenge for the research team was to find a way to repeatedly and reliably evoke different emotional states in the participants. Traditional approaches, such as showing subjects emotion-inducing film clips, would likely have been unsuccessful because the impact of a film clip diminishes with repeated viewing. The researchers solved the problem by recruiting actors from CMU’s School of Drama.
“Our big breakthrough was my colleague Karim Kassam’s idea of testing actors, who are experienced at cycling through emotional states. We were fortunate, in that respect, that CMU has a superb drama school,” said George Loewenstein, the Herbert A. Simon University Professor of Economics and Psychology.
For the study, 10 actors were scanned at CMU’s Scientific Imaging & Brain Research Center while viewing the names of nine emotions: anger, disgust, envy, fear, happiness, lust, pride, sadness and shame. While inside the fMRI scanner, the actors were instructed to enter each of these emotional states multiple times, in random order.
Another challenge was to ensure that the technique was measuring emotions per se, and not the act of trying to induce an emotion in oneself. To address this, a second phase of the study presented participants with neutral and disgusting photos that they had not seen before. The computer model, built by statistically analyzing the fMRI activation patterns gathered for 18 emotional words, had learned the emotion patterns from self-induced emotions. It was nonetheless able to correctly identify the emotional content of the photos being viewed from the brain activity of the viewers.
To identify emotions within the brain, the researchers first used the participants’ neural activation patterns in early scans to identify the emotions experienced by the same participants in later scans. The computer model achieved a rank accuracy of 0.84. Rank accuracy refers to the percentile rank of the correct emotion in an ordered list of the computer model guesses; random guessing would result in a rank accuracy of 0.50.
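As the article defines it, rank accuracy is the percentile rank of the correct emotion in the model’s ordered list of guesses. That definition is easy to make concrete; the sketch below is a hypothetical illustration (the function name and scores are invented, not from the study), scaled so that a top guess scores 1.0, a last-place guess 0.0, and random guessing averages 0.50.

```python
def rank_accuracy(scores, correct_label):
    """Percentile rank of the correct label among the model's guesses.

    scores: dict mapping each candidate label to the model's score
    Returns 1.0 if the correct label is the top guess, 0.0 if ranked
    last; random guessing gives an expected value of 0.50.
    """
    ranked = sorted(scores, key=scores.get, reverse=True)  # best first
    pos = ranked.index(correct_label)                      # 0 = top guess
    n = len(ranked)
    return (n - 1 - pos) / (n - 1)
```

With the study’s nine candidate emotions, a rank accuracy of 0.84 therefore means the correct emotion sat, on average, just below the top of the model’s ranked list.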
Next, the team took the machine learning analysis of the self-induced emotions to guess which emotion the subjects were experiencing when they were exposed to the disgusting photographs. The computer model achieved a rank accuracy of 0.91. With nine emotions to choose from, the model listed disgust as the most likely emotion 60 percent of the time and as one of its top two guesses 80 percent of the time.
Finally, they applied machine learning analysis of neural activation patterns from all but one of the participants to predict the emotions experienced by the hold-out participant. This answers an important question: If we took a new individual, put them in the scanner and exposed them to an emotional stimulus, how accurately could we identify their emotional reaction? Here, the model achieved a rank accuracy of 0.71, once again well above the chance guessing level of 0.50.
“Despite manifest differences between people’s psychology, different people tend to neurally encode emotions in remarkably similar ways,” noted Amanda Markey, a graduate student in the Department of Social and Decision Sciences.
A surprising finding from the research was that almost equivalent accuracy levels could be achieved even when the computer model made use of activation patterns in only one of a number of different subsections of the human brain.
“This suggests that emotion signatures aren’t limited to specific brain regions, such as the amygdala, but produce characteristic patterns throughout a number of brain regions,” said Vladimir Cherkassky, senior research programmer in the Psychology Department.
The research team also found that while on average the model ranked the correct emotion highest among its guesses, it was best at identifying happiness and least accurate in identifying envy. It rarely confused positive and negative emotions, suggesting that these have distinct neural signatures. And, it was least likely to misidentify lust as any other emotion, suggesting that lust produces a pattern of neural activity that is distinct from all other emotional experiences.
Just, the D.O. Hebb University Professor of Psychology, director of the university’s Center for Cognitive Brain Imaging and a leading neuroscientist, explained, “We found that three main organizing factors underpinned the emotion neural signatures: the positive or negative valence of the emotion, its intensity (mild or strong), and its sociality (involvement or non-involvement of another person). This is how emotions are organized in the brain.”
In the future, the researchers plan to apply this new identification method to a number of challenging problems in emotion research, including identifying emotions that individuals are actively attempting to suppress and multiple emotions experienced simultaneously, such as the combination of joy and envy one might experience upon hearing about a friend’s good fortune.
IQ link to baby’s weight gain in first month
New research from the University of Adelaide shows that weight gain and increased head size in the first month of a baby’s life are linked to a higher IQ at early school age.
The study was led by University of Adelaide Public Health researchers, who analysed data from more than 13,800 children who were born full-term.
The results, published today in the international journal Pediatrics, show that babies who put on 40% of their birthweight in the first four weeks had an IQ 1.5 points higher by the time they were six years of age, compared with babies who only put on 15% of their birthweight.
Those with the biggest growth in head circumference also had the highest IQs.
"Head circumference is an indicator of brain volume, so a greater increase in head circumference in a newborn baby suggests more rapid brain growth," says the lead author of the study, Dr Lisa Smithers from the University of Adelaide’s School of Population Health.
"Overall, newborn children who grew faster in the first four weeks had higher IQ scores later in life," she says.
"Those children who gained the most weight scored especially high on verbal IQ at age 6. This may be because the neural structures for verbal IQ develop earlier in life, which means the rapid weight gain during that neonatal period could be having a direct cognitive benefit for the child."
Previous studies have shown the association between early postnatal diet and IQ, but this is the first study of its kind to focus on the IQ benefits of rapid weight gain in the first month of life for healthy newborn babies.
Dr Smithers says the study further highlights the need for successful feeding of newborn babies.
"We know that many mothers have difficulty establishing breastfeeding in the first weeks of their baby’s life," Dr Smithers says.
"The findings of our study suggest that if infants are having feeding problems, there needs to be early intervention in the management of that feeding."

It’s the way you tell ’em: Study discovers how the brain controls accents and impersonations
A study, led by Royal Holloway University researcher Carolyn McGettigan, has identified the brain regions and interactions involved in impersonations and accents.
Using an fMRI scanner, the team asked participants, all non-professional impressionists, to repeatedly recite the opening lines of a familiar nursery rhyme either with their normal voice, by impersonating individuals, or by impersonating regional and foreign accents of English.
They found that when a voice is deliberately changed, it brings the left anterior insula and inferior frontal gyrus (LIFG) of the brain into play. The researchers also discovered that when comparing impersonations against accents, areas in the posterior superior temporal/inferior parietal cortex and in the right middle/anterior superior temporal sulcus showed greater responses.
“The voice is a powerful channel for the expression of our identity – it conveys information such as gender, age and place of birth, but crucially, it also expresses who we want to be,” said lead author Carolyn McGettigan from the Department of Psychology at Royal Holloway.
“Consider the difference between talking to a friend on the phone, talking to a police officer who’s cautioning you for a parking violation, or speaking to a young infant. While the words we use might be different across these settings, another dramatic difference is the tone and style with which we deliver the words we say. We wanted to find out more about this process and how the brain controls it.”
While past work has found that listening to voices activates regions of the temporal lobe of the brain, no research had explored the brain regions involved in controlling vocal identity before this study.
“Our aim is to find out more about how the brain controls this very flexible communicative tool, which could potentially lead to new treatments for those looking to recover their own vocal identity following brain injury or a stroke,” said Carolyn.