Posts tagged psychology

Musicians who learn a new melody demonstrate enhanced skill after a night’s sleep
A new study that examined how the brain learns and retains motor skills provides insight into musical skill.
Performance of a musical task improved among pianists whose practice of a new melody was followed by a night of sleep, says researcher Sarah E. Allen, Southern Methodist University, Dallas.
The study is among the first to look at whether sleep enhances the learning process for musicians practicing a new piano melody.
The study found, however, that when two similar melodies were practiced one after the other, followed by sleep, any gains in speed and accuracy achieved during practice diminished overnight, said Allen, an assistant professor of music education in SMU’s Meadows School of the Arts.
“The goal is to understand how the brain decides what to keep, what to discard, what to enhance, because our brains are receiving such a rich data stream and we don’t have room for everything,” Allen said. “I was fascinated to study this because as musicians we practice melodies in juxtaposition with one another all the time.”
Surprisingly, in a third result the study found that when two similar musical pieces were practiced one after the other, followed by practice of the first melody again, a night’s sleep enhanced pianists’ skills on the first melody, she said.
“The really unexpected result that I found was that for those subjects who learned the two melodies, if before they left practice they played the first melody again, it seemed to reactivate that memory so that they did improve overnight. Replaying it seemed to counteract the interference of learning a second melody.”
The study adds to a body of research in recent decades that has found the brain keeps processing the learning of a new motor skill even after active training has stopped. That’s also the case during sleep.
The findings may in the future guide the teaching of music, Allen said.
“In any task we want to maximize our time and our effort. This research can ultimately help us practice in an advantageous way and teach in an advantageous way,” Allen said. “There could be pedagogical benefits for the order in which you practice things, but it’s really too early to say. We want to research this further.”
The study, “Memory stabilization and enhancement following music practice,” will be published in the journal Psychology of Music.
New study builds on earlier brain research in rats and humans
Researchers in the field of procedural memory consolidation have systematically examined the process in both rats and humans.
Studies have found that after practice of a motor skill, such as running a maze or completing a handwriting task, the areas of the brain activated during practice continue to be active for about four to six hours afterward. Activation occurs whether a subject is, for example, eating, resting, shopping or watching TV, Allen said.
Also, researchers have found that the area of the brain activated during practice of the skill is activated again during sleep, she said, essentially recalling the skill and enhancing and reinforcing it. For motor skills such as finger-tapping a sequence, research found that performance tends to be 10 percent to 13 percent more efficient after sleep, with fewer errors.
“There are two phases of memory consolidation. We refer to the four to six hours after training as stabilization. We refer to the phase during sleep as enhancement,” Allen said. “We know that sleep seems to play a very important role. It makes memories a more permanent, less fragile part of the brain.”
Allen’s finding with musicians that practicing a second melody interfered with retaining the first melody is consistent with a growing number of similar research studies that have found learning a second motor skill task interferes with enhancement of the first task.
Impact of sleep on learning for musicians
For Allen’s study, 60 undergraduate and graduate music majors participated in the research.
Divided into four groups, each musician practiced either one or both melodies during evening sessions, then returned the next day after sleep to be tested on their performance of the target melody.
The subjects learned the melodies on a Roland digital piano, practicing with their left hand during twelve 30-second practice blocks separated by 30-second rest intervals. Software written for the experiment made it possible to digitally record musical instrument data from the performances. The number of correct key presses per 30-second block reflected speed and accuracy.
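The speed-and-accuracy metric described above can be sketched as a simple scoring function. This is a hypothetical illustration only: the study's actual software, melody, and error-handling rules are not described in detail, so the target notes and the restart-on-error rule below are assumptions.

```python
# Hypothetical sketch of the study's scoring metric: count key presses
# that correctly continue the target melody within one 30-second block.

TARGET_MELODY = [60, 62, 64, 65, 67]  # example MIDI note numbers (assumed)

def score_block(key_presses, target=TARGET_MELODY):
    """Count presses that continue a correct run through the target melody.

    `key_presses` is the sequence of MIDI notes played in one block; the
    melody is cycled through repeatedly, and each press matching the next
    expected note counts as correct. A wrong note restarts the melody
    (a simplifying assumption).
    """
    correct = 0
    pos = 0  # next expected index in the target melody
    for note in key_presses:
        if note == target[pos]:
            correct += 1
            pos = (pos + 1) % len(target)
        else:
            pos = 0
    return correct

# One simulated block: two clean passes through the melody, then a wrong note.
block = [60, 62, 64, 65, 67, 60, 62, 64, 65, 67, 61]
print(score_block(block))  # 10 correct presses
```

Summing this score per block, and comparing evening-practice blocks with the next morning's test blocks, yields the kind of overnight gain (or loss) the study reports.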
Musicians who learned a single melody showed performance gains on the test the next day.
Those who learned a second melody immediately after learning the target melody didn’t get any overnight enhancement in the first melody.
Those who learned two melodies, but practiced the first one again before going home to sleep, showed overnight enhancement when tested on the first melody.
“This was the most surprising finding, and perhaps the most important,” Allen reported in the Psychology of Music. “The brief test of melody A following the learning of melody B at the end of the evening training session seems to have reactivated the memory of melody A in a way that inhibited the interfering effects of learning melody B that were observed in the AB-sleep-A group.”— Margaret Allen
Children of Blind Mothers Learn New Modes of Communication
A loving gaze helps firm up the bond between parent and child, building social skills that last a lifetime. But what happens when mom is blind? A new study shows that the children of sightless mothers develop healthy communication skills and can even outstrip the children of parents with normal vision.
Eye contact is one of the most important aspects of communication, according to Atsushi Senju, a developmental cognitive neuroscientist at Birkbeck, University of London. Autistic people don’t naturally make eye contact, however, and they can become anxious when urged to do so. Children for whom face-to-face contact is drastically reduced—babies severely neglected in orphanages or children who are born blind—are more likely to have traits of autism, such as the inability to form attachments, hyperactivity, and cognitive impairment.
To determine whether eye contact is essential for developing normal communication skills, Senju and colleagues chose a less extreme example: babies whose primary caregivers (their mothers) were blind. These children had other forms of loving interaction, such as touching and talking. But the mothers were unable to follow the babies’ gaze or teach the babies to follow theirs, which normally helps children learn the importance of the eyes in communication.
Apparently, the children don’t need the help. Senju and colleagues studied five babies born to blind mothers, checking the children’s proficiency at 6 to 10 months, 12 to 15 months, and 24 to 47 months on several measures of age-appropriate communication skills. At the first two visits, babies watched videos in which a woman shifted her gaze or moved different parts of her face while corresponding changes in the baby’s face were recorded. Babies also followed the gaze of a woman sitting at a table and looking at various objects.
The babies also played with unfamiliar adults in a test that checked for autistic traits, such as the inability to maintain eye contact, not smiling in response to the adult’s smile, and being unable to switch attention from one toy to a new one. At each age, the researchers assessed the children’s visual, motor, and language skills.
When the results were compared to scores of children of sighted parents, the five children of blind mothers did just as well on the tests, the researchers report today in the Proceedings of the Royal Society B. Learning to communicate with their blind mothers also seemed to give the babies some advantages. For example, even at the youngest age tested, the babies directed fewer gazes toward their mothers than to adults with normal vision, suggesting that they were already learning that strangers would communicate differently than would their mothers. When they were between 12 and 15 months old, the babies of blind mothers were also more verbal than were other children of the same age. And the youngest babies of blind mothers outscored their peers in developmental tests—especially visual tasks such as remembering the location of a hidden toy or switching their attention from one toy to a new one presented by the experimenter.
Senju likens their skills to those of children who grow up bilingual; the need to shift between modes of communication may boost the development of their social skills, he says. “Our results suggest that the babies aren’t passively copying the expressions of adults, but that they are actively learning and changing the way to best communicate with others.”
"The use of sighted babies of blind mothers is a clever and important idea," says developmental scientist Andrew Meltzoff of the University of Washington’s Institute for Learning and Brain Sciences in Seattle. "The mother’s blindness may teach a child at an early age that certain people turn to look at things and others don’t. Apparently these little babies can learn that not everyone reacts the same way."
Meltzoff adds that there are many ways to pay attention to a child. “Doubtless, the blind mothers use touch, sounds, tugs on the arm, and tender pats on the back. Our babies want communication, love, and attention. The fact that these can come through any route is a remarkable demonstration of the adaptability of the human child.”
Why do some memories last a lifetime while others disappear quickly?

(Image: Tim Vernon, LTH NHS TRUST/SCIENCE PHOTO LIBRARY)
A new study suggests that memories rehearsed during either sleep or waking can affect memory consolidation and what is remembered later.
The new Northwestern University study shows that when the information that makes up a memory has a high value (associated with, for example, making more money), the memory is more likely to be rehearsed and consolidated during sleep and, thus, be remembered later.
Also, through the use of a direct manipulation of sleep, the research demonstrated a way to encourage the reactivation of low-value memories so they too were remembered later.
Delphine Oudiette, a postdoctoral fellow in the department of psychology at Northwestern and lead author of the study, designed the experiment to study how participants remembered locations of objects on a computer screen. A value assigned to each object informed participants how much money they could make if they remembered it later on the test.
"The pay-off was much higher for some of the objects than for others," explained Ken Paller, professor of psychology at Northwestern and co-author of the study. "In other words, we manipulated the value of the memories — some were valuable memories and others not so much, just as the things we experience each day vary in the extent to which we’d like to be able to remember them later."
When each object was shown, it was accompanied by a characteristic sound. For example, a tea kettle would appear with a whistling sound. During both states of wakefulness and sleep, some of the sounds were played alone, quite softly, essentially reminding participants of the low-value items.
Participants remembered the low-value associations better when the sound presentations occurred during sleep.
"We think that what’s happening during sleep is basically the reactivation of that information," Oudiette said. "We can provoke the reactivation by presenting those sounds, therefore energizing the low-value memories so they get stored better."
“The research poses provocative implications about the role memory reactivation during sleep could play in improving memory storage,” said Paller, director of the Cognitive Neuroscience Program at Northwestern. “Whatever makes you rehearse during sleep is going to determine what you remember later, and conversely, what you’re going to forget.”
Many memories that are stored during the day are not remembered.
"We think one of the reasons for that is that we have to rehearse memories in order to keep them. When you practice and rehearse, you increase the likelihood of later remembering," Oudiette said. "And a lot of our rehearsal happens when we don’t even realize it — while we’re asleep."
Paller said selectivity of memory consolidation is not well understood. Most efforts in memory research have focused on what happens when you first form a memory and on what happens when you retrieve a memory.
"The in-between time is what we want to learn more about, because a fascinating aspect of memory storage is that it is not static," Paller said. "Memories in our brain are changing all of the time. Sometimes you improve memory storage by rehearsing all the details, so maybe later you remember better — or maybe worse if you’ve embellished too much.
"The fact that this critical memory reactivation transpires during sleep has mostly been hidden from us, from humanity, because we don’t realize so much of what’s happening while we’re asleep," he said.
(Source: eurekalert.org)
People often think that other people are staring at them even when they aren’t, vision scientists have found.
In a new article in Current Biology, researchers at The Vision Centre reveal that, when in doubt, the human brain is more likely to tell its owner that they’re under the gaze of another person.
“Gaze perception – the ability to tell what a person is looking at – is a social cue that people often take for granted,” says Professor Colin Clifford of The Vision Centre and The University of Sydney.
“Judging if others are looking at us may come naturally, but it’s actually not that simple – our brains have to do a lot of work behind the scenes.”
To tell if they’re under someone’s gaze, people look at the position of the other person’s eyes and the direction of their head, Prof. Clifford explains. These visual cues are then sent to the brain, where specific areas compute this information.
However, the brain doesn’t just passively receive information from the eyes, Prof. Clifford says. The new study shows that when people have limited visual cues, such as in dark conditions or when the other person is wearing sunglasses, the brain takes over with what it ‘knows’.
In their study, the Vision Centre researchers created images of faces and asked people to observe where the faces were looking.
“We made it difficult for the observers to see where the eyes were pointed so they would have to rely on their prior knowledge to judge the faces’ direction of gaze,” Prof. Clifford explains. “It turns out that we’re hard-wired to believe that others are staring at us, especially when we’re uncertain.
“So gaze perception doesn’t only involve visual cues – our brains generate assumptions from our experiences and match them with what we see at a particular moment.”
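The way assumptions get combined with uncertain visual input can be sketched as a toy Bayesian model. Everything below is an illustration, not the study's actual model: the Gaussian form, the prior width, and the numbers are all assumptions. The idea is that a noisy sensory estimate of gaze angle is averaged with a prior centered on direct gaze (0 degrees), so the worse the viewing conditions, the more judgments get pulled toward "looking at me".

```python
def posterior_gaze(sensory_angle, sensory_sd, prior_sd=10.0):
    """Combine a noisy gaze measurement with a prior centered on 0 degrees
    (direct gaze), using the standard Gaussian conjugate update.
    Returns the posterior mean estimate of gaze direction."""
    w = prior_sd**2 / (prior_sd**2 + sensory_sd**2)  # weight on the sensory data
    return w * sensory_angle  # the prior mean is 0, so its term drops out

# The face is actually looking 20 degrees to the side.
print(posterior_gaze(20.0, sensory_sd=2.0))   # reliable cues: ~19.2 deg, close to truth
print(posterior_gaze(20.0, sensory_sd=20.0))  # poor cues (dark, sunglasses): 4.0 deg
```

With reliable cues the estimate stays near the true angle; with unreliable cues it collapses toward direct gaze, matching the reported bias under uncertainty.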
There are several possible explanations for why humans have this bias, Prof. Clifford says. “Direct gaze can signal dominance or a threat, and if you perceive something as a threat, you would not want to miss it. So assuming that the other person is looking at you may simply be a safer strategy.”
“Also, direct gaze is often a social cue that the other person wants to communicate with us, so it’s a signal for an upcoming interaction.”
There is also evidence that babies have a preference for direct gaze, which suggests that this bias is innate, Prof. Clifford says. “It’s important that we find out whether it’s innate or learned – and how this might affect people with certain mental conditions.
“Research has shown, for example, that people who have autism are less able to tell whether someone is looking at them. People with social anxiety, on the other hand, have a higher tendency to think that they are under the stare of others.
“So if it is a learned behaviour, we could help them practice this task – one possibility is letting them observe a lot of faces with different eyes and head directions, and giving them feedback on whether their observations are accurate.”
Do the brains of different people listening to the same piece of music actually respond in the same way? An imaging study by Stanford University School of Medicine scientists says the answer is yes, which may in part explain why music plays such a big role in our social existence.

(Image: Anthony Ellis)
The investigators used functional magnetic resonance imaging to identify a distributed network of several brain structures whose activity levels waxed and waned in a strikingly similar pattern among study participants as they listened to classical music they’d never heard before. The results will be published online April 11 in the European Journal of Neuroscience.
"We spend a lot of time listening to music — often in groups, and often in conjunction with synchronized movement and dance," said Vinod Menon, PhD, a professor of psychiatry and behavioral sciences and the study’s senior author. "Here, we’ve shown for the first time that despite our individual differences in musical experiences and preferences, classical music elicits a highly consistent pattern of activity across individuals in several brain structures including those involved in movement planning, memory and attention."
The notion that healthy subjects respond to complex sounds in the same way, Menon said, could provide novel insights into how individuals with language and speech disorders might listen to and track information differently from the rest of us.
The new study is one in a series of collaborations between Menon and co-author Daniel Levitin, PhD, a psychology professor at McGill University in Montreal, dating back to when Levitin was a visiting scholar at Stanford several years ago.
To make sure it was music, not language, that study participants’ brains would be processing, Menon’s group used music that had no lyrics. Also excluded was anything participants had heard before, in order to eliminate the confounding effects of having some participants who had heard the musical selection before while others were hearing it for the first time. Using obscure pieces of music also avoided tripping off memories such as where participants were the first time they heard the selection.
The researchers settled on complete classical symphonic musical pieces by 18th-century English composer William Boyce, known to musical cognoscenti as “the English Bach” because his late-baroque compositions in some respects resembled those of the famed German composer. Boyce’s works fit well into the canon of Western music but are little known to modern Americans.
Next, Menon’s group recruited 17 right-handed participants (nine men and eight women) between the ages of 19 and 27 with little or no musical training and no previous knowledge of Boyce’s works. (Conventional maps of brain anatomy are based on studies of right-handed people. Left-handed people’s brains tend to deviate from that map.)
While participants listened to Boyce’s music through headphones with their heads maintained in a fixed position inside an fMRI chamber, their brains were imaged for more than nine minutes. During this imaging session, participants also heard two types of “pseudo-musical” stimuli containing one or another attribute of music but lacking in others. In one case, all of the timing information in the music was obliterated, including the rhythm, with an effect akin to a harmonized hissing sound. The other pseudo-musical input involved maintaining the same rhythmic structure as in the Boyce piece but with each tone transformed by a mathematical algorithm to another tone so that the melodic and harmonic aspects were drastically altered.
The team identified a hierarchical network stretching from low-level auditory relay stations in the midbrain to high-level cortical brain structures related to working memory and attention, and beyond that to movement-planning areas in the cortex. These regions track structural elements of a musical stimulus over time periods lasting up to several seconds, with each region processing information according to its own time scale.
Activity levels in several different places in the brain responded similarly from one individual to the next to music, but less so or not at all to pseudo-music. While these brain structures have been implicated individually in musical processing, their identifications had been obtained by probing with artificial laboratory stimuli, not real music. Nor had their coordination with one another been previously observed.
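The "similar responses from one individual to the next" being measured here amounts to inter-subject correlation: how well one listener's regional activity time course matches the average of everyone else's. A minimal sketch follows; the actual study involved fMRI preprocessing and statistics well beyond this, and the shapes and noise level below are illustrative assumptions.

```python
import numpy as np

def intersubject_correlation(timeseries):
    """Mean leave-one-out inter-subject correlation.

    `timeseries` has shape (n_subjects, n_timepoints), holding one brain
    region's activity for each listener. Each subject's time course is
    correlated with the average of all the other subjects' time courses.
    """
    ts = np.asarray(timeseries, dtype=float)
    rs = []
    for i in range(ts.shape[0]):
        others = np.delete(ts, i, axis=0).mean(axis=0)
        rs.append(np.corrcoef(ts[i], others)[0, 1])
    return float(np.mean(rs))

# Toy demo: a shared "musical" signal plus subject-specific noise,
# for 17 listeners (matching the study's sample size).
rng = np.random.default_rng(0)
shared = np.sin(np.linspace(0, 12, 200))
subjects = shared + 0.3 * rng.standard_normal((17, 200))
print(round(intersubject_correlation(subjects), 2))  # high ISC, close to 1
```

Real music would drive a high ISC in music-responsive regions, while the pseudo-music controls would not, which is the contrast the study exploits.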
Notably, subcortical auditory structures in the midbrain and thalamus showed significantly greater synchronization in response to musical stimuli. These structures have been thought to passively relay auditory information to higher brain centers, Menon said. “But if they were just passive relay stations, their responses to both types of pseudo-music would have been just as closely synchronized between individuals as to real music.” The study demonstrated, for the first time, that those structures’ activity levels respond preferentially to music rather than to pseudo-music, suggesting that higher-level centers in the cortex direct these relay stations to closely heed sounds that are specifically musical in nature.
The fronto-parietal cortex, which anchors high-level cognitive functions including attention and working memory, also manifested intersubject synchronization — but only in response to music and only in the right hemisphere.
Interestingly, the structures involved included the right-brain counterparts of two important structures in the brain’s left hemisphere, Broca’s and Geschwind’s areas, known to be crucial for speech and language interpretation.
"These right-hemisphere brain areas track non-linguistic stimuli such as music in the same way that the left hemisphere tracks linguistic sequences," said Menon.
In any single individual listening to music, each cluster of music-responsive areas appeared to be tracking music on its own time scale. For example, midbrain auditory processing centers worked more or less in real time, while the right-brain analogs of the Broca’s and Geschwind’s areas appeared to chew on longer stretches of music. These structures may be necessary for holding musical phrases and passages in mind as part of making sense of a piece of music’s long-term structure.
"A novelty of our work is that we identified brain structures that track the temporal evolution of the music over extended periods of time, similar to our everyday experience of music listening," said postdoctoral scholar Daniel Abrams, PhD, the study’s first author.
The preferential activation of motor-planning centers in response to music, compared with pseudo-music, suggests that our brains respond naturally to musical stimulation by foreshadowing movements that typically accompany music listening: clapping, dancing, marching, singing or head-bobbing. The apparently similar activation patterns among normal individuals make it more likely our movements will be socially coordinated.
"Our method can be extended to a number of research domains that involve interpersonal communication. We are particularly interested in language and social communication in autism," Menon said. "Do children with autism listen to speech the same way as typically developing children? If not, how are they processing information differently? Which brain regions are out of sync?"
(Source: eurekalert.org)
Subconscious mental categories help brain sort through everyday experiences
Your brain knows it’s time to cook when the stove is on, and the food and pots are out. When you rush away to calm a crying child, though, cooking is over and it’s time to be a parent. Your brain processes and responds to these occurrences as distinct, unrelated events.
But it remains unclear exactly how the brain breaks such experiences into “events,” or the related groups that help us mentally organize the day’s many situations. A dominant concept of event-perception known as prediction error says that our brain draws a line between the end of one event and the start of another when things take an unexpected turn (such as a suddenly distraught child).
Challenging that idea, Princeton University researchers suggest that the brain may actually work from subconscious mental categories it creates based on how it considers people, objects and actions are related. Specifically, these details are sorted by temporal relationship, which means that the brain recognizes that they tend to — or tend not to — pop up near one another at specific times, the researchers report in the journal Nature Neuroscience.
So, a series of experiences that usually occur together (temporally related) form an event until a non-temporally related experience occurs and marks the start of a new event. In the example above, pots and food usually make an appearance during cooking; a crying child does not. Therein lies the partition between two events, so says the brain.
This dynamic, which the researchers call “shared temporal context,” works very much like the object categories our minds use to organize objects, explained lead author Anna Schapiro, a doctoral student in Princeton’s Department of Psychology.
"We’re providing an account of how you come to treat a sequence of experiences as a coherent, meaningful event," Schapiro said. "Events are like object categories. We associate robins and canaries because they share many attributes: They can fly, have feathers, and so on. These associations help us build a ‘bird’ category in our minds. Events are the same, except the attributes that help us form associations are temporal relationships."
Supporting this idea is brain activity the researchers captured showing that abstract symbols and patterns with no obvious similarity nonetheless excited overlapping groups of neurons when presented to study participants as a related group. From this, the researchers constructed a computer model that can predict and outline the neural pathways through which people process situations, and can reveal if those situations are considered part of the same event.
The parallels drawn between event details are based on personal experience, Schapiro said. People need to have an existing understanding of the various factors that, when combined, correlate with a single experience.
"Everyone agrees that ‘having a meeting’ or ‘chopping vegetables’ is a coherent chunk of temporal structure, but it’s actually not so obvious why that is if you’ve never had a meeting or chopped vegetables before," Schapiro said.
"You have to have experience with the shared temporal structure of the components of the events in order for the event to hold together in your mind," she said. "And the way the brain implements this is to learn to use overlapping neural populations to represent components of the same event."
During a series of experiments, the researchers presented human participants with sequences of abstract symbols and patterns. Without the participants’ knowledge, the symbols were grouped into three “communities” of five symbols each, with symbols in the same community tending to appear near one another in the sequence.
After watching these sequences for roughly half an hour, participants were asked to segment the sequences into events in a way that felt natural to them. They tended to break the sequences into events that coincided with the communities the researchers had prearranged, which shows that the brain quickly learns the temporal relationships between the symbols, Schapiro said.
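The sequence structure described above can be sketched as a random walk that mostly stays within one community of symbols and occasionally hops to a neighbor. This is an illustrative reconstruction: the study's actual graph, transition rules, and switch probability may differ from the assumptions below.

```python
import random

def make_communities(n_communities=3, size=5):
    """Return the communities as lists of symbol ids: [[0..4], [5..9], [10..14]]."""
    return [list(range(c * size, (c + 1) * size)) for c in range(n_communities)]

def walk(communities, length=100, p_stay=0.9, seed=1):
    """Emit a symbol sequence by mostly sampling within the current community
    and occasionally switching to an adjacent one, so symbols from the same
    community cluster together in time."""
    rng = random.Random(seed)
    c = 0
    seq = []
    for _ in range(length):
        if rng.random() > p_stay:               # occasional community switch
            c = (c + rng.choice([1, -1])) % len(communities)
        seq.append(rng.choice(communities[c]))  # emit a symbol from community c
    return seq

communities = make_communities()
seq = walk(communities)
# Adjacent symbols usually share a community (ids 0-4, 5-9, 10-14):
same = sum(s // 5 == t // 5 for s, t in zip(seq, seq[1:])) / (len(seq) - 1)
print(same)  # well above the 1/3 expected by chance
```

A viewer segmenting this stream "where it feels natural" will tend to place boundaries at the community switches, which is exactly what the participants did.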
The researchers then used functional magnetic resonance imaging to observe brain activity as participants viewed the symbol sequences. Images in the same community produced similar activity in neuron groups at the border of the brain’s frontal and temporal lobes, a region involved in processing meaning.
The researchers interpreted this activity as the brain associating the images with one another, and therefore as one event. At the same time, different neural groups activated when a symbol from a different community appeared, which was interpreted as a new event.
The researchers fashioned these data into a computational neural-network model that revealed the neural connection between what is being experienced and what has been learned. When a simulated stimulus is entered, the model can predict the next burst of neural activity throughout the network, from first observation to processing.
"The model allows us to articulate an explicit hypothesis about what kind of learning may be going on in the brain," Schapiro said. "It’s one thing to show a neural response and say that the brain must have changed to arrive at that state. To have a specific idea of how that change may have occurred could allow a deeper understanding of the mechanisms involved."
Michael Frank, a Brown University associate professor of cognitive, linguistic and psychological sciences, said that the Princeton researchers uniquely apply existing concepts of “similarity structure” used in such fields as semantics and artificial intelligence to provide evidence for their account of event perception. These concepts pertain to the ability to identify within large groups of data those subsets that share specific commonalities, said Frank, who is familiar with the research but had no role in it.
"The work capitalizes on well-grounded computational models of similarity structure and applies it to understanding how events and their boundaries are detected and represented," Frank said. "The authors noticed that the ability to represent items within an event as similar to each other — and thus different than those in ensuing events — might rely on similar machinery as that applied to detect clustering in community structures."
The model “naturally” lays out the process of shared temporal context in a way that is validated by work in other fields, yet distinct in relation to event perception, Frank said.
"The same types of models have been applied to understanding language — for example, how the meaning of words in a sentence can be contextualized by earlier words or concepts," Frank said. "Thus the model and experiments identify a common and previously unappreciated mechanism that can be applied to both language and event parsing, which are otherwise seemingly unrelated domains."
The age at which a child with autism is diagnosed is related to the particular suite of behavioral symptoms he or she exhibits, new research from the University of Wisconsin-Madison shows.
Certain diagnostic features, including poor nonverbal communication and repetitive behaviors, were associated with earlier identification of an autism spectrum disorder, according to a study in the April issue of the Journal of the American Academy of Child and Adolescent Psychiatry. Displaying more behavioral features was also associated with earlier diagnosis.
"Early diagnosis is one of the major public health goals related to autism," says lead study author Matthew Maenner, a researcher at the UW-Madison Waisman Center. "The earlier you can identify that a child might be having problems, the sooner they can receive support to help them succeed and reach their potential."
But there is a large gap between current research and what is actually happening in schools and communities, Maenner adds. Although research suggests autism can be reliably diagnosed by age 2, the new analysis shows that fewer than half of children with autism are identified in their communities by age 5.
One challenge is that autism spectrum disorders (ASD) are extremely diverse. According to the criteria outlined in the Diagnostic and Statistical Manual of Mental Disorders Fourth Edition - Text Revision (DSM-IV-TR), the standard handbook used for classification of psychiatric disorders, there are more than 600 different symptom combinations that meet the minimum criteria for diagnosing autistic disorder, one subtype of ASD.
Previous research on age at diagnosis has focused on external factors such as gender, socioeconomic status, and intellectual disability. Maenner and his colleagues instead looked at patterns of the 12 behavioral features used to diagnose autism according to the DSM-IV-TR.
He and Maureen Durkin, a UW-Madison professor of population health and pediatrics and Waisman Center investigator, studied records of 2,757 8-year-olds from 11 surveillance sites in the nationwide Autism and Developmental Disabilities Monitoring Network, run by the Centers for Disease Control and Prevention (CDC). They found significant associations between the presence of certain behavioral features and age at diagnosis.
"When it comes to the timing of autism identification, the symptoms actually matter quite a bit," Maenner says.
In the study population, the median age at diagnosis (the age by which half the children were diagnosed) was 8.2 years for children with only seven of the listed behavioral features but dropped to just 3.8 years for children with all 12 of the symptoms.
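The parenthetical definition of the median can be made concrete with a tiny computation. The ages below are invented purely for illustration; they are not data from the study:

```python
from statistics import median

# Hypothetical diagnosis ages (in years) for seven children -- invented
# numbers, used only to illustrate what "median age at diagnosis" means.
ages = [2.5, 3.1, 3.8, 4.4, 6.0, 7.2, 9.0]

# The median is the age by which half the children had been diagnosed.
print(median(ages))  # 4.4
```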
The specific symptoms present also emerged as an important factor. Children with impairments in nonverbal communication, imaginative play, repetitive motor behaviors, and inflexibility in routines were more likely to be diagnosed at a younger age, while those with deficits in conversational ability, idiosyncratic speech, and relating to peers were more likely to be diagnosed at a later age.
These patterns make a lot of sense, Maenner says, since they involve behaviors that may arise at different developmental times. The findings suggest that children who show fewer behavioral features or whose autism is characterized by symptoms typically identified at later ages may face more barriers to early diagnosis.
But they also indicate that more screening may not always lead to early diagnoses for everyone.
"Increasing the intensity of screening for autism might lead to identifying more children earlier, but it could also catch a lot of people at later ages who might not have otherwise been identified as having autism," Maenner says.
(Source: news.wisc.edu)
Most people are so attuned to the nuances of social interaction that they can detect clues to mental illness while playing a strategy game with someone they have never met.

That was the finding of a team of scientists led by Read Montague, director of the Human Neuroimaging Laboratory at the Virginia Tech Carilion Research Institute. The researchers discovered that healthy people and those with borderline personality disorder displayed different patterns of behavior while playing an online strategy game, so much so that when healthy players played people with borderline personality disorder, they gave up on trying to predict what their partners would do next.
For their large neuroimaging study, the scientists used a multiround social interaction game, the investor-trustee game, to study the level of strategic thinking in 195 pairs of subjects. In each pair, one player played the investor and the other the trustee. The investor chose how much money to send the trustee, and the trustee in turn decided how much to return to the investor. Profit required the cooperation of both players.
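As a rough sketch of the payoff structure described above: in the standard laboratory version of this game, the amount the investor sends is typically multiplied (commonly tripled) before it reaches the trustee. The endowment size and multiplier below are assumptions for illustration, not figures reported by the study:

```python
# Minimal sketch of one round of the investor-trustee game.
# Assumptions (not specified in the article): the investor starts with an
# endowment of 20 units, and any amount sent is tripled in transit, as in
# the standard version of the game.

def play_round(endowment, invested, returned_fraction):
    """Return (investor_profit, trustee_profit) for one round."""
    assert 0 <= invested <= endowment
    tripled = 3 * invested                   # amount the trustee receives
    returned = returned_fraction * tripled   # trustee's repayment
    investor_profit = endowment - invested + returned
    trustee_profit = tripled - returned
    return investor_profit, trustee_profit

# Full cooperation: invest everything, split the tripled pot evenly
# -> both players end up with 30, beating the original endowment of 20.
coop = play_round(20, 20, 0.5)    # (30.0, 30.0)

# Broken cooperation: the trustee keeps everything
# -> the investor is left with nothing.
defect = play_round(20, 20, 0.0)  # (0.0, 60.0)
```

The sketch makes the article's point concrete: the investor only profits if the trustee reciprocates, so each player must model the other's likely behavior.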
“This classic tit-for-tat game allows us to probe people’s responses to the social gestures of others,” said Montague, who also directs the Computational Psychiatry Unit, an academic center that uses computational models to understand mental disease. “It further allows us to see how people form models of one another. These insights are important for understanding a range of mental illnesses, as the ability to infer other people’s intentions is an essential component of healthy cognition.”
The scientists classified the investors according to varying levels of strategic depth of thought. The healthy subjects fell into three categories: about half simply responded to the amount the other player sent; about one-quarter built a model of their partner’s behavior; and the remaining quarter considered not just their model of their partner, but also their partner’s models of them.
Not surprisingly, the depth-of-thought style of play correlated with success, with the players who looked deeper into interactions making considerably more money than those who played at a shallow level.
When healthy subjects played people with borderline personality disorder, though, they were far less likely to exhibit depth of thought.
“People with borderline personality disorder are characterized by their unstable relationships, and when they play this game, they tend to break cooperation,” said Montague. “The healthy subjects picked up on the erratic behavior, likely without even realizing it, and far fewer played strategically.”
Notably, the functional magnetic resonance imaging of the subjects’ brains revealed that each category of player showed distinct neural correlates of learning signals associated with differing depths of thought. The scientists used hyperscanning, a technique Montague invented that enables subjects in different brain scanners to interact in real time, regardless of geography. Hyperscanning allows scientists to eavesdrop on brain activity during social exchanges in scanners, whether across the hallway or across the world.
“We’re always modeling other people, and our brains have a substantial amount of neural tissue devoted to pondering our interactions with other people,” Montague said. “This study is a start to turning neural signals into numbers – not just theory-of-mind arguments, but actual numbers. And when we can do that across thousands of people, we should start to gain insights into psychopathologies – what circuits are involved, what brain regions are engaged, and how injuries, congenital disorders, and genetic defects might play into psychiatric illness.”
Montague believes the study represents a significant contribution to the field of computational psychiatry, which seeks to bring computational clout to efforts to understand mental dysfunction. “Traditional psychiatric categories are useful yet incomplete,” said Montague, who delivered a TEDGlobal talk on the growing field of computational psychiatry last year. “Computational psychiatry enables us to redefine with a new lexicon – a mathematical one – the standard ways we think about mental illness.”
Computationally based insights may one day help psychiatry achieve better precision in diagnosis and treatment, Montague said. But until scientists have the right instruments, they cannot even begin to make those connections.
“The exquisite sensitivity that most people have to social gestures gives us a valuable opening,” Montague said. “We’re hoping to invent a tool – almost a human inkblot test – for identifying and characterizing mental disorders in which social interactions go awry.”
(Source: vtnews.vt.edu)
Reframing Stress: Stage Fright Can Be Your Friend
Fear of public speaking tops death and spiders as the nation’s number one phobia. But new research shows that learning to rethink the way we view our shaky hands, pounding heart, and sweaty palms can help people perform better both mentally and physically.
Before a stressful speaking task, simply encouraging people to reframe these signs of stress as natural and helpful proved a surprisingly effective way of handling stage fright, according to the study, published online April 8 in Clinical Psychological Science.
"The problem is that we think all stress is bad," explains Jeremy Jamieson, the lead author on the study and an assistant professor of psychology at the University of Rochester. "We see headlines about ‘Killer Stress’ and talk about being ‘stressed out.’" Before speaking in public, people often interpret stress sensations, like butterflies in the stomach, as a warning that something bad is about to happen, he says.
"But those feelings just mean that our body is preparing to address a demanding situation," explains Jamieson. "The body is marshaling resources, pumping more blood to our major muscle groups and delivering more oxygen to our brains." Our body’s reaction to social stress is the same fight-or-flight response we produce when confronting physical danger. These physiological responses help us perform, whether we’re facing a bear in the forest or a critical audience.
For many people, especially those suffering from social anxiety disorder, the natural uneasiness experienced before giving a speech can quickly tip over into panic. “If we think we can’t cope with stress, we will experience threat. When threatened, the body enacts changes to concentrate blood in the core and restrict flow to the arms, legs, and brain,” he explains. So, “cold feet” is a real physiological response to threat, not just a colorful expression.
"Lots of current advice for anxious people focuses on learning to ‘relax,’—you know, deep, even breathing and similar tips," says Jamieson. Such calming techniques, write the authors, may be helpful in situations that do not require peak performance. But when gearing up for a high-stakes exam, a job interview, or, yes, a speaking engagement, reframing how we think about stress may be a better strategy.
How, then, can people reap the benefits of being stressed without being overwhelmed by dread? To answer that question, Jamieson and co-authors Matthew Nock of Harvard University and Wendy Berry Mendes of the University of California, San Francisco, turned to the Trier Social Stress Test. Developed in 1993 by Clemens Kirschbaum and colleagues, this experiment relies on fear of public speaking and has become one of the most reliable laboratory methods for eliciting threat responses.
In the study, 69 adults were asked to give a five-minute talk about their strengths and weaknesses with only three minutes to prepare. Roughly half of the participants had a history of social anxiety, and all participants were randomly assigned to two groups. The first group was presented with information about the advantages of the body’s stress response and encouraged to “reinterpret your bodily signals during the upcoming public speaking task as beneficial.” That group also was asked to read summaries of three psychology studies that showed the benefits of stress. The second group received no information about reframing stress.
Participants delivered their speech to two judges. The judges deliberately provided negative nonverbal feedback throughout the entire five-minute presentations, shaking their heads in disapproval, tapping on their clipboards, and staring stone-faced ahead. If study subjects ran out of things to say, the judges insisted that they continue speaking for the full five minutes. Following the speech, participants were asked to count backwards for five minutes in steps of seven beginning with the number 996. The evaluators again provided negative feedback throughout and insisted that participants start over if they made any mistakes.
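The mental-arithmetic portion of the task is a classic "serial sevens" countdown, and its sequence is easy to generate:

```python
# Counting backward from 996 in steps of seven, as in the study's task.
countdown = list(range(996, 0, -7))

print(countdown[:5])  # [996, 989, 982, 975, 968]
print(countdown[-1])  # 2 -- the countdown bottoms out at 2, since 996 = 7*142 + 2
```

Trivial on paper, but doing it aloud for five minutes in front of scowling judges is exactly what makes the test so stressful.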
Confronted with scowling judges, participants who received no stress preparation experienced a threat response, as captured by cardiovascular measures. But the group that was prepped about the benefits of stress weathered the trial better. That group reported feeling that they had more resources to cope with the public speaking task and, perhaps more tellingly, their physiological responses confirmed those perceptions. The prepped group pumped more blood through the body per minute compared to the group that did not receive instruction.
Surprisingly, this study also found that individuals who suffer from social anxiety disorder actually experienced no greater increase in physiological arousal while under scrutiny than their non-anxious counterparts, despite reporting more intense feelings of apprehension. This disconnect, argue the authors, supports the theory that our experience of acute or short-term stress is shaped by how we interpret physical cues. “We construct our own emotions,” says Jamieson.