Posts tagged psychology
Scientists have long suspected that corvids – the family of birds including ravens, crows and magpies – are highly intelligent. Now, Tübingen neurobiologists Lena Veit and Professor Andreas Nieder have demonstrated how the brains of crows produce intelligent behavior when the birds have to make strategic decisions. Their results are published in the latest edition of Nature Communications.
Crows are no bird-brains. Behavioral biologists have even called them “feathered primates” because the birds make and use tools, are able to remember large numbers of feeding sites, and plan their social behavior according to what other members of their group do. This high level of intelligence might seem surprising because birds’ brains are constructed in a fundamentally different way from those of mammals, including primates – which are usually used to investigate these behaviors.
The Tübingen researchers are the first to investigate the brain physiology behind crows’ intelligent behavior. They trained crows to carry out memory tests on a computer. The crows were shown an image and had to remember it. Shortly afterwards, they had to use their beaks to select one of two test images on a touchscreen, according to a behavioral rule that switched. One of the test images was identical to the first image, the other different. Sometimes the rule of the game was to select the same image, and sometimes it was to select the different one. The crows were able to carry out both tasks and to switch between them as appropriate. That demonstrates a high level of concentration and mental flexibility which few animal species can manage – and which is an effort even for humans.
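The contingencies of this task can be sketched in a few lines of purely illustrative Python; the function name and rule labels below are ours, not the researchers’:

```python
# Illustrative sketch (not the authors' code) of a delayed match-to-sample
# trial under a rule that switches between "choose the same image" and
# "choose the different image".

def correct_choice(sample, options, rule):
    """Return the image the crow should pick under the current rule.

    sample  -- the image shown first
    options -- the two test images, exactly one identical to the sample
    rule    -- "match" (pick the same image) or "non-match" (pick the other)
    """
    same = next(img for img in options if img == sample)
    different = next(img for img in options if img != sample)
    return same if rule == "match" else different

# Example trials: sample "A", test images "A" and "B".
print(correct_choice("A", ["A", "B"], "match"))      # the identical image
print(correct_choice("A", ["B", "A"], "non-match"))  # the different image
```

The point of the sketch is that the correct answer depends jointly on the remembered sample and on which rule is currently in force – exactly the flexibility the study probed.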
The crows were quickly able to carry out these tasks even when given new sets of images. The researchers observed neuronal activity in the nidopallium caudolaterale, a brain region associated with the highest levels of cognition in birds. One group of nerve cells responded exclusively when the crows had to choose the same image – while another group of cells always responded when they were operating on the “different image” rule. By observing this cell activity, the researchers were often able to predict which rule the crow was following even before it made its choice.
The study published in Nature Communications provides valuable insights into the parallel evolution of intelligent behavior. “Many functions are realized differently in birds because a long evolutionary history separates us from these direct descendants of the dinosaurs,” says Lena Veit. “This means that bird brains can show us an alternative solution for how intelligent behavior is produced with a different anatomy.” Crows and primates have different brains, but the cells regulating decision-making are very similar. They represent a general principle which has re-emerged throughout the history of evolution. “Just as we can draw valid conclusions on aerodynamics from a comparison of the very differently constructed wings of birds and bats, here we are able to draw conclusions about how the brain works by investigating the functional similarities and differences of the relevant brain areas in avian and mammalian brains,” says Professor Andreas Nieder.
In right-handed people it sits in the left hemisphere; in left-handed people it is (usually) in the right: the location of speech production has been known for quite some time. But it is not that simple, says psychologist Gesa Hartwigsen, Professor at Kiel University. In her current publication in the journal Proceedings of the National Academy of Sciences (PNAS), she investigates which areas of the brain really are in charge of speech, and how they interact. Her findings are intended to help patients who have speech production problems, or aphasia, following a stroke.
Comprehending & Speaking
Gesa Hartwigsen and her team started by analysing speech production. They had healthy right-handed volunteers listen to words and then repeat them. “These were pseudo-words such as ‘beudo’. In German, they have no associated meaning. Therefore, when hearing and repeating these words, no areas of the brain connected to the meaning of what had been heard were activated,” said Hartwigsen.
The psychologist applies a combination of non-invasive methods – fMRI (functional magnetic resonance imaging) and TMS (transcranial magnetic stimulation) – to deduce what happens in the brain during the test. “We thus showed that the left hemisphere, as expected, was activated during speech production, while the right hemisphere did not actively contribute to language function,” explains Hartwigsen. This is the regular functioning of a healthy brain. From these and other results, scientists had until now deduced that the right hemisphere did not contribute to speech production in the healthy system and was therefore suppressed.
Interfering & Measuring
With a second test, the Kiel University scientists simulated a dysfunction in the brain comparable to a stroke. A magnetic coil transmits a current pulse that interrupts the function of the area responsible for producing speech (Broca’s Area) in the left hemisphere. This completely harmless method influences the speech production of the volunteers for about 30 to 45 minutes. “During this period, the ability to listen and repeat was tested again. While we observed a suppressed activity in the left hemisphere during repeating, with some test persons taking longer to repeat the pseudo words, we also found unexpected activities in the right hemisphere”, reports Hartwigsen.
The right hemisphere showed increased activity during pseudo-word repetition. The more the activity in the right Broca’s area increased, the faster the volunteers were able to complete their speech tests. The right hemisphere also increased its facilitatory influence on the left hemisphere, a finding that had not been observed prior to the TMS-induced lesion. “This reaction lends further support to the notion that the right-hemisphere area reacts to the dysfunction of the left hemisphere and tries to compensate for the lesion.” Does the right hemisphere have a supporting influence, and does it play an active role in speech production? Until now, the common opinion was that it does not.
Result & Outlook
The findings of Gesa Hartwigsen and her team show an interaction of both hemispheres during speech repetition. When the left hemisphere is suppressed, for example by a stroke, the right hemisphere could actively facilitate speech production. “By stimulating the right hemisphere, it could be possible to support speech recovery”, speculates the scientist. Here, timing would be very important. “Right after a stroke, we could support the right hemisphere. But once the remaining areas of the left hemisphere are ready to do their work again, it might be more helpful if the right hemisphere were suppressed. During this phase, we could stimulate the left hemisphere instead. The correct timing can therefore be crucial for the recovery of speech after a stroke.”
In collaboration with the Department of Neurology at Kiel University, a stroke specialist from Leipzig and doctoral students of Medicine and Psychology, Gesa Hartwigsen has started a follow-up study building on the recent publication. “We would like to find out more about the collaboration of the hemispheres and the right timing in helping stroke patients to recover”, says Hartwigsen. Her field of research is fairly new within cognitive neuroscience. Nevertheless, she is confident that it will offer practical help in the form of concrete therapies within the next ten to fifteen years.
A Michigan State University researcher has discovered the first anatomical evidence that the brains of children with a nonverbal learning disability – long considered a “pseudo” diagnosis – may develop differently than the brains of other children.
The finding, published in Child Neuropsychology, could ultimately help educators and clinicians better distinguish between – and treat – children with a nonverbal learning disability, or NVLD, and those with Asperger’s, or high functioning autism, which is often confused with NVLD.
“Children with nonverbal learning disabilities and Asperger’s can look very similar, but they can have very different reasons for why they behave the way they do,” said Jodene Fine, assistant professor of school psychology in MSU’s College of Education.
Understanding the biological differences in children with learning and behavioral challenges could help lead to more appropriate intervention strategies.
Children with nonverbal learning disability tend to have normal language skills but below-average math skills and difficulty solving visual puzzles. Because many of these kids also show difficulty understanding social cues, some experts have argued that NVLD is related to high functioning autism – a link this latest study suggests may not hold.
Fine and Kayla Musielak, an MSU doctoral student in school psychology, studied about 150 children ages 8 to 18. Using MRI scans of the participants’ brains, the researchers found that the children diagnosed with NVLD had smaller spleniums than children with other learning disorders such as Asperger’s and ADHD, and children who had no learning disorders.
The splenium is part of the corpus callosum, a thick band of fibers in the brain that connects the left and right hemispheres and facilitates communication between the two sides. Interestingly, this posterior part of the corpus callosum serves the areas of the brain related to visual and spatial functioning.
In a second part of the study, the participants’ brain activity was analyzed after they were shown videos in an MRI that portrayed both positive and negative examples of social interaction. (A typical example of a positive event was a child opening a desired birthday present with friends; a negative event included a child being teased by other children.)
The researchers found that the brains of children with nonverbal learning disability responded differently to the social interactions than the brains of children with high functioning autism, or HFA, suggesting the neural pathways that underlie those behaviors may be different.
“So what we have is evidence of a structural difference in the brains of children with NVLD and HFA, as well as evidence of a functional difference in the way their brains behave when they are presented with stimuli,” Fine said.
While more research is needed to better understand how nonverbal learning disability fits into the family of learning disorders, Fine said her findings present “an interesting piece of the puzzle.”
“I would say at this point we still don’t have enough evidence to say NVLD is a distinct diagnosis, but I do think our research supports the idea that it might be,” she said.
People who can accurately remember details of their daily lives going back decades are as susceptible as everyone else to forming fake memories, UC Irvine psychologists and neurobiologists have found.
In a series of tests to determine how false information can manipulate memory formation, the researchers discovered that subjects with highly superior autobiographical memory logged scores similar to those of a control group of subjects with average memory.
“Finding susceptibility to false memories even in people with very strong memory could be important for dissemination to people who are not memory experts. For example, it could help communicate how widespread our basic susceptibility to memory distortions is,” said Lawrence Patihis, a graduate student in psychology & social behavior at UC Irvine. “This dissemination could help prevent false memories in the legal and clinical psychology fields, where contamination of memory has had particularly important consequences in the past.”
Patihis works in the research group of world-renowned psychologist Elizabeth Loftus, who pioneered the study of false memories and their implications.
Persons with highly superior autobiographical memory (HSAM, also known as hyperthymesia) – which was first identified in 2006 by scientists at UC Irvine’s Center for the Neurobiology of Learning & Memory – have the astounding ability to remember even trivial details from their distant past. This includes recalling daily activities of their life since mid-childhood with almost 100 percent accuracy.
Patihis, the lead researcher on the study, believes it is the first effort to test malleable reconstructive memory in HSAM individuals.
Working with neurobiology & behavior graduate student Aurora LePort, Patihis asked 20 people with superior memory and 38 people with average memory to do word association exercises, recall details of photographs depicting a crime, and discuss their recollections of video footage of the United Flight 93 crash on 9/11. (Such footage does not exist.) These tasks incorporated misinformation in an attempt to manipulate what the subjects thought they had remembered.
“While they really do have super-autobiographical memory, it can be as malleable as anybody else’s, depending on whether misinformation was introduced and how it was processed,” Patihis said. “It’s a fascinating paradox. In the absence of misinformation, they have what appears to be almost perfect, detailed autobiographical memory, but they are vulnerable to distortions, as anyone else is.”
He noted that there are still many mysteries about people with highly superior autobiographical memory that need further investigation. LePort, for instance, is studying forgetting curves (which involve how many autobiographical details people can remember from one day ago, one week ago, one month ago, etc., and how the number of details decreases over time) in both HSAM and control participants and will employ functional MRI to better understand the phenomenon.
“What I love about the study is how it communicates something that memory distortion researchers have suspected for some time: that perhaps no one is immune to memory distortion,” Patihis said. “It will probably make some nonexperts realize, finally, that if even memory prodigies are susceptible, then they probably are too. This teachable moment is almost as important as the scientific merit of the study. It could help educate people – including those who deal with memory evidence, such as clinical psychologists and legal professionals – about false memories.”
The study appears this week in the early online version of Proceedings of the National Academy of Sciences.
The iPad you use to check email, watch episodes of Mad Men and play Words with Friends may hold the key to enabling children with autism spectrum disorders to express themselves through speech. New research indicates that children with autism who are minimally verbal can learn to speak later than previously thought, and iPads are playing an increasing role in making that happen, according to Ann Kaiser, a researcher at Vanderbilt Peabody College of education and human development.
In a study funded by Autism Speaks, Kaiser found that using speech-generating devices to encourage children ages 5 to 8 to develop speaking skills resulted in the subjects developing considerably more spoken words compared to other interventions. All of the children in the study learned new spoken words and several learned to produce short sentences as they moved through the training.
“For some parents, it was the first time they’d been able to converse with their children,” said Kaiser, Susan W. Gray Professor of Education and Human Development. “With the onset of iPads, that kind of communication may become possible for greater numbers of children with autism and their families.”
Augmentative and alternative communication devices—which employ symbols, gestures, pictures and speech output—have been used for decades by people who have difficulty speaking. Now, with the availability of apps that emulate those devices, the iPad offers a more accessible, cheaper and more user-friendly way to help minimally verbal children with autism communicate. And the iPad is far less stigmatizing for young people with autism who rely on it for communicating with fellow students, teachers and friends.
The reason speech-generating devices like the iPad are effective in promoting language development is simple. “When we say a word it sounds a little different every time, and words blend together and take on slightly different acoustic characteristics in different contexts,” Kaiser explained. “Every time the iPad says a word, it sounds exactly the same, which is important for children with autism, who generally need things to be as consistent as possible.”
As many as a third of children with autism have mastery of only a few words by the time they are school age. Previously, researchers thought that if children with autism had not begun to speak by age 5 or 6, they were unlikely to acquire spoken language. But Kaiser is encouraged by study results and believes that her iPad studies may help change that notion.
Building on findings from this research, Kaiser has begun a new five-year study supported by the National Institutes of Health’s Autism Centers of Excellence with colleagues at UCLA, the University of Rochester, and Weill Cornell Medical College. She and a team of researchers and therapists at the four sites are using iPads in two contrasting interventions (direct-teaching and naturalistic-teaching) to evaluate the effectiveness of the two communication interventions for children who have autism and use minimal spoken language.
In the direct-teaching approach, children are taught prerequisite skills for communication (such as matching objects, motor imitation and verbal imitation) and basic communication skills (such as requesting objects) in a massed trial format. For example, an adult partner may present five to 10 consecutive opportunities for a child to use the iPad to request preferred objects. During these opportunities, the child is prompted to use the iPad to request and may receive physical assistance if he cannot use the iPad independently.
In the naturalistic-teaching approach, the adult models the use of the iPad during play and conversation. She also teaches turn-taking, use of gestures to communicate, play with objects and social attention to partners during the play. She provides a limited number of prompts to use the iPad to make choices, to comment or make new requests.
In both approaches, children touch the symbols on the screen, listen to the device repeat the words, and sometimes say the words themselves. They are encouraged to use both words and the iPad to communicate, and the adult therapist uses both modes of communication throughout the instructional sessions.
Results from the Autism Speaks study will be available in Spring 2014; the NIH study will continue through Spring 2017; and more information can be found at Kidtalk.org.
Consumption of caffeine even six hours before bedtime can have significant, disruptive effects on sleep, a new study shows. The study, from the American Academy of Sleep Medicine, was published in the Journal of Clinical Sleep Medicine.
“Sleep specialists have always suspected that caffeine can disrupt sleep long after it is consumed,” said American Academy of Sleep Medicine President M. Safwan Badr, MD. “This study provides objective evidence supporting the general recommendation that avoiding caffeine in the late afternoon and at night is beneficial for sleep.”
The researchers found that 400 mg of caffeine (about 2–3 cups of coffee) taken at bedtime, or three or six hours before bedtime, significantly impacts sleep. Objectively measured total sleep time was reduced by more than an hour even when the caffeine was consumed six hours before going to bed. Subjective reports, however, suggest that the study participants were unaware of this sleep disturbance.
“Drinking a big cup of coffee on the way home from work can lead to negative effects on sleep just as if someone were to consume caffeine closer to bedtime,” said Christopher Drake, PhD, investigator at the Henry Ford Sleep Disorders and Research Center and associate professor of psychiatry and behavioral neurosciences at Wayne State University.
“People tend to be less likely to detect the disruptive effects of caffeine on sleep when it is taken in the afternoon,” noted Drake, who is also on the board of directors of the Sleep Research Society.
The researchers recruited 12 healthy normal sleepers, as determined by a physical examination and clinical interview. Subjects were instructed to maintain their normal sleep schedule, but were given three pills a day for four days, to be taken at six, three and zero hours before scheduled bedtime. Two of the pills were placebos, and one contained 400 mg of caffeine. On one of the four days, all three of the participants’ pills were placebos. The researchers measured sleep disturbance subjectively using a standard sleep diary and objectively using an in-home sleep monitor.
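For illustration, the dosing design can be written out as a small sketch. This is our reconstruction of the schedule described above, not the authors’ materials:

```python
# Illustrative reconstruction of the within-subject dosing design:
# each day has pills at 6, 3 and 0 hours before bed; on three days
# exactly one pill contains 400 mg caffeine, and on one day all
# three pills are placebos.

CAFFEINE_MG = 400

def day_schedule(caffeine_slot):
    """Map hours-before-bed -> caffeine dose for one study day.

    caffeine_slot -- hours before bed of the caffeine pill (6, 3 or 0),
                     or None for the all-placebo day.
    """
    return {hours: (CAFFEINE_MG if hours == caffeine_slot else 0)
            for hours in (6, 3, 0)}

# The four study days, in some order (the real order was presumably
# randomized and double-blind).
conditions = [day_schedule(slot) for slot in (6, 3, 0, None)]
for day in conditions:
    print(day)
```

Because participants always took three identical-looking pills, they could not tell which day, or which time slot, carried the caffeine – which is what makes the subjective reports of undisturbed sleep interpretable.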
This is the first study to investigate the effects of a given dose of caffeine taken at different times before sleep. The findings suggest that, in order to allow healthy sleep, individuals should avoid caffeine after 5pm.
Many of us have steeled ourselves for those ‘needle in a haystack’ tasks of finding our vehicle in an airport car park, or scouring the supermarket shelves for a favourite brand.
A new scientific study has revealed that our understanding of how the human brain prepares to perform visual search tasks of varying difficulty may now need to be revised.
When people search for a specific object, they tend to hold in mind a visual representation of it, based on key attributes like shape, size or colour. Scientists call this ‘advanced specification’. For example, we might search for a friend at a busy railway station by scanning the platform for someone who is very tall or who is wearing a green coat, or a combination of these characteristics.
Researchers from the School of Psychology at the University of Lincoln, UK, set out to better explain how these abstract visual representations are formed. They used fMRI scanners to record neural activity when volunteers prepared to search for a target object: a coloured letter amid a screen of other coloured letters.
Their findings, published in the journal ‘Brain Research’, are the first to fully isolate the different areas of the human brain involved in this ‘prepare to search’ function. Surprisingly, they show that the advanced frontal areas of the brain, usually key to advanced cognitive tasks, appear to take a backseat. Instead it is the basic back areas of the brain and the sub-cortical areas that do the work.
Dr Patrick Bourke from the University of Lincoln’s School of Psychology, who led the study, said: “Up until now, when researchers have studied visual search tasks they have also found that frontal areas of the brain were active. This has been assumed to indicate a control system: an ‘executive’ that largely resides in the advanced front of the brain which sends signals to the simpler back of the brain, activating visual memories. Here, when we isolated the ‘prepare’ part of the task from the actual search and response phase we found that this activation in the front was no longer present.”
This finding has important implications for understanding the fundamental brain processes involved. It was previously thought that the intraparietal region of the brain, which is linked to visual attention, was the central component of the supposed ‘front-back’ control network, relaying useful information (such as a shape or colour bias) from frontal areas of the brain to the back, where simple visual representations of the object are held. If the frontal areas are not activated in the preparation phase, this cannot be the case.
The study also showed that the pattern of brain activation varied depending on the anticipated difficulty of the search task, even when the target object was the same. This indicates that rather than holding in mind a single representation of an object, a new target is constructed each time, depending on the nature of the task.
Dr Bourke added: “While consistent with previous brain imaging work on visual search, these results change the interpretations and assumptions that have been applied previously. Notably, they highlight a difference between studies of animals’ brains and those of humans. Studies with monkeys convincingly show the front-back control system and we thought we understood how this worked. At the same time our findings are consistent with a growing body of brain imaging work in humans that also shows no frontal brain activation when short term memories are held.”
A University at Buffalo education professor has sided with the environment in the timeless “nature vs. nurture” debate after his research found that a child’s ability to read depends mostly on where that child is born rather than on his or her individual qualities.
“Individual characteristics explain only 9 percent of the differences in children who can read versus those who cannot,” says Ming Ming Chiu, lead author of an international study that explains this connection and a professor in the Department of Learning and Instruction in UB’s Graduate School of Education.
“In contrast, country differences account for 61 percent and school differences account for 30 percent,” Chiu says.
Therefore, he concludes, the country in which a child is born largely determines whether he or she will have at least basic reading skills. It’s clearly a case where “nurture” — the environment and surroundings of the child — is more important than “nature” — the child’s inherited, individual qualities, according to Chiu.
More than 99 percent of fourth-graders in the Netherlands can read, but only 19 percent of fourth-graders in South Africa can read, Chiu notes.
“Although the richest countries typically have high literacy rates exceeding 97 percent,” he says, “some rich countries, such as Qatar and Kuwait, have low literacy rates — 33 percent and 28 percent, respectively.”
The study, “Ecological, Psychological and Cognitive Components of Reading Difficulties: Testing the Component Model of Reading in Fourth-graders Across 38 Countries,” analyzed reading test scores of 186,725 fourth-graders from 38 countries, including more than 4,000 children from the U.S. Chiu and co-authors Catherine McBride-Chang of the Chinese University of Hong Kong and Dan Lin of the Hong Kong Institute of Education published the study in the winter 2013 issue of the Journal of Learning Disabilities.
The educators used data from the Organization for Economic Cooperation and Development’s Program for International Student Assessment.
Besides showing that the country of origin was a better predictor of reading skills than individual traits, the study also showed that other attributes at the child, school and country levels were all related to reading.
First, girls were more likely than boys to have basic reading skills, Chiu says. Children with greater early-literacy skills, better attitudes about reading or greater self-confidence in their reading ability also were more likely to have strong basic reading skills.
“Children were more likely to have basic reading skills if they were from privileged families, as measured through socioeconomic status, number of books at home and parent attitudes about reading,” says Chiu. “Also, children attending schools with better school climate and more resources were more likely to have basic reading skills.
“Our U.S. culture values ‘can-do’ individualism, but we forget how much depends on being lucky enough to be born in the right place,” he says.
University of Arizona doctoral degree candidate Jay Sanguinetti has authored a new study, published online in the journal Psychological Science, that indicates that the brain processes and understands visual input that we may never consciously perceive.
The finding challenges currently accepted models about how the brain processes visual information.
A doctoral candidate in the UA’s Department of Psychology in the College of Science, Sanguinetti showed study participants a series of black silhouettes, some of which contained meaningful, real-world objects hidden in the white spaces on the outsides.
Sanguinetti worked with his adviser Mary Peterson, a professor of psychology and director of the UA’s Cognitive Science Program, and with John Allen, a UA Distinguished Professor of psychology, cognitive science and neuroscience, to monitor subjects’ brainwaves with an electroencephalogram, or EEG, while they viewed the objects.
"We were asking the question of whether the brain was processing the meaning of the objects that are on the outside of these silhouettes," Sanguinetti said. "The specific question was, ‘Does the brain process those hidden shapes to the level of meaning, even when the subject doesn’t consciously see them?’"
The answer, Sanguinetti’s data indicates, is yes.
Study participants’ brainwaves indicated that even if a person never consciously recognized the shapes on the outside of the image, their brains still processed those shapes to the level of understanding their meaning.
"There’s a brain signature for meaningful processing," Sanguinetti said. A peak in the averaged brainwaves called N400 indicates that the brain has recognized an object and associated it with a particular meaning.
"It happens about 400 milliseconds after the image is shown, less than a half a second," said Peterson. "As one looks at brainwaves, they’re undulating above a baseline axis and below that axis. The negative ones below the axis are called N and positive ones above the axis are called P, so N400 means it’s a negative waveform that happens approximately 400 milliseconds after the image is shown."
The presence of the N400 peak indicates that subjects’ brains recognize the meaning of the shapes on the outside of the figure.
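As a toy illustration of why averaging across many trials reveals a component like the N400 – random noise averages toward zero while the stimulus-locked deflection survives – consider the following simulation. It is ours, not part of the study:

```python
# Toy ERP simulation: a small negative deflection around 400 ms is
# buried in large trial-to-trial noise, but emerges in the average.
import random

N_TRIALS = 200
N_SAMPLES = 600          # 600 ms at 1 kHz; index 0 = stimulus onset

def simulated_epoch(rng):
    """One trial: a -2 uV 'N400' between 350-450 ms plus heavy noise."""
    epoch = []
    for t in range(N_SAMPLES):
        signal = -2.0 if 350 <= t <= 450 else 0.0
        noise = rng.uniform(-10.0, 10.0)   # noise dwarfs the signal
        epoch.append(signal + noise)
    return epoch

rng = random.Random(0)
epochs = [simulated_epoch(rng) for _ in range(N_TRIALS)]

# Average the trials sample-by-sample: noise cancels, the N400 remains.
average = [sum(vals) / N_TRIALS for vals in zip(*epochs)]

# The most negative point of the averaged waveform falls near 400 ms.
print(min(range(N_SAMPLES), key=lambda t: average[t]))
```

No single trial shows the component clearly; only the averaged waveform does, which is why ERP studies like this one average over many presentations before looking for the N400.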
"The participants in our experiments don’t see those shapes on the outside; nonetheless, the brain signature tells us that they have processed the meaning of those shapes," said Peterson. "But the brain rejects them as interpretations, and if it rejects the shapes from conscious perception, then you won’t have any awareness of them."
"We also have novel silhouettes as experimental controls," Sanguinetti said. "These are novel black shapes in the middle and nothing meaningful on the outside."
The N400 waveform does not appear on the EEG of subjects when they are seeing truly novel silhouettes, without images of any real-world objects, indicating that the brain does not recognize a meaningful object in the image.
"This is huge," Peterson said. "We have neural evidence that the brain is processing both the shape and the meaning of the hidden images in the silhouettes we showed to participants in our study."
The finding leads to the question of why the brain would process the meaning of a shape when a person is ultimately not going to perceive it, Sanguinetti said.
"The traditional opinion in vision research is that this would be wasteful in terms of resources," he explained. "If you’re not going to ultimately see the object on the outside why would the brain waste all these processing resources and process that image up to the level of meaning?"
"Many, many theorists assume that because it takes a lot of energy for brain processing, the brain is only going to spend time processing what you’re ultimately going to perceive," added Peterson. "But in fact the brain is deciding what you’re going to perceive, and it’s processing all of the information and then it’s determining what’s the best interpretation."
"This is a window into what the brain is doing all the time," Peterson said. "It’s always sifting through a variety of possibilities and finding the best interpretation for what’s out there. And the best interpretation may vary with the situation."
Our brains may have evolved to sift through the barrage of visual input reaching our eyes and identify those things that are most important for us to consciously perceive, such as a threat or resources like food, Peterson suggested.
In the future, Peterson and Sanguinetti plan to look for the specific regions in the brain where the processing of meaning occurs.
"We’re trying to look at exactly what brain regions are involved," said Peterson. "The EEG tells us this processing is happening and it tells us when it’s happening, but it doesn’t tell us where it’s occurring in the brain."
"We want to look inside the brain to understand where and how this meaning is processed," said Peterson.
Images were shown to Sanguinetti’s study participants for only 170 milliseconds, yet their brains were able to complete the complex processes necessary to interpret the meaning of the hidden objects.
"There are a lot of processes that happen in the brain to help us interpret all the complexity that hits our eyeballs," Sanguinetti said. "The brain is able to process and interpret this information very quickly."
Sanguinetti’s study indicates that in our everyday life, as we walk down the street, for example, our brains may recognize many meaningful objects in the visual scene, but ultimately we are aware of only a handful of those objects.
The brain is working to provide us with the best, most useful possible interpretation of the visual world, Sanguinetti said, an interpretation that does not necessarily include all the information in the visual input.
People who are in love are less able to focus and to perform tasks that require attention. This is the conclusion of researcher Henk van Steenbergen and colleagues from Leiden University and the University of Maryland. Their article has appeared in the journal Motivation and Emotion.
The more in love, the less focused you are
Forty-three participants who had been in a relationship for less than half a year performed a number of tasks in which they had to discriminate irrelevant from relevant information as quickly as possible. The more in love they were, the less able they were to ignore the irrelevant information. The intensity of love was thus related to how well someone is able to focus. There was no difference between men and women.
To intensify their feelings of love, the participants listened to music that elicited romantic feelings and thought of a romantic event. They also completed a questionnaire assessing the intensity of those feelings. Van Steenbergen's results differed from those of previous studies, which showed that the ability to ignore distracting information is required to maintain a long-term romantic relationship. Being able to control oneself (also called "cognitive control") and to resist temptations that could threaten the relationship is essential in long-term love.
Thinking of your beloved
In the study by Van Steenbergen, in contrast, the participants had become involved in a romantic relationship only a few months earlier. "When you have just become involved in a romantic relationship you’ll probably find it harder to focus on other things because you spend a large part of your cognitive resources on thinking of your beloved", Van Steenbergen says. "For long-lasting love in a long-term relationship, on the other hand, it seems crucial to have proper cognitive control." Over time, a balance between less and more cognitive control may be critical for a successful relationship.
Why is romantic love associated with cognitive control?
Van Steenbergen emphasizes that the link between romantic love and cognitive control is a new area of research. “The reason why romantic love is associated with cognitive control is still unknown. It could be that lovers use all their cognitive resources to think about their beloved, which leaves them no resources to perform a boring task. It could also be that the association goes in the opposite direction: people who have reduced cognitive control may experience more intense love feelings than people who have higher levels of cognitive control.” Future research will have to clarify this.