Posts tagged psychology

Psychology Prof. Richard Russell reveals a new sign of aging in perception research
The contrasting nature of facial features is one of the signals that people unconsciously use to decipher how old someone looks, says Psychology Prof. Richard Russell, who has been collaborating with researchers from CE.R.I.E.S. (Epidermal and Sensory Research and Investigation Center), a department of Chanel Research and Technology dedicated to skin-related issues and facial appearance.
“Unlike with wrinkles, none of us are consciously aware that we’re using this cue, even though it stares us in the face every day,” said Russell.
The discovery of this cue to facial age perception may partly explain why cosmetics are worn the way they are, and it lends more evidence to the idea that makeup use reflects our biological as well as our cultural heritage, according to Russell.
In one study, Russell and his team measured images of 289 faces ranging in age from 20 to 70 years old. They found that as the face ages, the color of the lips, eyes and eyebrows changes while the skin becomes darker. This results in less contrast between the features and the surrounding skin, so older faces have less facial contrast than younger faces.
The difference in redness between the lips and the surrounding skin decreases, as does the luminance difference between the eyebrow and the forehead, as the face ages. Although we are not consciously aware of this sign of aging, the mind uses it as a cue for perceiving how old someone is.
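The kind of measurement described here can be illustrated with a minimal sketch. The function name and the region values below are hypothetical illustrations, not the authors' actual method or data; one common way to quantify feature-versus-skin contrast is Michelson contrast on mean luminance:

```python
# Hypothetical sketch of a feature-vs-skin contrast measure,
# computed as Michelson contrast on mean luminance values.
# The numbers are made up for illustration, not study data.

def michelson_contrast(feature_mean: float, skin_mean: float) -> float:
    """Signed contrast in [-1, 1]; negative when the feature is
    darker than the surrounding skin (e.g. eyebrow vs. forehead)."""
    return (feature_mean - skin_mean) / (feature_mean + skin_mean)

# A dark eyebrow against a light forehead (younger face) ...
young = michelson_contrast(feature_mean=40.0, skin_mean=120.0)   # -0.5
# ... versus a lighter eyebrow against darker skin (older face).
old = michelson_contrast(feature_mean=70.0, skin_mean=110.0)     # about -0.22

assert abs(young) > abs(old)  # contrast magnitude shrinks with age
```

On this toy measure, the younger face's feature stands out from the skin about twice as strongly, matching the pattern the study reports.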
In another study involving more than a hundred subjects in Gettysburg and Paris, the scientists artificially increased these facial contrasts and found that the faces were perceived as younger. When they artificially decreased the facial contrasts, the faces were perceived as older.
The image shows two versions of the same face: the facial contrast has been increased in the left image and decreased in the right image. The face on the left appears younger than the one on the right.
Cosmetics are commonly used to increase aspects of facial contrast, such as the redness of lips. Scientists propose that this can partly explain why makeup is worn the way that it is – shades of lipstick that increase the redness of the lips are making the face appear younger, which is related to healthiness and beauty.
More on Russell’s study is available from PLOS ONE, an open-access publisher that makes the world’s scientific and medical literature a public resource.
Researchers in the UK have taken an important step towards understanding how the human brain ‘decodes’ letters on a page to read a word. The work, funded by the Economic and Social Research Council (ESRC), will help psychologists unravel the subtle thinking mechanisms involved in reading, and could provide solutions for helping people who find it difficult to read, for example in conditions such as dyslexia.
In order to read successfully, readers need not only to identify the letters in words, but also to accurately code the positions of those letters, so that they can distinguish words like CAT and ACT. At the same time, however, it’s clear that raeders can dael wtih wodrs in wihch not all teh leettrs aer in thier corerct psotiions.
"How the brain can make sense of some jumbled sequences of letters but not others is a key question that psychologists need to answer to understand the code that the brain uses when reading," says Professor Colin Davis of Royal Holloway, University of London, who led the research.
For many years researchers have tried to work out which sequences of letters in a word serve as important cues for the brain, using a standard psychological test in which jumbled words are flashed momentarily on a screen to see whether they help the brain to recognise the properly spelt word.
But this technique had limitations that made it impossible to probe more extreme rearrangements of letters. Professor Davis’s team used computer simulations to work out that a simple modification to the test would allow it to probe these more complex changes to words. This increases the test’s sensitivity significantly and makes it far more valuable for comparing different coding theories.
"For example, if we take the word VACATION and change it to AVACITNO, previously the test would not tell us if the brain recognises it as VACATION because other words such as AVOCADO or AVIATION might start popping into the person’s head,” says Professor Davis. "With our modification we can show that indeed the brain does relate AVACITNO to VACATION, and this starts to give us much more of an insight into the nature of the code that the brain is using – something that was not possible with the existing test."
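One way to see how a position-tolerant letter code could relate AVACITNO to VACATION is an "open bigram" scheme, in which a word is represented by its set of ordered letter pairs. This is a hedged illustration of one coding theory from the letter-position literature, not the specific model Professor Davis's team tested:

```python
from itertools import combinations

def open_bigrams(word: str) -> set:
    """Represent a word by its set of ordered letter pairs (i < j),
    as in 'open bigram' letter-position coding schemes."""
    return {a + b for a, b in combinations(word.upper(), 2)}

def similarity(w1: str, w2: str) -> float:
    """Jaccard overlap between the two words' open-bigram sets."""
    b1, b2 = open_bigrams(w1), open_bigrams(w2)
    return len(b1 & b2) / len(b1 | b2)

# A transposition (CAT -> ACT) preserves most ordered pairs,
# so the jumbled form still strongly resembles the base word.
print(similarity("CAT", "ACT"))             # 0.5
print(similarity("VACATION", "AVACITNO"))   # high overlap despite jumbling
```

Under this toy code, AVACITNO shares far more ordered letter pairs with VACATION than a merely similar-looking real word such as AVIATION does, which is the intuition behind the brain treating some jumbles as near-matches.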
The modified test should allow researchers not only to crack the code that the brain uses to make sense of strings of letters, but also to examine differences between individuals – how a ‘good’ reader decodes letter sequences compared with someone who finds reading difficult.
"These kinds of methods can be very sensitive to individual differences in reading ability and we are starting to get a better idea of some of the issues that underpin people’s difficulty in reading," says Professor Davis. Ultimately, this could lead to new approaches to helping people to overcome reading problems.
(Source: esrc.ac.uk)
Researchers Show that Suppressing the Brain’s “Filter” Can Improve Performance in Creative Tasks
The brain’s prefrontal cortex is thought to be the seat of cognitive control, working as a kind of filter that keeps irrelevant thoughts, perceptions and memories from interfering with a task at hand.
Now, researchers at the University of Pennsylvania have shown that inhibiting this filter can boost performance for tasks in which unfiltered, creative thoughts present an advantage.
The research was conducted by Sharon Thompson-Schill, the Christopher H. Browne Distinguished Professor of Psychology and director of the Center for Cognitive Neuroscience, and Evangelia Chrysikou, a member of her lab who is now an assistant professor at the University of Kansas. They collaborated with Roy Hamilton and H. Branch Coslett of the Department of Neurology at Penn’s Perelman School of Medicine and Abhishek Datta and Marom Bikson of the Department of Biomedical Engineering at the City College of New York.
Their work was published in the journal Cognitive Neuroscience.

Enhancing Cognition with Video Games: A Multiple Game Training Study
Background
Previous evidence points to a causal link between playing action video games and enhanced cognition and perception. However, the benefits of playing other types of video games are under-investigated. We therefore examined whether playing non-action games also improves cognition, comparing the transfer effects of an action game with those of non-action games that place different cognitive demands on the player.
Methodology/Principal Findings
We instructed 5 groups of non-gamer participants to play one game each on a mobile device (iPhone/iPod Touch) for one hour a day, five days a week, over four weeks (20 hours). Games included action, spatial memory, match-3, hidden-object, and an agent-based life simulation. Participants performed four behavioral tasks before and after video game training to assess transfer effects. Tasks included an attentional blink task; a spatial memory and visual search dual task; a visual filter memory task to assess multiple-object tracking and cognitive control; and a complex verbal span task. Action game playing eliminated attentional blink and improved cognitive control and multiple-object tracking. Match-3, spatial memory and hidden-object games improved visual search performance, while the latter two also improved spatial working memory. Complex verbal span improved after match-3 and action game training.
Conclusion/Significance
Cognitive improvements were not limited to action game training alone and different games enhanced different aspects of cognition. We conclude that training specific cognitive abilities frequently in a video game improves performance in tasks that share common underlying demands. Overall, these results suggest that many video game-related cognitive improvements may not be due to training of general broad cognitive systems such as executive attentional control, but instead due to frequent utilization of specific cognitive processes during game play. Thus, many video game training related improvements to cognition may be attributed to near-transfer effects.

Punishment can enhance performance
The stick can work just as well as the carrot in improving our performance, a team of academics at The University of Nottingham has found.
A study led by researchers from the University’s School of Psychology, published recently in the Journal of Neuroscience, has shown that punishment can act as a performance enhancer in a similar way to monetary reward.
Dr Marios Philiastides, who led the work, said: “This work reveals important new information about how the brain functions that could lead to new methods of diagnosing neural development disorders such as autism, ADHD and personality disorders, where decision-making processes have been shown to be compromised.”
The Nottingham study examined how the efficiency with which we make decisions based on ambiguous sensory information (visual or auditory, for example) is affected by the potential for, and severity of, anticipated punishment.
Imposing penalties
To investigate this, they asked participants in the study to perform a simple perceptual task — asking them to judge whether a blurred shape behind a rainy window is a person or something else.
They punished incorrect decisions by imposing monetary penalties. At the same time, they measured the participants’ brain activity in response to different amounts of monetary punishment. Brain activity was recorded, non-invasively, using an EEG machine, which detects and amplifies brain signals from the surface of the scalp through a set of small electrodes embedded in a cap, similar to a swimming cap, fitted on each participant’s head.
They found that participants’ performance increased systematically as the amount of punishment increased, suggesting that punishment acts as a performance enhancer in a similar way to monetary reward.
At the neural level, the academics identified multiple distinct brain activations induced by punishment, distributed across different areas of the brain. Crucially, the timing of these activations confirmed that punishment does not influence the way in which the brain processes the sensory evidence, but does affect the decision-making machinery that decodes that sensory information at a later stage of the decision-making process.
Incentive-based motivation
Finally, they showed that those participants who showed the greatest improvements in performance also showed the biggest changes in brain activity. This is a key finding as it provides a potential route to study differences between individuals and their personality traits in order to characterise why some may respond better to reward and punishment than others.
A more thorough understanding of the influence of punishment on decision-making and how we make choices could lead to useful information on how to use incentive-based motivation to encourage certain behaviour.
The paper, Temporal Characteristics of the Influence of Punishment on Perceptual Decision Making in the Human Brain, is available online via the Journal of Neuroscience.
'I don't want to pick!' Preschoolers know when they aren't sure
Children as young as 3 years old know when they are not sure about a decision, and can use that uncertainty to guide decision making, according to new research from the Center for Mind and Brain at the University of California, Davis.
"There is behavioral evidence that they can do this, but the literature has assumed that until late preschool, children cannot introspect and make a decision based on that introspection," said Simona Ghetti, professor of psychology at UC Davis and co-author of the study with graduate student Kristen Lyons, now an assistant professor at Metropolitan State University of Denver.
The findings are published online by the journal Child Development and will appear in print in an upcoming issue.
Ghetti studies how reasoning, memory and cognition emerge during childhood. It is known that children get better at introspection through elementary school, she said. Lyons and Ghetti wanted to see whether this ability to ponder exists in younger children.
Previous studies have used open-ended questions to find out how children feel about a decision, but that approach is limited by younger children’s ability to report on the content of their mental activity. Instead, Lyons and Ghetti showed 3-, 4- and 5-year-olds ambiguous drawings of objects and asked them to point to a particular object, such as a cup, a car or the sun. Then they asked the children to point to one of two pictures of faces, one looking confident and one doubtful, to rate whether they were confident or not confident about a decision.
In one of the tests, children had to choose a drawing even if unsure. In a second set of tests they had a “don’t want to pick” option.
Across the age range, children were more likely to say they were not confident about their decision when they had in fact made a wrong choice. When they had a “don’t know” option, they were most likely to take it if they had been unsure of their choice in the “either/or” test.
By opting not to choose when uncertain, the children could improve their overall accuracy on the test.
"Children as young as 3 years of age are aware of when they are making a mistake, they experience uncertainty that they can introspect on, and then they can use that introspection to drive their decision making," Ghetti said.
The researchers hope to extend their studies to younger children to examine the emergence of introspection and reasoning.
(Image: Jupiter Images)
Cognitive impairments are disabling for individuals with schizophrenia, and no satisfactory treatments currently exist. These impairments affect a wide range of cognition, including memory, attention, verbal and motor skills, and IQ. They appear in the earliest stages of the disease and disrupt or even prevent normal day-to-day functioning.
Scientists are exploring a variety of strategies to reduce these impairments including “exercising the brain” with specially designed computer games and medications that might improve the function of brain circuits.
In this issue of Biological Psychiatry, Dr. Mera Barr and her colleagues at University of Toronto provide new evidence that stimulating the brain using repetitive transcranial magnetic stimulation (rTMS) may be an effective strategy to improve cognitive function.
“In a randomized controlled trial, we evaluated whether rTMS can improve working memory in schizophrenia,” said Barr and senior author Dr. Zafiris Daskalakis. “Our results showed that rTMS resulted in a significant improvement in working memory performance relative to baseline.”
Transcranial magnetic stimulation is a non-invasive procedure that uses magnetic fields to stimulate nerve cells. It does not require sedation or anesthesia and so patients remain awake, reclined in a chair, while treatment is administered through coils placed near the forehead.
“TMS can have lasting effects on brain circuit function because this approach not only changes the activity of the circuit that is being stimulated, but it also may change the plasticity of that circuit, i.e., the capacity of the circuit to remodel itself functionally and structurally to support cognitive functions,” explained Dr. John Krystal, Editor of Biological Psychiatry.
Previous work has shown that rTMS improves working memory in healthy individuals, and a recent open-label trial showed promising findings for verbal memory in schizophrenia patients. These findings motivated the present study, which tested whether high-frequency rTMS could improve memory in individuals with schizophrenia.
They recruited medicated schizophrenia patients who completed a working memory task before and after 4 weeks of treatment. Importantly, this was a double-blind study, where neither the patients nor the researchers knew who was receiving real rTMS or a sham treatment that was designed to entirely mimic the procedure without actually delivering brain stimulation.
rTMS not only improved working memory in patients after 4 weeks, but the improvement was to a level comparable to healthy subjects. These findings suggest that rTMS may be a novel, efficacious, and safe treatment for working memory deficits in schizophrenia.
In 2008, rTMS was FDA-approved to treat depression for individuals who don’t respond to pharmacotherapy. The hope is that additional research will replicate these findings and finally provide an approved treatment for cognitive impairments in schizophrenia.
The authors concluded: “Working memory is an important predictor of functional outcome. Developing novel treatments aimed at improving these deficits may ultimately translate into meaningful changes in the lives of patients suffering from this debilitating disorder.”
(Source: elsevier.com)
Sleep loss precedes Alzheimer’s symptoms
Sleep is disrupted in people who likely have early Alzheimer’s disease but do not yet have the memory loss or other cognitive problems characteristic of full-blown disease, researchers at Washington University School of Medicine in St. Louis report March 11 in JAMA Neurology.
The finding confirms earlier observations by some of the same researchers. Those studies showed a link in mice between sleep loss and brain plaques, a hallmark of Alzheimer’s disease. Early evidence tentatively suggests the connection may work in both directions: Alzheimer’s plaques disrupt sleep, and lack of sleep promotes Alzheimer’s plaques.
“This link may provide us with an easily detectable sign of Alzheimer’s pathology,” says senior author David M. Holtzman, MD, the Andrew B. and Gretchen P. Jones Professor and head of Washington University’s Department of Neurology. “As we start to treat people who have markers of early Alzheimer’s, changes in sleep in response to treatments may serve as an indicator of whether the new treatments are succeeding.”
Sleep problems are common in people who have symptomatic Alzheimer’s disease, but scientists recently have begun to suspect that they also may be an indicator of early disease. The new paper is among the first to connect early Alzheimer’s disease and sleep disruption in humans.
(Image: iStockphoto)
If that headline makes you feel bad, an expert says it’s because we’re genetically wired to take offense.
Insults are painful because we have certain social needs. We seek to be among other people, and once among them, we seek to form relationships with them and to improve our position on the social hierarchy. They are also painful because we have a need to project our self-image and to have other people not only accept this image, but support it. If we didn’t have these needs, being insulted wouldn’t feel bad. Furthermore, although different people experience different amounts of pain on being insulted, almost everyone will experience some pain. Indeed, we would search long and hard to find a person who is never pained by insults—or who himself never feels the need to insult others.
These observations raise a question: why do we have the social needs we do? According to evolutionary psychologists, our social needs—and, more generally, our psychological propensities—are the result of nature rather than nurture. More precisely, they are a consequence of our evolutionary past. The views of evolutionary psychologists are of interest in this, a study of insults, for the simple reason that they allow us to gain a deeper understanding of why it is painful when others insult us and why we go out of our way to cause others pain by insulting them.
We humans find some things to be pleasant and other things to be unpleasant. We find it pleasant, for example, to eat sweet, fattening foods or to have sex, and we find it unpleasant to be thirsty, swallow bitter substances, or get burned. Notice that we don’t choose for these things to be pleasant or unpleasant. It is true that we can, if we are strong-willed, voluntarily do things that are unpleasant, such as put our finger in a candle flame. We can also refuse to do things that are pleasant: we might, for example, forgo opportunities to have sex. But this doesn’t alter the basic biological fact that getting burned is painful and having sex is pleasurable. Whether or not an activity is pleasant is determined, after all, by our wiring, and we do not have it in our power—not yet, at any rate—to alter this wiring.
Why are we wired to be able to experience pleasure and pain? Why aren’t we wired to be immune to pain while retaining our ability to experience pleasure? And given that we possess the ability to experience both pleasure and pain, why do we find a particular activity to be pleasant rather than painful? Why, for example, do we find it pleasant to have sex but unpleasant to get burned? Why not the other way around? I have given the long answer to these questions elsewhere. For our present purposes—namely, to explain why we have the social needs we do—the short answer will suffice.
We have the ability to experience pleasure and pain because our evolutionary ancestors who had this ability were more likely to survive and reproduce than those who didn’t. Creatures with this ability could, after all, be rewarded (with pleasurable feelings) for engaging in certain activities and punished (with unpleasant feelings) for engaging in others. More precisely, they could be rewarded for doing things (such as having sex) that would increase their chances of surviving and reproducing, and be punished for doing things (such as burning themselves) that would lessen their chances.
This makes it sound as if a designer was responsible for our wiring, but evolutionary psychologists would reject this notion. Evolution, they would remind us, has no designer and no goal. To the contrary, species evolve because some of their members, thanks to the genetic luck-of-the-draw, have a makeup that increases their chances of surviving and reproducing. As a result, they (probably) have more descendants than genetically less fortunate members of their species. And because they spread their genes more effectively, they have a disproportionate influence on the genetic makeup of future members of their species.
Evolutionary psychologists would go on to remind us that if our evolutionary ancestors had found themselves in a different environment, we would be wired differently and as a result would find different things to be pleasant and unpleasant. Suppose that getting burned, rather than being detrimental to our evolutionary ancestors, had somehow increased their chances of surviving and reproducing. Under these circumstances, those individuals who were wired so that it felt good to get burned would have been more effective at spreading their genes than those who were wired so that it felt bad. And as a result we, their descendants, would also be wired so that it felt good to get burned.
Evolutionary psychologists would also remind us that the evolutionary process is imperfect. For one thing, although the wiring we inherited from our ancestors might have allowed them to flourish on the savannahs of Africa, it isn’t optimal for the rather different environment in which we today find ourselves. Our ancestors who had a penchant for consuming sweet, fattening foods, for example, were less likely to starve than those who didn’t. The problem is that we who have inherited that penchant live in an environment in which sweet, fattening foods are abundant. In this environment, being wired so that it is pleasant to consume, say, ice cream, increases our chance of getting heart disease and other illnesses, and thereby arguably lessens our chance of surviving.
Chewing gum helps you concentrate for longer
Chewing gum can help you stay focused for longer on tasks that require continuous monitoring.
This is the finding of new research by Kate Morgan and colleagues from Cardiff University published in the British Journal of Psychology.
Previous research has shown that chewing gum can improve concentration in visual memory tasks. This study focussed on the potential benefits of chewing gum during an audio memory task.
Kate Morgan, author of the study explained: “It’s been well established by previous research that chewing gum can benefit some areas of cognition. In our study we focussed on an audio task that involved short-term memory recall to see if chewing gum would improve concentration; especially in the latter stages of the task.”
The study involved 38 participants split into two groups. Both groups completed a 30-minute audio task that involved listening to a list of numbers from 1 to 9 being read out in random order. Participants were scored on how accurately and quickly they were able to detect a sequence of odd-even-odd numbers, such as 7-2-1. Participants also completed questionnaires on their mood both before and after the task.
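The detection rule the participants were scored on is simple enough to sketch in a few lines. The helper below is a hypothetical illustration of the task's logic, not the study's actual software:

```python
def count_oeo_targets(stream):
    """Return the start indices of every odd-even-odd triple
    (e.g. 7-2-1) in a stream of digits 1-9, the target pattern
    participants had to detect in the audio task."""
    hits = []
    for i in range(len(stream) - 2):
        a, b, c = stream[i:i + 3]
        if a % 2 == 1 and b % 2 == 0 and c % 2 == 1:
            hits.append(i)
    return hits

# Triples can overlap: 7-2-1 and 1-8-3 share the digit 1.
print(count_oeo_targets([4, 7, 2, 1, 8, 3, 6, 5]))  # → [1, 3, 5]
```

Because targets can overlap and arrive continuously for 30 minutes, the task demands sustained monitoring, which is exactly the ability the gum-chewing manipulation was meant to probe.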
The results showed that participants who chewed gum had quicker reaction times and more accurate results than the participants who didn’t chew gum. This was especially the case towards the latter parts of the task.
Kate explained: “Interestingly participants who didn’t chew gum performed slightly better at the beginning of the task but were overtaken by the end. This suggests that chewing gum helps us focus on tasks that require continuous monitoring over a longer amount of time.”
The study was discussed on BBC Radio 4’s Today programme.
(Image: iStock)