Posts tagged psychology

Out of shape? Your memory may suffer
Here’s another reason to drop that doughnut and hit the treadmill: A new study suggests aerobic fitness affects long-term memory.
Michigan State University researchers tested 75 college students during a two-day period and found those who were less fit had a harder time retaining information.
“The findings show that lower-fit individuals lose more memory across time,” said Kimberly Fenn, study co-author and assistant professor of psychology.
The study, which appears online in the research journal Cognitive, Affective & Behavioral Neuroscience, is one of the first to investigate young, supposedly healthy adults. Previous research on fitness and memory has focused largely on children, whose brains are still developing, and the elderly, whose memories are declining.
Participants studied related word pairs such as “camp” and “trail.” The next day, they were tested on the word pairs to evaluate long-term memory retention. Long-term memory encompasses anything remembered for more than about 30 seconds.
Aerobic fitness was gauged by oxygen consumption derived from a treadmill test and factored with the participants’ weight, percent body fat, age and sex.
The findings speak to the increasingly sedentary lifestyles found in the United States and other Western cultures. A surprising number of the college students in the study were significantly out of shape and did much worse at retaining information than those who were extremely fit, Fenn said.
Her co-authors included kinesiology researchers Matthew Pontifex and Karin Pfeiffer.
Activity in areas of the brain related to reward and self-control may offer neural markers that predict whether people are likely to resist or give in to temptations, like food, in daily life, according to research in Psychological Science, a journal of the Association for Psychological Science.

“Most people have difficulty resisting temptation at least occasionally, even if what tempts them differs,” say psychological scientists Rich Lopez and Todd Heatherton of Dartmouth College, authors on the study. “The overarching motivation of our work is to understand why some people are more likely to experience this self-regulation failure than others.”
The research findings reveal that activity in reward areas of the brain in response to pictures of appetizing food predicts whether people tend to give in to food cravings and desires in real life, whereas activity in prefrontal areas during taxing self-control tasks predicts their ability to resist tempting food.
Lopez and colleagues used functional MRI (fMRI) to explore the interplay between activity in prefrontal brain regions associated with self-control (e.g., inferior frontal gyrus) and subcortical areas involved in affect and reward (e.g., nucleus accumbens), and to see whether the interplay between these regions predicts how successful (or unsuccessful) people are in controlling their desires to eat on a daily basis.
The researchers recruited 31 female participants to take part in an initial fMRI scanning session that included two important tasks.
For the first task, the participants were presented with various images, including some of high-calorie foods, like dessert items, fast-food items, and snacks. The participants were simply asked to indicate whether each image was set indoors or outdoors — the researchers were specifically interested in measuring activity in the nucleus accumbens in response to the food-related images.
For the second task, the participants were asked to press or not press a button based on the specific cues provided with each image, a task designed to gauge self-control ability. During this task, the researchers measured activity in the inferior frontal gyrus (IFG).
The fMRI scanning session was followed by 1 week of so-called “experience sampling,” in which participants were signaled several times a day on a smartphone and asked to report their food desires and eating behaviors. Any time participants reported a food desire, they were then asked about the strength of the desire and their resistance to it. If they ultimately gave in to the craving, they were asked to say how much they had eaten.
As expected, participants who had relatively higher activity in the nucleus accumbens in response to the food images tended to experience more intense food desires. More importantly, they were also more likely to give in to their food cravings and eat the desired food.
The researchers were surprised by how robust this association was:
“Reward-related brain activity, which can be considered an implicit measure, predicted who gave in to temptations to eat, as well as who ate more, above and beyond the desire strength reported by participants in the moment,” say Lopez and Heatherton. “This could help to explain a previous finding from our lab that people who show this kind of brain activity the most are also the most likely to gain weight over six months.”
But brain activity also predicted who was more likely to be able to resist temptation: Participants who showed relatively higher IFG activity on the self-control task acted on their cravings less often.
When the researchers grouped the participants according to their IFG activity, the data revealed that participants who had high IFG activity were more successful at controlling how much they ate in particularly tempting situations than those who had low IFG activity. In fact, participants with low IFG activity were about 8.2 times more likely to give in to a food desire than those who had high IFG activity.
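The 8.2-fold figure reads like an odds ratio, which compares the odds of giving in between the two groups. As a quick illustration, here is how such a ratio would be computed; the counts below are made up for the example and are not data from the paper:

```python
def odds_ratio(gave_in_a, resisted_a, gave_in_b, resisted_b):
    """Odds of giving in for group A divided by the odds for group B."""
    return (gave_in_a / resisted_a) / (gave_in_b / resisted_b)

# Hypothetical counts (NOT from the study): a low-IFG group that gave in
# to 41 of 51 desires versus a high-IFG group that gave in to 5 of 15.
print(odds_ratio(41, 10, 5, 10))  # → 8.2
```

With these invented counts, the low-IFG group's odds of giving in (41/10) are 8.2 times the high-IFG group's odds (5/10), matching the magnitude the study reports.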
“Taken together, the results from the present study provide initial evidence for neural markers of everyday eating behaviors that can identify individuals who are more likely than others to give in to temptations to eat,” the researchers write.
Lopez, Heatherton, and colleagues are currently conducting studies focused on groups of people who are especially prone to self-regulation failure: chronic dieters.
They’re investigating, for example, how dieters’ brains respond to food cues after they’ve exhausted their self-control resources. The researchers hypothesize that depleting self-control may heighten reward-related brain activity, effectively “turning up the volume on temptations,” and predicting behaviors like overeating in daily life.
“Failures of self-control contribute to nearly half of all deaths in the United States each year,” the researchers note. “Our findings and future research may ultimately help people learn ways to resist their temptations.”
In recognizing speech sounds, the brain does not work the way a computer does
How does the brain decide whether or not something is correct? When it comes to the processing of spoken language – particularly whether or not certain sound combinations are allowed in a language – the common theory has been that the brain applies a set of rules to determine whether combinations are permissible. Now the work of a Massachusetts General Hospital (MGH) investigator and his team supports a different explanation – that the brain decides whether or not a combination is allowable based on words that are already known. The findings may lead to better understanding of how brain processes are disrupted in stroke patients with aphasia and also address theories about the overall operation of the brain.
"Our findings have implications for the idea that the brain acts as a computer, which would mean that it uses rules – the equivalent of software commands – to manipulate information. Instead it looks like at least some of the processes that cognitive psychologists and linguists have historically attributed to the application of rules may instead emerge from the association of speech sounds with words we already know," says David Gow, PhD, of the MGH Department of Neurology.
"Recognizing words is tricky – we have different accents and different, individual vocal tracts; so the way individuals pronounce particular words always sounds a little different," he explains. "The fact that listeners almost always get those words right is really bizarre, and figuring out why that happens is an engineering problem. To address that, we borrowed a lot of ideas from other fields and people to create powerful new tools to investigate, not which parts of the brain are activated when we interpret spoken sounds, but how those areas interact."
Human beings speak more than 6,000 distinct languages, and each language allows some ways to combine speech sounds into sequences but prohibits others. Although individuals are not usually conscious of these restrictions, native speakers have a strong sense of whether or not a combination is acceptable.
“Most English speakers could accept ‘doke’ as a reasonable English word, but not ‘lgef’,” Gow explains. “When we hear a word that does not sound reasonable, we often mishear or repeat it in a way that makes it sound more acceptable. For example, the English language does not permit words that begin with the sounds ‘sr-,’ but that combination is allowed in several languages, including Russian. As a result, most English speakers pronounce the Sanskrit word ‘sri’ – as in the name of the island nation Sri Lanka – as ‘shri,’ a combination of sounds found in English words like shriek and shred.”
Gow’s method of investigating how the human brain perceives and distinguishes among elements of spoken language combines electroencephalography (EEG), which records electrical brain activity; magnetoencephalography (MEG), which measures the subtle magnetic fields produced by brain activity; and magnetic resonance imaging (MRI), which reveals brain structure. Data gathered with those technologies are then analyzed using Granger causality, a method developed to determine cause-and-effect relationships among economic events, along with a Kalman filter, a procedure used to navigate missiles and spacecraft by predicting where something will be in the future. The results are “movies” of brain activity showing not only where and when activity occurs but also how signals move across the brain on a millisecond-by-millisecond level, information no other research team has produced.
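The core idea behind Granger causality can be sketched in a few lines: a signal x is said to “Granger-cause” a signal y if x’s past improves prediction of y beyond what y’s own past provides. The toy bivariate version below is an illustration of that idea only, not the team’s Kalman-filter-based pipeline:

```python
import numpy as np

def _rss(X, y):
    """Residual sum of squares of an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

def granger_f(x, y, lag=2):
    """F-statistic for 'x Granger-causes y' at the given lag."""
    n = len(y)
    Y = y[lag:]
    # Lagged copies: column k holds the series shifted back by k steps.
    y_lags = np.column_stack([y[lag - k : n - k] for k in range(1, lag + 1)])
    x_lags = np.column_stack([x[lag - k : n - k] for k in range(1, lag + 1)])
    rss_restricted = _rss(y_lags, Y)                      # y's own past only
    rss_full = _rss(np.hstack([y_lags, x_lags]), Y)       # plus x's past
    dof = len(Y) - 2 * lag - 1
    return ((rss_restricted - rss_full) / lag) / (rss_full / dof)

# Synthetic example: y follows x with a one-step delay.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.8 * np.roll(x, 1) + rng.normal(scale=0.3, size=500)
print(granger_f(x, y) > granger_f(y, x))  # True: x's past predicts y, not the reverse
```

A larger F-statistic in one direction than the other suggests an asymmetric, directed influence, which is the kind of information the “movies” of inter-regional signal flow described above are built from.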
In a paper published earlier this year in the online journal PLOS ONE, Gow and his co-author Conrad Nied, now a PhD candidate at the University of Washington, described their investigation of how the neural processes involved in the interpretation of sound combinations differ depending on whether or not a combination would be permitted in the English language. Their goal was to determine which of three potential mechanisms is actually involved in the way humans “repair” impermissible sound combinations – the application of rules regarding sound combinations, the frequency with which particular combinations have been encountered, or whether the sound combinations occur in known words.
The study enrolled 10 adult American English speakers who listened to a series of recordings of spoken nonsense syllables that began with sounds ranging from “s” to “shl” – a combination not found at the beginning of English words – and indicated by means of a button push whether they heard an initial “s” or “sh.” EEG and MEG readings were taken during the task, and the results were projected onto MR images taken separately. Analysis focused on 22 regions of interest where brain activation increased during the task, with particular attention to those regions’ interactions with an area previously shown to play a role in identifying speech sounds.
While the results revealed complex patterns of interaction between the measured regions, the areas that had the greatest effect on regions that identify speech sounds were regions involved in the representation of words, not those responsible for rules. “We found that it’s the areas of the brain involved in representing the sound of words, not sounds in isolation or abstract rules, that send back the important information. And the interesting thing is that the words you know give you the rules to follow. You want to put sounds together in a way that’s easy for you to hear and to figure out what the other person is saying,” explains Gow, who is a clinical instructor in Neurology at Harvard Medical School and a professor of Psychology at Salem State University.

(Image caption: MRI images from a neurotypical control (left) and an adult with complete agenesis of the corpus callosum (right). The corpus callosum is indicated in red, fading as the fibers enter the hemispheres in order to suggest that they continue on. The anterior commissure is indicated by light aqua. The image illustrates the dramatic lack of interhemispheric connections in callosal agenesis. Credit: Lynn Paul/Caltech)
Research Update: An Autism Connection
Building on their prior work, a team of neuroscientists at Caltech now report that rare patients who are missing connections between the left and right sides of their brain—a condition known as agenesis of the corpus callosum (AgCC)—show a strikingly high incidence of autism. The study is the first to show a link between the two disorders.
The findings are reported in a paper published April 22, 2014, in the journal Brain.
The corpus callosum is the largest connection in the human brain, connecting the left and right brain hemispheres via about 200 million fibers. In very rare cases it is surgically cut to treat epilepsy—causing the famous “split-brain” syndrome, for whose discovery the late Caltech professor Roger Sperry received the Nobel Prize. People with AgCC are like split-brain patients in that they are missing their corpus callosum—except they are born this way. In spite of this significant brain malformation, many of these individuals are relatively high-functioning, with jobs and families, but they tend to have difficulty interacting with other people, among other symptoms such as memory deficits and developmental delays. These difficulties in social behavior bear a strong resemblance to those faced by high-functioning people with autism spectrum disorder.
"We and others had noted this resemblance between AgCC and autism before," explains Lynn Paul, lead author of the study and a lecturer in psychology at Caltech. But no one had directly compared the two groups of patients. This was a comparison the Caltech team was uniquely positioned to make, she says, since it had studied patients from both groups over the years and had tested them on the same tasks.
"When we made detailed comparisons, we found that about a third of people with AgCC would meet diagnostic criteria for an autism spectrum disorder in terms of their current symptoms," says Paul, who was the founding president of the National Organization for Disorders of the Corpus Callosum.
The research was done in the laboratory of Ralph Adolphs, Bren Professor of Psychology and Neuroscience and professor of biology at Caltech and a coauthor of the study. The team looked at a range of different tasks performed by both sets of patients. Some of the exercises that involved certain social behaviors were videotaped and analyzed by the researchers to assess for autism. The team also gave the individuals questionnaires to fill out that measured factors like intelligence and social functioning.
"Comparing different clinical groups on exactly the same tasks within the same lab is very rare, and it took us about a decade to accrue all of the data," Adolphs notes.
One important difference between the two sets of patients did emerge in the comparison. People with autism spectrum disorder showed autism-like behaviors in infancy and early childhood, but the same type of behaviors did not seem to emerge in individuals with AgCC until later in childhood or the teen years.
"Around ages 9 through 12, a normally formed corpus callosum goes through a developmental ‘growth spurt’ which contributes to rapid advances in social skills and abstract thinking during those years," notes Paul. "Because they don’t have a corpus callosum, teens with AgCC become more socially awkward at the age when social skills are most important."
According to Adolphs, it is important to note that AgCC can now be diagnosed before a baby is born, using high-resolution ultrasound imaging during pregnancy. This latest development also opens the door for some exciting future directions in research.
"If we can identify people with AgCC already before birth, we should be in a much better position to provide interventions like social skills training before problems arise," Paul points out. "And of course from a research perspective it would be tremendously valuable to begin studying such individuals early in life, since we still know so little both about autism and about AgCC."
For example, the team would like to discern at what age subtle difficulties first appear in AgCC individuals, and at what point they start looking similar to autism, as well as what happens in the brain during these changes.
"If we could follow a baby with AgCC as it grows up, and visualize its brain with MRI each year, we would gain such a wealth of knowledge," Adolphs says.
You know what you’re going to say before you say it, right? Not necessarily, research suggests. A study from researchers at Lund University in Sweden shows that auditory feedback plays an important role in helping us determine what we’re saying as we speak. The study is published in Psychological Science, a journal of the Association for Psychological Science.
“Our results indicate that speakers listen to their own voices to help specify the meaning of what they are saying,” says researcher Andreas Lind of Lund University, lead author of the study.

Theories about how we produce speech often assume that we start with a clear, preverbal idea of what to say that goes through different levels of encoding to finally become an utterance.
But the findings from this study support an alternative model in which speech is more than just a dutiful translation of this preverbal message:
“These findings suggest that the meaning of an utterance is not entirely internal to the speaker, but that it is also determined by the feedback we receive from our utterances, and from the inferences we draw from the wider conversational context,” Lind explains.
For the study, Lind and colleagues recruited Swedish participants to complete a classic Stroop test, which provided a controlled linguistic setting. During the Stroop test, participants were presented with various color words (e.g., “red” or “green”) one at a time on a screen and were tasked with naming the color of the font that each word was printed in, rather than the color that the word itself signified.
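The trial structure of a Stroop test is simple to sketch. In the version below, the word lists and colors are illustrative assumptions rather than the study’s actual stimuli; the key point is that the correct response is always the font color, never the printed word:

```python
# Illustrative Stroop trials (stimuli are assumptions, not from the study).
trials = [
    {"word": "red",   "font_color": "red"},    # congruent trial
    {"word": "red",   "font_color": "green"},  # incongruent trial
    {"word": "green", "font_color": "red"},    # incongruent trial
]

def correct_response(trial):
    """The correct answer is the font color, not the word's meaning."""
    return trial["font_color"]

def is_incongruent(trial):
    """Incongruent trials are the ones that create interference."""
    return trial["word"] != trial["font_color"]

print([(correct_response(t), is_incongruent(t)) for t in trials])
# [('red', False), ('green', True), ('red', True)]
```

Because every trial has a single unambiguous correct spoken response drawn from a small, fixed vocabulary, the task gives the experimenters a tightly controlled setting in which to swap words in the auditory feedback.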
The participants wore headphones that provided real-time auditory feedback as they took the test — unbeknownst to them, the researchers had rigged the feedback using a voice-triggered playback system. This system allowed the researchers to substitute specific phonologically similar but semantically distinct words (“grey”, “green”) in real time, a technique they call “Real-time Speech Exchange” or RSE.
Data from the 78 participants indicated that when the timing of the insertions was right, only about one third of the exchanges were detected.
On many of the non-detected trials, when asked to report what they had said, participants reported the word they had heard through feedback, rather than the word they had actually said. Because accuracy on the task was actually very high, the manipulated feedback effectively led participants to believe that they had made an error and said the wrong word.
Overall, Lind and colleagues found that participants accepted the manipulated feedback as having been self-produced on about 85% of the non-detected trials.
Together, these findings suggest that our understanding of our own utterances, and our sense of agency for those utterances, depend to some degree on inferences we make after we’ve made them.
Most surprising, perhaps, is the fact that while participants received several indications about what they actually said — from their tongue and jaw, from sound conducted through the bone, and from their memory of the correct alternative on the screen — they still treated the manipulated words as though they were self-produced.
This suggests, says Lind, that the effect may be even more pronounced in everyday conversation, which is less constrained and more ambiguous than the context offered by the Stroop test.
“In future studies, we want to apply RSE to situations that are more social and spontaneous — investigating, for example, how exchanged words might influence the way an interview or conversation develops,” says Lind.
“While this is technically challenging to execute, it could potentially tell us a great deal about how meaning and communicative intentions are formed in natural discourse,” he concludes.

Stress is contagious. Observing another person in a stressful situation can be enough to make our own bodies release the stress hormone cortisol. This is the conclusion reached by scientists involved in a large-scale cooperation project between the departments of Tania Singer at the Max Planck Institute for Cognitive and Brain Sciences in Leipzig and Clemens Kirschbaum at the Technische Universität Dresden. Empathic stress arose primarily when the observer and stressed individual were partners in a couple relationship and the stressful situation could be directly observed through a one-way mirror. However, even the observation of stressed strangers via video transmission was enough to put some people on red alert. In our stress-ridden society, empathic stress is a phenomenon that should not be ignored by the health care system.
Stress is a major health threat in today’s society. It causes a range of psychological problems like burnout, depression and anxiety. Even those who lead relatively relaxed lives constantly come into contact with stressed individuals. Whether at work or on television, someone is always experiencing stress, and this stress can affect the general environment in a physiologically quantifiable way through increased concentrations of the stress hormone cortisol.
“The fact that we could actually measure this empathic stress in the form of a significant hormone release was astonishing,” says Veronika Engert, one of the study’s first authors. This is particularly true considering that many studies have difficulty inducing firsthand stress in the first place. The authors found that empathic stress reactions could be independent of (“vicarious stress”) or proportional to (“stress resonance”) the stress reactions of the actively stressed individuals. “There must be a transmission mechanism via which the target’s state can elicit a similar state in the observer down to the level of a hormonal stress response.”
During the stress test, the test subjects had to struggle with difficult mental arithmetic tasks and interviews, while two supposed behavioural analysts assessed their performance. Only five percent of the directly stressed test subjects managed to remain calm; the others displayed a physiologically significant increase in their cortisol levels.
In total, 26 percent of observers who were not directly exposed to any stress whatsoever also showed a significant increase in cortisol. The effect was particularly strong when observer and stressed individual were partners in a couple relationship (40 percent). However, even when watching a complete stranger, the stress was transmitted to ten percent of the observers. Accordingly, emotional closeness is a facilitator but not a necessary condition for the occurrence of empathic stress.
When the observers watched the events directly through a one-way mirror, 30 percent of them experienced a stress response. However, even presenting the stress test only virtually via video transmission was sufficient to significantly increase the cortisol levels of 24 percent of the observers. “This means that even television programmes depicting the suffering of other people can transmit that stress to viewers,” says Engert. “Stress has enormous contagion potential.”
Stress becomes a problem primarily when it is chronic. “A hormonal stress response has an evolutionary purpose, of course. When you are exposed to danger, you want your body to respond with an increase in cortisol,” explains Engert. “However, permanently elevated cortisol levels are not good. They have a negative impact on the immune system and, in the long term, neurotoxic properties.” Thus, individuals working as caregivers, or the family members of chronically stressed individuals, have an increased risk of suffering the potentially harmful consequences of empathic stress. Anyone who is confronted with the suffering and stress of another person, particularly when sustained, has a higher risk of being affected by it themselves.
The results of the study also debunked a common prejudice: men and women actually experience empathic stress reactions with equal frequency. “In surveys, however, women tend to assess themselves as being more empathic than men do. This self-perception does not seem to hold when probed by implicit measures.”
Future studies are intended to reveal exactly how the stress is transmitted and what can be done to reduce its potentially negative influence on society.
You Took the Words Right Out of My Brain
Our brain activity is more similar to that of speakers we are listening to when we can predict what they are going to say, a team of neuroscientists has found. The study, which appears in the Journal of Neuroscience, provides fresh evidence on the brain’s role in communication.
“Our findings show that the brains of both speakers and listeners take language predictability into account, resulting in more similar brain activity patterns between the two,” says Suzanne Dikker, the study’s lead author and a post-doctoral researcher in New York University’s Department of Psychology and Utrecht University. “Crucially, this happens even before a sentence is spoken and heard.”
“A lot of what we’ve learned about language and the brain has been from controlled laboratory tests that tend to look at language in the abstract—you get a string of words or you hear one word at a time,” adds Jason Zevin, an associate professor of psychology and linguistics at the University of Southern California and one of the study’s co-authors. “They’re not so much about communication, but about the structure of language. The current experiment is really about how we use language to express common ground or share our understanding of an event with someone else.”
The study’s other authors were Lauren Silbert, a recent PhD graduate from Princeton University, and Uri Hasson, an assistant professor in Princeton’s Department of Psychology.
Traditionally, it was thought that our brains always process the world around us from the “bottom up”—when we hear someone speak, our auditory cortex first processes the sounds, and then other areas in the brain put those sounds together into words and then sentences and larger discourse units. From here, we derive meaning and an understanding of the content of what is said to us.
However, in recent years, many neuroscientists have shifted to a “top-down” view of the brain, which they now see as a “prediction machine”: We are constantly anticipating events in the world around us so that we can respond to them quickly and accurately. For example, we can predict words and sounds based on context—and our brain takes advantage of this. For instance, when we hear “Grass is…” we can easily predict “green.”
What’s less understood is how this predictability might affect the speaker’s brain, or even the interaction between speakers and listeners.
In the Journal of Neuroscience study, the researchers collected brain responses from a speaker while she described images that she had viewed. These images varied in terms of likely predictability for a specific description. For instance, one image showed a penguin hugging a star (a relatively easy image in which to predict a speaker’s description). However, another image depicted a guitar stirring a bicycle tire submerged in a boiling pot of water—a picture that is much less likely to yield a predictable description: Is it “a guitar cooking a tire,” “a guitar boiling a wheel,” or “a guitar stirring a bike”?
Then, another group of subjects listened to those descriptions while viewing the same images. During this period, the researchers monitored the subjects’ brain activity.
When comparing the speaker’s brain responses directly to the listeners’ brain responses, they found that activity patterns in brain areas where spoken words are processed were more similar between the listeners and the speaker when the listeners could predict what the speaker was going to say.
When listeners can predict what a speaker is going to say, the authors suggest, their brains take advantage of this by sending a signal to their auditory cortex that it can expect sound patterns corresponding to predicted words (e.g., “green” while hearing “grass is…”). Interestingly, they add, the speaker’s brain is showing a similar effect as she is planning what she will say: brain activity in her auditory language areas is affected by how predictable her utterance will be for her listeners.
“In addition to facilitating rapid and accurate processing of the world around us, the predictive power of our brains might play an important role in human communication,” notes Dikker, who conducted some of the research as a post-doctoral fellow at Weill Cornell Medical College’s Sackler Institute for Developmental Psychobiology. “During conversation, we adapt our speech rate and word choices to each other—for example, when explaining science to a child as opposed to a fellow scientist—and these processes are governed by our brains, which correspondingly align to each other.”
Methylphenidate, also known as Ritalin, may prevent the depletion of self-control, according to research published in Psychological Science, a journal of the Association for Psychological Science.

Self-control can be difficult — sticking with a diet or trying to focus attention on a boring textbook are hard things to do. Considerable research suggests one potential explanation for this difficulty: Exerting self-control for a long period seems to “deplete” our ability to exert self-control effectively on subsequent tasks.
“It is as if self-control is a limited resource that ‘runs out’ if it is used too much,” says lead researcher Chandra Sripada of the University of Michigan. “If we could figure out the brain mechanisms that cause regulatory depletion, then maybe we could find a way to prevent it.”
Previous research has implicated the neurotransmitters dopamine and norepinephrine in regulatory processing. Sripada and University of Michigan collaborators Daniel Kessler and John Jonides decided to see whether manipulating levels of these transmitters might affect regulatory depletion.
The researchers tested 108 adult participants, all of whom took a drug capsule 60 minutes prior to testing. Half of the participants received a capsule that contained methylphenidate, a medication used to treat ADHD that increases brain dopamine and norepinephrine. The other half received a placebo capsule. The study was double-blind, so neither the participants nor the researchers knew at the time of testing who had received which capsule.
The participants then completed a computer-based task in which they were required to press a button when a word containing the letter e appeared on screen. Some were given modified instructions that asked them to refrain from pressing the button if the letter e was next to or one extra letter away from another vowel — this version of the task was designed to tax participants’ self-control.
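The go/no-go rule can be sketched roughly as follows. The details (which vowels count as “another vowel,” exact distances) are assumptions inferred from the description, not the study’s precise stimulus coding:

```python
VOWELS = set("aeiou")

def should_press(word, taxing=False):
    """Sketch of the described button-press rule (details assumed):
    press if the word contains an 'e'; in the taxing version, withhold
    the press when an 'e' sits one or two letters from another vowel."""
    e_positions = [i for i, ch in enumerate(word) if ch == "e"]
    if not e_positions:
        return False          # no 'e': never press
    if not taxing:
        return True           # simple version: any 'e' means press
    for i in e_positions:
        for j in (i - 2, i - 1, i + 1, i + 2):
            if 0 <= j < len(word) and word[j] in VOWELS and j not in e_positions:
                return False  # 'e' too close to another vowel: withhold press
    return True

print(should_press("cloud"))               # False: no 'e'
print(should_press("trek", taxing=True))   # True: no vowel near the 'e'
print(should_press("bread", taxing=True))  # False: 'e' is adjacent to 'a'
```

The taxing version is demanding precisely because the dominant response (press whenever an ‘e’ appears) must be inhibited on a subset of trials, which is what depletes self-control.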
All of the participants then completed a second computer task aimed at testing their ability to process competing information and exert regulatory control in order to make a correct response.
In line with the researchers’ hypotheses, participants who received the placebo and performed the taxing version of the first task showed greater variability in how quickly they responded in the second task, compared to those whose self-control hadn’t been depleted in the first task.
But for those participants who took the methylphenidate capsule, the first task didn’t have an effect on later performance — the methylphenidate seemed to counteract the self-regulatory depletion incurred by the harder version of the first task.
“These results indicate that depletion of self-control due to prior effort can be fully blocked pharmacologically,” says Sripada. “The task we give people to deplete their self-control is pretty cognitively demanding, so we were surprised at how effective methylphenidate was in blocking depletion of self-control.”
Sripada and colleagues suggest that methylphenidate may help to boost performance of the specific circuits in the brain’s prefrontal cortex that are normally compromised after sustained exertion of self-control.
This doesn’t mean, however, that those of us looking to boost our self-control should go out and get some Ritalin:
“Methylphenidate is a powerful psychotropic medicine that should only be taken with a prescription,” says Sripada. “We want to use this research to better understand the brain mechanisms that lead to depletion of self-control, and what interventions — pharmacological or behavioral — might prevent this.”
Chimpanzees may throw tantrums like toddlers, but their total brain size suggests they have more self-control than, say, a gerbil or fox squirrel, according to a new study of 36 species of mammals and birds ranging from orangutans to zebra finches.

Scientists at Duke University, UC Berkeley, Stanford, Yale and more than two dozen other research institutions collaborated on this first large-scale investigation into the evolution of self-control, defined in the study as the ability to inhibit powerful but ultimately counter-productive behavior. They found that the species with the largest brain volume – not volume relative to body size – showed superior cognitive powers in a series of food-foraging experiments.
Moreover, animals with the most varied diets showed the most self-restraint, according to the study published in the Proceedings of the National Academy of Sciences.
“The study levels the playing field on the question of animal intelligence,” said UC Berkeley psychologist Lucia Jacobs, a co-author of this study and of its precursor, a 2012 paper in the journal Animal Cognition.
This latest study was led by evolutionary anthropologists Evan MacLean, Brian Hare and Charles Nunn of Duke University. The findings challenge prevailing assumptions that “relative” brain size is a more accurate predictor of intelligence than “absolute” brain size. One possibility, they posited, is that “as brains get larger, the total number of neurons increases and brains tend to become more modularized, perhaps facilitating the evolution of new cognitive networks.”
While participating researchers all performed the same series of experiments, they did so on their own turf and on their own animal subjects. Data were provided on bonobos, chimpanzees, gorillas, olive baboons, stump-tailed macaques, golden snub-nosed monkeys, brown, red-bellied and aye-aye lemurs, coyotes, dogs, gray wolves, Asian elephants, domestic pigeons, orange-winged amazons, Eurasian jays, western scrub jays, zebra finches and swamp sparrows.
Food inside a tube used as bait
In one experiment, creatures large and small were tested to see if they would advance toward a clear cylinder visibly containing food – showing a lack of self-restraint – after they had been trained to access the food through a side opening in an opaque cylinder. Large-brained primates such as gorillas quickly navigated their way to the treat or “bait.” Smaller-brained animals did so with mixed results.
Jacobs and UC Berkeley doctoral student Mikel Delgado contributed the only rodent data in the study, putting some of the campus’s fox squirrels and some Mongolian gerbils in their lab through food-foraging tasks.
Mixed results on campus squirrels’ self-restraint
In the case of the fox squirrels, the red-hued, bushy-tailed critters watched as the food was placed in a side opening of an opaque cylinder. Once they demonstrated a familiarity with the location of the opening, the food was moved to a transparent cylinder and the real test began. If the squirrels lunged directly at the food inside the cylinder, they had failed to inhibit their response. But if they used the side entrance, the move was deemed a success.
“About half of the squirrels and gerbils did well and inhibited the direct approach in more than seven out of 10 trials,” Delgado said. “The rest didn’t do so well.”
In a second test, three cups (A, B and C) were placed in a row on their sides so the animals could see which one contained food. It was usually cup A. The cups were then turned upside down so the “baited” cup could no longer be seen. If the squirrels touched the cup with the food three times in a row, they graduated to the next round. This time, the food was moved from cup A to cup C at the other end of the row.
“The question was, would they approach cup A, where they had originally learned the food was placed, or could they update this learned response to get the food from a new location?” Delgado said. “The squirrels and gerbils tended to go to the original place they had been trained to get food, showing a failure to inhibit what they originally learned.”
“It might be that a squirrel’s success in life is affected the same way as in people,” Jacobs said, “by its ability to slow down and think a bit before it snatches at a reward.”
(Source: newscenter.berkeley.edu)

Smoking’s toll on mentally ill analyzed
Adults in the United States with a mental illness diagnosis are much more likely to smoke cigarettes, smoke more heavily, and are less likely to quit smoking than those without mental illness, regardless of their specific diagnosis, a new study by researchers from the Yale School of Medicine shows.
They also found variations in smoking rates and likelihood of quitting among different diagnoses of mental illness. The results are reported in the April issue of the journal Tobacco Control.
Thirty-nine percent of adults with a psychiatric diagnosis smoked, compared to 16% of those without a diagnosis, according to data from the National Epidemiologic Survey on Alcohol and Related Conditions analyzed by the researchers. Two out of every three people with a drug use disorder smoke, compared to one out of three with social phobia.
“We know that smokers with mental illness are more susceptible to smoking-related disease, and those with mental illness die 25 years earlier than adults without mental illness,” said Sherry McKee, associate professor of psychiatry, and senior author on the study. “Effective smoking cessation treatments are available and we know that smokers with mental illness can quit smoking. We need to address why smokers with mental illness are not being treated for their smoking.”
Over the three-year study period, 22% of smokers with no psychiatric disorders were able to quit smoking, whereas rates of quitting among those with psychiatric disorders were 25% lower. Rates of quitting were lowest among those with dysthymia (10%), agoraphobia (13%), and social phobia (13%). “We also found that individuals with multiple diagnoses had the lowest quit rates,” added Philip Smith, lead author on the study.
This study adds to evidence that smokers with mental illness consume nearly half of all cigarettes in the United States, despite making up a substantially smaller proportion of the population.
Researchers and policymakers are increasingly calling attention to this important public health issue, and this study helps point to a need for interventions and policy that directly help individuals with mental illness quit smoking.