Neuroscience

Articles and news from the latest research reports.

Posts tagged psychology

319 notes

Video: The animation traces the paths traveled by an OCD patient who is about to leave his apartment (left) and by a patient with co-morbid OCD and schizophrenia performing the same task (right). Black circles indicate the number of acts performed in each location. As shown, the OCD patient is mostly stationary, while the schizo-OCD patient travels all over the apartment.

The Difference Between Obsession and Delusion

TAU researchers use a zoological method to classify symptoms of OCD and schizophrenia in humans

Because animals can’t talk, researchers need to study their behavior patterns to make sense of their activities. Now researchers at Tel Aviv University are using these zoological methods to study people with serious mental disorders.

Prof. David Eilam of TAU’s Zoology Department at The George S. Wise Faculty of Life Sciences recorded patients with obsessive-compulsive disorder and “schizo-OCD” — which combines symptoms of schizophrenia and OCD — as they performed basic tasks. By analyzing the patients’ movements, he and his colleagues were able to identify similarities and differences between the two frequently confused disorders.

Published in the journal CNS Spectrums, the research represents a step toward resolving a longstanding question about the nature of schizo-OCD: Is it a combination of OCD and schizophrenia, or a variation of just one of the disorders?

The researchers concluded that schizo-OCD is a combination of the two disorders. They noted that the behavioral differences identified in the study could be used to help diagnose patients with OCD and other obsessive-compulsive disorders, including schizo-OCD.

The taxonomy of mental disorders

"I realized my methodology for studying rat models could be directly applied to work with humans with mental disorders," Prof. Eilam said. "Behavior is the ultimate output of the nervous system, and my team and I are experts in the fine-grained analysis of behavior, be it of humans or of other animals."

The main features of OCD are, of course, obsessions and compulsions. Obsessions are recurring and persistent thoughts, impulses, or images that are experienced as intrusive and unwanted and cause marked distress or anxiety. In contrast, compulsions are repetitive motor behaviors, such as counting, that occur in response to obsessions and are performed according to strictly applied rules. Schizophrenia is marked by delusions, hallucinations, disorganized speech, abnormal motor behavior, and diminished emotional expression, among other symptoms.

Eilam and graduate student Anat Gershoni of the Zoology Department and Prof. Haggai Hermesh of TAU’s Sackler Faculty of Medicine set out with Dr. Naomi Fineberg of the Queen Elizabeth II Hospital in England to resolve the controversy. To this end, they recorded and compared videos of diagnosed OCD and schizo-OCD patients performing 10 different mundane tasks, like leaving home, making tea, or cleaning a table. The patients met the criteria of the widely used Diagnostic and Statistical Manual of Mental Disorders.

A matter of space

The researchers found that both OCD and schizo-OCD patients exhibited OCD-like behavior in performing the tasks, excessively repeating and adding actions. But schizo-OCD patients additionally displayed schizophrenia-like behavior.

For a typical OCD patient in the study, the task of leaving home involved standing in one place and repeatedly checking the contents of his pockets before finally taking his keys and cell phone and going to the door. In contrast, a typical schizo-OCD patient traveled around the apartment — switching the lights in the bathroom on and off, then taking his keys and phone to the door, going to scan the bedroom, then taking his keys and phone to the door, going to empty the ashtray, then taking his keys and phone to the door and so on. A typical healthy person would simply pick up his keys and phone and walk out.

Overall, the researchers found that the level of obsessive-compulsive behavior was the same in OCD and schizo-OCD patients. This suggests that both groups share the difficulty in shifting attention from one task to another that helps define OCD. The schizo-OCD patients, though, performed more divergent activity over a larger area than the OCD patients did. This suggests that the schizo-OCD patients were continuously shifting attention, which happens in schizophrenia but not in OCD.
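
Path-of-locomotion measures like these are simple to compute from tracked positions. Below is a minimal sketch of the two quantities contrasted above, total distance traveled and number of distinct locations visited, using hypothetical coordinates (this is not the study's data or its actual analysis code):

```python
import math

def path_metrics(positions):
    """Summarize movement from a list of (x, y) positions in meters.

    Returns the total distance traveled and the number of distinct
    locations visited (positions binned onto a 1 m grid): the two
    quantities that separate a mostly stationary OCD-like trace from
    a roaming schizo-OCD-like one.
    """
    total = sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))
    cells = {(round(x), round(y)) for x, y in positions}
    return total, len(cells)

# Hypothetical traces: one patient stays near the door, the other roams.
stationary_trace = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (2.0, 3.0)]
roaming_trace = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0), (4.0, 3.0), (2.0, 3.0)]
```

Here the roaming trace covers five distinct grid cells and roughly four times the distance of the stationary one.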

"While the obsessive-compulsive is obsessed with one idea, the schizophrenic’s mind is drifting," said Eilam. "We found that this is reflected in their paths of locomotion. So instead of tracking the thoughts of the patients, we can simply trace their paths of locomotion."

Eilam plans to conduct research comparing repetitive behavior in OCD and autism patients.

Filed under schizophrenia OCD mental disorders compulsive behavior neuroscience psychology science

131 notes

Primate calls, like human speech, can help infants form categories

Human infants’ responses to the vocalizations of non-human primates shed light on the developmental origin of a crucial link between human language and core cognitive capacities, a new study reports.

Previous studies have shown that even in infants too young to speak, listening to human speech supports core cognitive processes, including the formation of object categories.

Alissa Ferry, lead author and currently a postdoctoral fellow in the Language, Cognition and Development Lab at the Scuola Internazionale Superiore di Studi Avanzati in Trieste, Italy, together with Northwestern University colleagues, documented that this link is initially broad enough to include the vocalizations of non-human primates.

"We found that for 3- and 4-month-old infants, non-human primate vocalizations promoted object categorization, mirroring exactly the effects of human speech, but that by six months, non-human primate vocalizations no longer had this effect — the link to cognition had been tuned specifically to human language," Ferry said.

In humans, language is the primary conduit for conveying our thoughts. The new findings document that for young infants, listening to the vocalizations of humans and non-human primates supports the fundamental cognitive process of categorization. From this broad beginning, the infant mind identifies which signals are part of their language and begins to systematically link these signals to meaning.

Furthermore, the researchers found that infants’ response to non-human primate vocalizations at three and four months was not just due to the sounds’ acoustic complexity, as infants who heard backward human speech segments failed to form object categories at any age.

Susan Hespos, co-author and associate professor of psychology at Northwestern said, “For me, the most stunning aspect of these findings is that an unfamiliar sound like a lemur call confers precisely the same effect as human language for 3- and 4-month-old infants. More broadly, this finding implies that the origins of the link between language and categorization cannot be derived from learning alone.”

"These results reveal that the link between language and object categories, evident as early as three months, derives from a broader template that initially encompasses vocalizations of human and non-human primates and is rapidly tuned specifically to human vocalizations," said Sandra Waxman, co-author and Louis W. Menk Professor of Psychology at Northwestern.

Waxman said these new results open the door to new research questions.

"Is this link sufficiently broad to include vocalizations beyond those of our closest genealogical cousins," asks Waxman, "or is it restricted to primates, whose vocalizations may be perceptually just close enough to our own to serve as early candidates for the platform on which human language is launched?"

(Image: Corbis)

Filed under primates vocalizations language categorization psychology neuroscience science

148 notes

Striking Patterns: Skill for Forming Tools and Words Evolved Together

When did humans start talking? There are nearly as many answers to this perplexing question as there are researchers studying it. A new brain imaging study claims to support the hypothesis that language emerged long before Homo sapiens and coevolved with the invention of the first finely made stone tools nearly 2 million years ago. However, some experts think it’s premature to draw sweeping conclusions.

Unlike ancient bones and stone tools, language does not fossilize. Researchers have to guess about its origins based on proxy indicators. Does painting cave walls indicate the capacity for language? How about the ability to make a fancy tool? Yet, in recent years, scientists have made some progress. A series of brain imaging studies by Dietrich Stout, an archaeologist at Emory University in Atlanta, and Thierry Chaminade, a cognitive neuroscientist at Aix-Marseille University in France, have shown that toolmaking and language use similar parts of the brain, including regions involved in manual manipulations and speech production. Moreover, the overlap is greater the more sophisticated the toolmaking techniques are. Thus, there was little overlap when modern-day flint knappers were making stone tools using the oldest known techniques, dated to 2.5 million years ago and called the Oldowan technology. But when knappers used a more sophisticated approach, called Acheulean technology and dating to as much as 1.75 million years ago, the parallels between toolmaking and language were more evident. Stout and Chaminade have used functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) scans, although not on the same subjects at the same time.

In the new work, published online today in PLOS ONE, archaeologist Natalie Uomini and experimental psychologist Georg Meyer, both at the University of Liverpool in the United Kingdom, attempted to advance these earlier studies in several ways. They applied a technique called functional transcranial Doppler ultrasonography (fTCD), which measures blood flow to the brain’s cerebral cortex and which—unlike fMRI and PET—is highly portable and can be used on subjects in the field through a device attached to their heads (see video). The fTCD approach makes it much easier to monitor subjects’ brains during vigorous activity, such as the somewhat violent motions that are required to make stone tools. Uomini and Meyer are also the first to study both toolmaking and language tasks in the same subjects.

The researchers recruited 10 expert flint knappers and gave them two different tasks. In the first, the knappers crafted an Acheulean hand ax, a symmetrical tool that requires considerable planning and skill. The procedure involves shaping a flint core with another stone called a hammerstone. While wearing the fTCD monitor, the knappers worked on the tool for periods of about 30 seconds each, interspersed with control periods of about 20 seconds in which they simply struck the core with the hammerstone without trying to make a tool.

In the second task, the knappers were asked to silently think up words beginning with a given letter. The control periods consisted of simply resting quietly and not thinking of words.

The team found that the pattern of blood flow changes in the brain during the critical first 10 seconds of each experimental period—when the knappers were strategizing about how to shape the core or thinking up their first words—was very similar, again involving areas of the brain implicated in manual manipulations and language. Moreover, although there were some variations in the patterns between the 10 knappers, the toolmaking and language patterns within each individual were very closely aligned—suggesting, the team concludes, that the same brain areas are recruited in both tasks.
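
The within-individual comparison boils down to correlating one person's blood-flow pattern for the two tasks. A toy sketch of that idea with simulated numbers (these are not the authors' data or their actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated blood-flow changes over the first 10 s of a trial for one
# knapper: the toolmaking and word-generation tasks share an underlying
# pattern, plus independent task-specific noise.
shared = rng.normal(size=10)
toolmaking = shared + 0.3 * rng.normal(size=10)
word_task = shared + 0.3 * rng.normal(size=10)

# Pearson correlation between the two task patterns within this individual.
r = np.corrcoef(toolmaking, word_task)[0, 1]
```

With a strong shared component, the within-individual correlation comes out high; if the two tasks drove unrelated patterns, it would hover near zero.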

The results, Uomini and Meyer argue, support earlier hypotheses that language and toolmaking coevolved, perhaps beginning as early as 1.75 million years ago. This doesn’t necessarily mean that early humans were talking in the same rapid-fire way that we do today, Uomini points out, but that “the circuits for both activities were there early on.”

Stout calls the new study “exciting work” that provides “one more piece of evidence supporting a link between stone-tool making and language evolution.” Yet a number of questions remain, he says, such as whether the correlation is between the motor skills involved in making tools and in making the sounds of speech, or whether toolmaking and language share higher cognitive functions such as those used in symbolic behavior.

That question is critical, some researchers say, because the knappers in this study and the ones that Stout conducted probably used a technique known as the Late Acheulean, dating from about 500,000 years ago, which put a much greater emphasis on symmetry and aesthetic considerations than did the earliest Acheulean, dating from 1.75 million years ago. “There is an enormous difference” between these varieties of Acheulean toolmaking, says Michael Petraglia, an archaeologist at the University of Oxford in the United Kingdom, who adds that “future experimental studies should thus examine the range of techniques and methods used.”

Thus the new work is “consistent with the hypothesis” of coevolution between language and toolmaking, “but not proof of it,” says Michael Corballis, a psychologist at the University of Auckland in New Zealand. “It is possible that language itself emerged much later, but was built on circuits established during the Acheulean” period.

Thomas Wynn, an archaeologist at the University of Colorado, Colorado Springs, is even more cautious about the results. He thinks that the fTCD technique, which measures blood flow to large areas of the cerebral cortex but does not have as high a resolution as fMRI or PET, “is a crude measure, even for brain imaging techniques.” As a result, Wynn says, he is “far from convinced” that the study has anything new to say about language evolution.

Filed under language toolmaking tool use brain activity blood flow evolution neuroscience psychology science

147 notes

Left brain, right brain: Different patterns of cortical interaction
The human brain is divided into two hemispheres – left and right – in which neural functions are said to be lateralized. (For example, language and motor abilities are associated with the left hemisphere, and visuospatial attention with the right.) Although hemispheric lateralization is generally thought to benefit brain function, relationships between lateralization degree and functioning levels have not been quantified. Recently, however, scientists at the National Institutes of Health in Bethesda, MD demonstrated that the two hemispheres have qualitatively different biases: the left prefers to interact with itself – especially for regions associated with language and fine motor coordination – while the right visuospatial and attentional processing regions interact with both hemispheres. Moreover, the researchers provided direct evidence that an individual’s degree of lateralization is associated with enhanced cognitive ability.

Dr. Stephen J. Gotts spoke with Medical Xpress about the research that he, Dr. Hang Joon Jo, Dr. Alex Martin, and colleagues conducted – and the challenges they faced in so doing. “One of the tricky things about studying lateralization of function is that it’s hard to know exactly which points in the two hemispheres are correspondent,” Gotts tells Medical Xpress. This is the case, he explains, because while the hemispheres are roughly symmetrical, there are idiosyncratic differences in cortical folding between left and right for any given individual. In addition, he notes, the exact location of particular folds (known as gyri) varies across individuals.

"Neuroimaging studies have historically adopted a couple of different approaches to deal with this situation," Gotts explains. Some studies, he illustrates, transform the geometry of the brain for each individual into a so-called standard three-dimensional coordinate reference brain – for example, the Talairach-Tournoux atlas. This allows them to estimate symmetrical corresponding points by flipping the left/right x-coordinate about zero. However, he acknowledges that this technique is prone to error by as much as 1-2 centimeters in some brain locations.
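
The coordinate-flipping step itself is trivial to express, which is part of its appeal despite the 1-2 centimeter error; an illustrative sketch (the coordinates below are hypothetical):

```python
def mirror_homologue(coord):
    """Estimate the homologous point in the opposite hemisphere by
    negating the left/right x-coordinate in a standard reference space.
    Coordinates are (x, y, z) in millimeters.
    """
    x, y, z = coord
    return (-x, y, z)

# A hypothetical left-hemisphere point and its estimated right homologue.
print(mirror_homologue((-42.0, -22.0, 7.0)))  # (42.0, -22.0, 7.0)
```

The simplicity is exactly the problem Gotts describes: the true homologue can sit a centimeter or two away from the mirrored point because cortical folding is not perfectly symmetric.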

"Another approach," Gotts continues, "has been to compare the magnitude of the neural response in each hemisphere during the performance of a task – for example, a language comprehension task – and calculate a quantitative laterality index to enumerate the extent of lateralization. While this approach makes a lot of sense, and doesn’t necessarily require one to solve the correspondence problem, it will be strictly limited to the brain areas that can be activated by the task.” In other words, if an area isn’t engaged by the task, it’s hard to know whether or not it’s lateralized. Moreover, it requires many different tasks to be selected in order to address the spatial scope of the entire brain – and Gotts points out that this hasn’t been carried out to date.
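
One common formulation of such a laterality index, which may differ from what any particular study used, is (L - R) / (L + R):

```python
def laterality_index(left, right):
    """Classic laterality index (L - R) / (L + R): +1 means fully
    left-lateralized activation, -1 fully right-lateralized, 0 symmetric.
    `left` and `right` are non-negative activation magnitudes (e.g.
    supra-threshold voxel counts) that must not both be zero.
    """
    return (left - right) / (left + right)

print(laterality_index(80, 20))  # 0.6: strongly left-lateralized
print(laterality_index(50, 50))  # 0.0: symmetric
```

Note that the index is only defined for regions the task activates at all, which is precisely the limitation Gotts raises.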

"Our solution addressed the correspondence problem more directly," Gotts says. The scientists first flattened out a model of each individual’s folded cortex onto a smooth surface, spatially warping and stretching each individual brain so that each cortical landmark – that is, gyrus or sulcus – was aligned across individuals. They then found corresponding points in the two hemispheres by their position on this standardized, flattened surface relative to the full set of cortical landmarks. (Sulci are depressions or fissures in the surface of the brain surrounding the gyri.) "Applying the same spatial warping to the functional data then allowed us to compare ongoing, resting brain dynamics between the hemispheres at every position on the cortical surface," Gotts explains.

Utilizing a more traditional, task-based approach to measuring laterality has another downside: researchers typically assess the average magnitude of neural response to a task condition across many individual stimulus events, meaning that dynamical interactions of brain areas aren’t as easily assessed. “It’s not impossible,” notes Gotts, “but to eliminate the effects of stimulus artifacts on connectivity estimates, it requires particular choices of neuroimaging task timing – and it’s been done a lot less often than magnitude estimation. The qualitative distinction that we observed in our study between how the hemispheres interact with one another really requires the examination of time-varying neural responses and their co-variation. I don’t think that you’d be able to anticipate this finding solely from examining average activity levels.”

With respect to the correlations with behavioral ability, Gotts points out that there are probably many different tasks that one could have chosen. “Our choice was to use tasks that have been well-studied and well-normed across individuals as part of the Wechsler intelligence scales – specifically, Vocabulary, which is correlated with many aspects of verbal abilities, and Block Design and Matrix Reasoning, which index aspects of visuospatial processing. These obviously aren’t the only possible choices, and it would be nice to follow up this work with a more thorough battery of tasks that would allow us to examine more detailed aspects of language, fine motor control, and visuospatial abilities.”

It is important to point out, Gotts adds, that there have been several previous task-based studies that have examined the relationships between lateralization magnitude and cognitive ability, with some reporting a direct relationship as their current study shows. “The main contribution of our study is to demonstrate, at a whole-brain scope, the qualitative differences between the hemispheres in their within- and between-hemisphere interactions. The correlations with behavioral ability really hammer this distinction home, since one needed to use the appropriate metric – that is, segregation versus integration – to see these correlations.”
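
The segregation-versus-integration metric Gotts describes can be made concrete: for each region, compare its mean connectivity with same-hemisphere regions against its mean connectivity with the opposite hemisphere. A toy sketch with hypothetical connectivity values (not the study's actual estimates):

```python
import numpy as np

def segregation_score(conn, region, same_hemi, other_hemi):
    """Mean within-hemisphere minus mean between-hemisphere connectivity
    for one region. Positive scores mark a segregated region (it talks
    mostly to its own side); negative scores mark an integrated one.
    `conn` is a square correlation matrix; the index lists exclude `region`.
    """
    within = conn[region, same_hemi].mean()
    between = conn[region, other_hemi].mean()
    return within - between

# Toy 4-region brain: regions 0 and 1 are "left", 2 and 3 are "right".
# Region 0 couples strongly within its hemisphere, weakly across.
conn = np.array([
    [1.0, 0.8, 0.1, 0.2],
    [0.8, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.5],
    [0.2, 0.1, 0.5, 1.0],
])
print(round(segregation_score(conn, 0, [1], [2, 3]), 2))  # 0.65
```

Under this metric, a language-like left-hemisphere region would score high (segregated) while a right-hemisphere attentional region interacting with both sides would score near or below zero.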

One of the interesting things about the distinction between the hemispheres that the scientists observed, Gotts notes, is that there are implicit hints about it in the literatures on individual cognitive domains. “When people discuss language lateralization, the notion is more like classic modularity: language is operating in the left hemisphere in a manner somewhat isolated, or segregated, from the right hemisphere. This notion may come in large part from the neuropsychological literature, which shows that brain damage to the left hemisphere is much more likely to cause aphasia than damage to the right hemisphere in right-handed individuals.”

In contrast, Gotts continues, visuospatial processing and attention involve coordinated processing across the entire visual field, with the left and right halves of visual space represented separately in the right and left occipital cortex, respectively. “Visual processing over the entire visual field therefore requires inter-hemispheric integration, and that integration relates to the visuospatial attentional control that is more right-hemisphere lateralized. Our findings highlight this implicit distinction, making it more explicit and showing that the respective cognitive abilities benefit from it. As a field, I think that we’ve always assumed that hemispheric lateralization was somehow beneficial for function, but very few brain imaging studies have even examined the issue directly, much less at a whole-brain scope across the range of cognitive domains known to be lateralized.”

Moving forward, says Gotts, one of the key outstanding questions is: What is the developmental time course of these hemispheric differences? That is, does the left hemisphere bias for self-interaction exist prior to skilled motor control and language function – or does it emerge later as a consequence of these functions? “If it were to exist prior to handedness and language acquisition in the first few months of age, or even in utero, then the bias could plausibly serve as the cause of the preferential left-lateralization of these functions. One could even try to predict the degree of lateralization present later in life during various tasks, or when at rest, from estimates measured early in life.”

A similar set of questions exists for the domain of visuospatial function and the right-hemisphere bias for bilateral interaction, Gotts adds. “Because our method for assessing lateralization only requires measuring resting brain activity and not the performance of complex cognitive tasks, these experiments are actually possible to perform with young infants in a reasonably parallel manner.”

According to Gotts, another crucial question for the field of human neuroscience is: What changed from monkeys to apes to humans with respect to lateralization? “Several decades ago, there was the suggestion that monkeys exhibit hand preferences like the ones humans exhibit. After much research, it became clear that monkeys are more symmetrical in their brain control of both motor and visuospatial function. However, apes – such as chimpanzees – appear to be a different story. They appear to exhibit some hand preference lateralization with accompanying brain lateralization, although perhaps not to the extremes to which humans do.” (Roughly 80-90% of human males and females are right-handed.) “As with infants, resting brain scans can be performed on monkeys and chimpanzees in a manner similar to those conducted on adult humans.”

Regarding other areas of research that might benefit from this study, Gotts thinks it would be possible to apply their methods for assessing lateralization to a range of psychiatric disorders, such as autism and schizophrenia. “There’s some suggestion in the literature that lateralization of function is altered in these disorders. Is lateralization qualitatively different from the hemispheric biases we demonstrate for typical individuals – or do they differ in magnitude? We’d also like to understand more about the relationship between handedness and cognitive ability.’
Being left-handed, he illustrates, is associated with a more bilateral representation of language – but this doesn’t appear to mandate poorer cognitive abilities in left-handed individuals. “It may be that in left-handed individuals a different optimal weighting or balance of power between the hemispheres is achieved which differs from what we’ve observed in right-handed males,” Gotts concludes. “Our methods could certainly be applied to examine this set of issues.”

Left brain, right brain: Different patterns of cortical interaction

The human brain is divided into two hemispheres – left and right – in which neural functions are said to be lateralized. (For example, language and motor abilities are associated with the left hemisphere, and visuospatial attention with the right.) Although hemispheric lateralization is generally thought to benefit brain function, relationships between lateralization degree and functioning levels have not been quantified. Recently, however, scientists at the National Institutes of Health in Bethesda, MD demonstrated that the two hemispheres have qualitatively different biases: the left prefers to interact with itself – especially for regions associated with language and fine motor coordination – while the right visuospatial and attentional processing regions interact with both hemispheres. Moreover, the researchers provided direct evidence that an individual’s degree of lateralization is associated with enhanced cognitive ability.

Dr. Stephen J. Gotts spoke with Medical Xpress about the research that he, Dr. Hang Joon Jo, Dr. Alex Martin, and colleagues conducted – and the challenges they faced in so doing. “One of the tricky things about studying lateralization of function is that it’s hard to know exactly which points in the two hemispheres correspond,” Gotts tells Medical Xpress. This is the case, he explains, because while the hemispheres are roughly symmetrical, there are idiosyncratic differences in cortical folding between left and right for any given individual. In addition, he notes, the exact location of particular folds (known as gyri) varies across individuals.

"Neuroimaging studies have historically adopted a couple of different approaches to deal with this situation," Gotts explains. Some studies, he illustrates, transform the geometry of the brain for each individual into a so-called standard three-dimensional coordinate reference brain – for example, the Talairach-Tournoux atlas. This allows them to estimate symmetrical corresponding points by flipping the left/right x-coordinate about zero. However, he acknowledges that this technique is prone to error by as much as 1-2 centimeters in some brain locations.
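The coordinate-flipping approach Gotts describes can be sketched in a few lines. This is an illustrative Python sketch, not code from the study; the example coordinates are hypothetical:

```python
def mirror_point(coord):
    """Estimate the homologous point in the opposite hemisphere by
    flipping the left/right (x) coordinate about the midline (x = 0),
    as done with standard-space reference brains such as the
    Talairach-Tournoux atlas. `coord` is an (x, y, z) tuple in atlas
    coordinates (x negative on the left by common convention)."""
    x, y, z = coord
    return (-x, y, z)

# A voxel in the left hemisphere maps to its mirror position on the
# right; the coordinates below are purely illustrative.
left_point = (-38, -22, 54)
right_homolog = mirror_point(left_point)  # → (38, -22, 54)
```

As the article notes, this mirror estimate can be off by 1-2 centimeters in some locations, which is exactly the limitation that motivates the surface-based approach described later.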

"Another approach," Gotts continues, "has been to compare the magnitude of the neural response in each hemisphere during the performance of a task – for example, a language comprehension task – and calculate a quantitative laterality index to quantify the extent of lateralization. While this approach makes a lot of sense, and doesn’t necessarily require one to solve the correspondence problem, it will be strictly limited to the brain areas that can be activated by the task.” In other words, if an area isn’t engaged by the task, it’s hard to know whether or not it’s lateralized. Moreover, it requires many different tasks to be selected in order to address the spatial scope of the entire brain – and Gotts points out that this hasn’t been carried out to date.

"Our solution addressed the correspondence problem more directly," Gotts says. The scientists first flattened out a model of each individual’s folded cortex onto a smooth surface, spatially warping and stretching each individual brain so that each cortical landmark – that is, gyrus or sulcus – was aligned across individuals. They then found corresponding points in the two hemispheres by their position on this standardized, flattened surface relative to the full set of cortical landmarks. (Sulci are depressions or fissures in the surface of the brain surrounding the gyri.) "Applying the same spatial warping to the functional data then allowed us to compare ongoing, resting brain dynamics between the hemispheres at every position on the cortical surface," Gotts explains.

Utilizing a more traditional, task-based approach to measuring laterality has another downside: researchers typically assess the average magnitude of neural response to a task condition across many individual stimulus events, meaning that dynamical interactions of brain areas aren’t as easily assessed. “It’s not impossible,” notes Gotts, “but to eliminate the effects of stimulus artifacts on connectivity estimates, it requires particular choices of neuroimaging task timing – and it’s been done a lot less often than magnitude estimation. The qualitative distinction that we observed in our study between how the hemispheres interact with one another really requires the examination of time-varying neural responses and their co-variation. I don’t think that you’d be able to anticipate this finding solely from examining average activity levels.”

With respect to the correlations with behavioral ability, Gotts points out that there are probably many different tasks that one could have chosen. “Our choice was to use tasks that have been well-studied and well-normed across individuals as part of the Wechsler intelligence scales – specifically, Vocabulary, which is correlated with many aspects of verbal abilities, and Block Design and Matrix Reasoning, which index aspects of visuospatial processing. These obviously aren’t the only possible choices, and it would be nice to follow up this work with a more thorough battery of tasks that would allow us to examine more detailed aspects of language, fine motor control, and visuospatial abilities.”

It is important to point out, Gotts adds, that there have been several previous task-based studies that have examined the relationships between lateralization magnitude and cognitive ability, with some reporting a direct relationship as their current study shows. “The main contribution of our study is to demonstrate, at a whole-brain scope, the qualitative differences between the hemispheres in their within- and between-hemisphere interactions. The correlations with behavioral ability really hammer this distinction home, since one needed to use the appropriate metric – that is, segregation versus integration – to see these correlations.”

One of the interesting things about the distinction between the hemispheres that the scientists observed, Gotts notes, is that there are implicit hints about it in the literatures on individual cognitive domains. “When people discuss language lateralization, the notion is more like classic modularity: language is operating in the left hemisphere in a manner somewhat isolated, or segregated, from the right hemisphere. This notion may come in large part from the neuropsychological literature, which shows that brain damage to the left hemisphere is much more likely to cause aphasia than damage to the right hemisphere in right-handed individuals.”

In contrast, Gotts continues, visuospatial processing and attention involve coordinated processing across the entire visual field, with the left and right halves of visual space represented separately in the right and left occipital cortex, respectively. “Visual processing over the entire visual field requires inter-hemispheric integration, and this integration relates to the visuospatial attentional control that is more right-hemisphere lateralized. Our findings highlight this implicit distinction, making it more explicit and showing that the respective cognitive abilities benefit from it. As a field, I think that we’ve always assumed that hemispheric lateralization was somehow beneficial for function, but very few brain imaging studies have even examined the issue directly, much less at a whole-brain scope across the range of cognitive domains known to be lateralized.”

Moving forward, says Gotts, one of the key outstanding questions is: What is the developmental time course of these hemispheric differences? That is, does the left hemisphere bias for self-interaction exist prior to skilled motor control and language function – or does it emerge later as a consequence of these functions? “If it were to exist prior to handedness and language acquisition in the first few months of age, or even in utero, then the bias could plausibly serve as the cause of the preferential left-lateralization of these functions. One could even try to predict the degree of lateralization present later in life during various tasks, or when at rest, from estimates measured early in life.”

A similar set of questions exists for the domain of visuospatial function and the right-hemisphere bias for bilateral interaction, Gotts adds. “Because our method for assessing lateralization only requires measuring resting brain activity and not the performance of complex cognitive tasks, these experiments are actually possible to perform with young infants in a reasonably parallel manner.”

According to Gotts, another crucial question for the field of human neuroscience is: What changed from monkeys to apes to humans with respect to lateralization? “Several decades ago, there was the suggestion that monkeys exhibit hand preferences like the ones humans exhibit. After much research, it became clear that monkeys are more symmetrical in their brain control of both motor and visuospatial function. However, apes – such as chimpanzees – appear to be a different story. They appear to exhibit some hand preference lateralization with accompanying brain lateralization, although perhaps not to the extremes to which humans do.” (Roughly 80-90% of human males and females are right-handed.) “As with infants, resting brain scans can be performed on monkeys and chimpanzees in a manner similar to those conducted on adult humans.”

Regarding other areas of research that might benefit from this study, Gotts thinks it would be possible to apply their methods for assessing lateralization to a range of psychiatric disorders, such as autism and schizophrenia. “There’s some suggestion in the literature that lateralization of function is altered in these disorders. Is lateralization qualitatively different from the hemispheric biases we demonstrate for typical individuals – or do they differ in magnitude? We’d also like to understand more about the relationship between handedness and cognitive ability.”

Being left-handed, he illustrates, is associated with a more bilateral representation of language – but this doesn’t appear to mandate poorer cognitive abilities in left-handed individuals. “It may be that in left-handed individuals a different optimal weighting or balance of power between the hemispheres is achieved which differs from what we’ve observed in right-handed males,” Gotts concludes. “Our methods could certainly be applied to examine this set of issues.”

Filed under brain lateralization brain hemispheres cognitive ability psychology neuroscience science

90 notes

Why We Look At The Puppet, Not The Ventriloquist

The brain doesn’t require simultaneous visual and audio stimulation to locate the source of a sound

image

As ventriloquists have long known, your eyes can sometimes tell your brain where a sound is coming from more convincingly than your ears can.

A series of experiments in humans and monkeys by Duke University researchers has found that the brain does not require simultaneous visual and audio stimulation to locate the source of a sound. Rather, visual feedback obtained from trying to find a sound with the eyes had a stronger effect than visual stimuli presented at the same time as the audio, according to the Duke study.

The findings could help those with mild hearing loss learn to localize voices better, improving their ability to communicate in noisy environments, said Jennifer Groh, a professor of psychology and neuroscience at Duke.

Locating where a sound is coming from is partially learned with the aid of vision. Researchers sought to learn more about how the brain locates the source of a sound when the source is unclear and there are a number of possible visual matches.

"Our study is related to ventriloquism, in which the visual image of a puppet’s mouth ‘captures’ the sound of the puppeteer’s voice," Groh said. "It is thought that one reason this illusion occurs is because vision normally teaches the brain how to tell where sounds are coming from. We investigated how the brain knows which visual stimulus should capture the location of a sound, such as why it is the puppet’s mouth and not some other visual stimulus."

The study, which appears Thursday (Aug. 29) in the journal PLOS ONE, tested two competing hypotheses. In one, the brain determines the location of a sound based on the simultaneous occurrence of audio and its visual source. In the other, the brain uses a “guess and check” method. In this scenario, visual feedback sent to the brain after the eye focuses on a sound affects how the eye searches for that sound in the future, possibly through the brain’s reward-related circuitry.

In both paradigms, the visual stimulus — an LED — was displaced from the sound. Groh’s team then looked for evidence that the LED caused a persistent mislocation of the sound.

"Surprisingly, we found that visual feedback exerts the more powerful effect on altering localization of sounds," Groh said. "This suggests that the active behavior of looking at the puppet during a ventriloquism performance plays a role in causing the shift in where you hear the voice."

Participants in the study — 11 humans and two rhesus monkeys — shifted their gaze to a sound under different visual and audio scenarios.

In one scenario, called the “synchrony-only” task, a visual stimulus appeared at the same time as a sound but too briefly to provide feedback after an eye movement to that sound.

In another, the “feedback-only” task, the visual stimulus appeared during the execution of an eye movement to a sound, but was never on at the same time as the sound.

The study found that the “feedback-only” task exerted a much more powerful effect on the estimation of sound location, as measured with eye tracking, than did the other scenario. This suggests that those who have difficulty localizing sounds may benefit from practice involving eye movements.

On average, participants altered their eye movements in the direction of the lights’ location to a greater degree, about a quarter of the way, when the visual stimulus was presented as feedback than when it was presented at the same time as the sound, the study found.
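The "quarter of the way" figure corresponds to a standard way of expressing visual capture: the shift in sound localization divided by the audio-visual offset. A hypothetical Python sketch (the numbers are illustrative, not the study's):

```python
def capture_fraction(baseline_estimate, shifted_estimate, visual_offset):
    """Fraction of the audio-visual discrepancy 'captured' by vision:
    0 means sound localization was unaffected by the light,
    1 means it was pulled all the way to the light's position.
    All values are in the same units (e.g., degrees of visual angle)."""
    if visual_offset == 0:
        raise ValueError("no audio-visual discrepancy to measure")
    return (shifted_estimate - baseline_estimate) / visual_offset

# Hypothetical example: an 8-degree offset between light and sound
# that shifts localization by 2 degrees gives a quarter-capture.
frac = capture_fraction(0.0, 2.0, 8.0)  # → 0.25
```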

"This is about the brain’s self-improvement skills," said co-author Daniel Pages, a graduate student in Psychology & Neuroscience at Duke. "What we’re getting at is how the brain uses different types of information to improve how it does its job. In this case, it uses vision coupled with eye movements to improve hearing."

"We were surprised at how important the eye movements were," Groh said. "But finding sounds is really hard. Feedback about your performance is important for anything that is difficult, whether it is the B- you get on your homework or the error your eyes detect in localizing a sound."

(Source: today.duke.edu)

Filed under eye movements visual stimulus hearing loss sound location neuroscience psychology science

171 notes

Brain imaging study reveals the wandering mind behind insomnia

Study is the first to find functional MRI differences in working memory in people with primary insomnia

image

A new brain imaging study may help explain why people with insomnia often complain that they struggle to concentrate during the day even when objective evidence of a cognitive problem is lacking.

"We found that insomnia subjects did not properly turn on brain regions critical to a working memory task and did not turn off ‘mind-wandering’ brain regions irrelevant to the task," said lead author Sean P.A. Drummond, PhD, associate professor in the department of psychiatry at the University of California, San Diego, and the VA San Diego Healthcare System, and Secretary/Treasurer of the Sleep Research Society. "Based on these results, it is not surprising that someone with insomnia would feel like they are working harder to do the same job as a healthy sleeper."

The research team led by Drummond and co-principal investigator Matthew Walker, PhD, studied 25 people with primary insomnia and 25 good sleepers. Participants had an average age of 32 years. The study subjects underwent a functional magnetic resonance imaging scan while performing a working memory task.

Results published in the September issue of the journal Sleep show that participants with insomnia did not differ from good sleepers in objective cognitive performance on the working memory task. However, the MRI scans revealed that people with insomnia could not modulate activity in brain regions typically used to perform the task.

As the task got harder, good sleepers used more resources within the working memory network of the brain, especially the dorsolateral prefrontal cortex. Insomnia subjects, however, were unable to recruit more resources in these brain regions. Furthermore, as the task got harder, participants with insomnia did not dial down the “default mode” regions of the brain that are normally only active when our minds are wandering.

"The data help us understand that people with insomnia not only have trouble sleeping at night, but their brains are not functioning as efficiently during the day," said Drummond. "Some aspects of insomnia are as much of a daytime problem as a nighttime problem. These daytime problems are associated with organic, measurable abnormalities of brain activity, giving us a biological marker for treatment success."

According to the authors, the study is the largest to examine cerebral activation with functional MRI during cognitive performance in people with primary insomnia, relative to well-matched good sleepers. It also is the first to characterize functional MRI differences in working memory in people with primary insomnia.

The American Academy of Sleep Medicine reports that about 10 to 15 percent of adults have an insomnia disorder with distress or daytime impairment. Most often insomnia is a comorbid disorder occurring with another problem such as depression or chronic pain, or caused by a medication or substance. Fewer people suffering from insomnia are considered to have primary insomnia, which is defined as a difficulty falling asleep or maintaining sleep in the absence of a coexisting condition.

(Source: eurekalert.org)

Filed under insomnia working memory cognitive performance prefrontal cortex neuroscience psychology science

113 notes

Study Shows that Intensity of Facebook Use Can Be Predicted by Reward-related Activity in the Brain

Neuroscientists at Freie Universität Berlin show a link between reward activity in the brain due to discovering one has a good reputation and social media use

A person’s intensity of Facebook use can be predicted by activity in the nucleus accumbens, a reward-related area of the brain, according to a new study published by neuroscientists in the Languages of Emotion Cluster of Excellence at Freie Universität Berlin. Dr. Dar Meshi and his colleagues conducted this first ever study to relate brain activity (functional MRI) to social media use. The study was published in the latest issue of the open-access journal Frontiers in Human Neuroscience.

The researchers focused on the nucleus accumbens, a small but critical structure located deep in the center of the brain, because previous research has shown that rewards —including food, money, sex, and gains in reputation — are processed in this region.

“As human beings, we evolved to care about our reputation. In today’s world, one way we’re able to manage our reputation is by using social media websites like Facebook,” says Dar Meshi, lead author of the paper. Facebook is the world’s largest social media channel with 1.2 billion monthly active users. It was used in the study because interactions on the website are carried out in view of the user’s friends or the public and can affect the user’s reputation. For example, Facebook lets users “like” posted information. This approval is positive social feedback, and can be considered a gain in reputation.

All 31 participants completed the Facebook Intensity Scale to determine how many friends each participant had, how many minutes they each spent on Facebook, and general thoughts. The participants were selected to vary widely in their Facebook Intensity Scale scores.

First, the subjects participated in a video interview. Next, the subjects’ brain activity was recorded using functional magnetic resonance imaging in different situations. In the scanner, subjects were told whether people who had supposedly viewed the video interview thought highly of them, and subjects also found out whether people thought highly of another person. They also performed a card task to win money.

Results showed that participants who received positive feedback about themselves produced stronger activation of the nucleus accumbens than when they saw the positive feedback that another person received. The strength of this difference corresponded to participants’ reported intensity of Facebook use. But the nucleus accumbens response to monetary reward did not predict Facebook use.

“Our study reveals that the processing of social gains in reputation in the left nucleus accumbens predicts the intensity of Facebook use across individuals,” says Meshi. “These findings expand upon our present knowledge of nucleus accumbens function as it relates to complex human behavior.”

Regarding the potential for social media addiction and the effects of social media on education quality, these results may provide important motivation for clinical research and for further research on learning. As Meshi says, “Our findings relating individual social media use to the individual response of the brain’s reward system may also be relevant for both educational and clinical research in the future.” The authors point out, however, that their results do not determine if positive social feedback drives people to interact on social media, or if sustained use of social media changes the way positive social feedback is processed by the brain.

Filed under nucleus accumbens social reward social media facebook reputation psychology neuroscience science

335 notes

Learning a new language alters brain development

The age at which children learn a second language can have a significant bearing on the structure of their adult brain, according to a new joint study by the Montreal Neurological Institute and Hospital - The Neuro at McGill University and Oxford University. The majority of people in the world learn to speak more than one language during their lifetime. Many do so with great proficiency particularly if the languages are learned simultaneously or from early in development.

image

The study concludes that the pattern of brain development is similar if you learn one or two languages from birth. However, learning a second language later on in childhood after gaining proficiency in the first (native) language does in fact modify the brain’s structure, specifically the brain’s inferior frontal cortex. The left inferior frontal cortex became thicker and the right inferior frontal cortex became thinner. The cortex is a multi-layered mass of neurons that plays a major role in cognitive functions such as thought, language, consciousness and memory.

The study suggests that the task of acquiring a second language after infancy stimulates new neural growth and connections among neurons in ways seen in acquiring complex motor skills such as juggling. The study’s authors speculate that the difficulty that some people have in learning a second language later in life could be explained at the structural level.

“The later in childhood that the second language is acquired, the greater are the changes in the inferior frontal cortex,” said Dr. Denise Klein, researcher in The Neuro’s Cognitive Neuroscience Unit and a lead author on the paper published in the journal Brain and Language. “Our results provide structural evidence that age of acquisition is crucial in laying down the structure for language learning.”

Using a software program developed at The Neuro, the study examined MRI scans of 66 bilingual and 22 monolingual men and women living in Montreal. The work was supported by a grant from the Natural Science and Engineering Research Council of Canada and from an Oxford McGill Neuroscience Collaboration Pilot project.

(Source: mcgill.ca)

Filed under brain development language frontal cortex cognitive function neuroscience psychology science

321 notes

Poor concentration: Poverty reduces brainpower needed for navigating other areas of life

Poverty and all its related concerns require so much mental energy that the poor have less remaining brainpower to devote to other areas of life, according to research based at Princeton University. As a result, people of limited means are more likely to make mistakes and bad decisions that may be amplified by — and perpetuate — their financial woes.

Published in the journal Science, the study presents a unique perspective regarding the causes of persistent poverty. The researchers suggest that being poor may keep a person from concentrating on the very avenues that would lead them out of poverty. A person’s cognitive function is diminished by the constant and all-consuming effort of coping with the immediate effects of having little money, such as scrounging to pay bills and cut costs. Thus, a person is left with fewer “mental resources” to focus on complicated, indirectly related matters such as education, job training and even managing their time.

In a series of experiments, the researchers found that pressing financial concerns had an immediate impact on the ability of low-income individuals to perform on common cognitive and logic tests. On average, a person preoccupied with money problems exhibited a drop in cognitive function similar to a 13-point dip in IQ, or the loss of an entire night’s sleep.

But when their concerns were benign, low-income individuals performed competently, at a similar level to people who were well off, said corresponding author Jiaying Zhao, who conducted the study as a doctoral student in the lab of co-author Eldar Shafir, Princeton’s William Stewart Tod Professor of Psychology and Public Affairs. Zhao and Shafir worked with Anandi Mani, an associate professor of economics at the University of Warwick in Britain, and Sendhil Mullainathan, a Harvard University economics professor.

"These pressures create a salient concern in the mind and draw mental resources to the problem itself. That means we are unable to focus on other things in life that need our attention," said Zhao, who is now an assistant professor of psychology at the University of British Columbia.

"Previous views of poverty have blamed poverty on personal failings, or an environment that is not conducive to success," she said. "We’re arguing that the lack of financial resources itself can lead to impaired cognitive function. The very condition of not having enough can actually be a cause of poverty."

The mental tax that poverty can put on the brain is distinct from stress, Shafir explained. Stress is a person’s response to various outside pressures that — according to studies of arousal and performance — can actually enhance a person’s functioning, he said. In the Science study, Shafir and his colleagues instead describe an immediate rather than chronic preoccupation with limited resources that can be a detriment to unrelated yet still important tasks.

"Stress itself doesn’t predict that people can’t perform well — they may do better up to a point," Shafir said. "A person in poverty might be at the high part of the performance curve when it comes to a specific task and, in fact, we show that they do well on the problem at hand. But they don’t have leftover bandwidth to devote to other tasks. The poor are often highly effective at focusing on and dealing with pressing problems. It’s the other tasks where they perform poorly."

The fallout of neglecting other areas of life may loom larger for a person just scraping by, Shafir said. Late fees tacked on to a forgotten rent payment, a job lost because of poor time-management — these make a tight money situation worse. And as people get poorer, they tend to make difficult and often costly decisions that further perpetuate their hardship, Shafir said. He and Mullainathan were co-authors on a 2012 Science paper that reported a higher likelihood of poor people to engage in behaviors that reinforce the conditions of poverty, such as excessive borrowing.

"They can make the same mistakes, but the outcomes of errors are more dear," Shafir said. "So, if you live in poverty, you’re more error prone and errors cost you more dearly — it’s hard to find a way out."

The first set of experiments took place in a New Jersey mall between 2010 and 2011 with roughly 400 subjects chosen at random. Their median annual income was around $70,000 and the lowest income was around $20,000. The researchers created scenarios wherein subjects had to ponder how they would solve financial problems, for example, whether they would handle a sudden car repair by paying in full, borrowing money or putting the repairs off. Participants were assigned either an “easy” or “hard” scenario in which the cost was low or high — such as $150 or $1,500 for the car repair. While participants pondered these scenarios, they performed common fluid-intelligence and cognition tests.

Subjects were divided into a “poor” group and a “rich” group based on their income. The study showed that when the scenarios were easy — the financial problems not too severe — the poor and rich performed equally well on the cognitive tests. But when they thought about the hard scenarios, people at the lower end of the income scale performed significantly worse on both cognitive tests, while the rich participants were unfazed.
To better gauge the influence of poverty in natural contexts, between 2010 and 2011 the researchers also tested 464 sugarcane farmers in India who rely on the annual harvest for at least 60 percent of their income. Because sugarcane harvests occur once a year, these are farmers who find themselves rich after harvest and poor before it. Each farmer was given the same tests before and after the harvest, and performed better on both tests post-harvest compared to pre-harvest.
The cognitive effect of poverty the researchers found relates to the more general influence of “scarcity” on cognition, which is the larger focus of Shafir’s research group. Scarcity in this case relates to any deficit — be it in money, time, social ties or even calories — that people experience in trying to meet their needs. Scarcity consumes “mental bandwidth” that would otherwise go to other concerns in life, Zhao said.
"These findings fit in with our story of how scarcity captures attention. It consumes your mental bandwidth," Zhao said. "Just asking a poor person to think about hypothetical financial problems reduces mental bandwidth. This is an acute, immediate impact, and has implications for scarcity of resources of any kind."
"We documented similar effects among people who are not otherwise poor, but on whom we imposed scarce resources," Shafir added. "It’s not about being a poor person — it’s about living in poverty."
Many types of scarcity are temporary and often discretionary, said Shafir, who is co-author with Mullainathan of the book, “Scarcity: Why Having Too Little Means So Much,” to be published in September. For instance, a person pressed for time can reschedule appointments, cancel something or even decide to take on less.
"When you’re poor you can’t say, ‘I’ve had enough, I’m not going to be poor anymore.’ Or, ‘Forget it, I just won’t give my kids dinner, or pay rent this month.’ Poverty imposes a much stronger load that’s not optional and in very many cases is long lasting," Shafir said. "It’s not a choice you’re making — you’re just reduced to few options. This is not something you see with many other types of scarcity."
The researchers suggest that services for the poor should accommodate the dominance that poverty has on a person’s time and thinking. Such steps would include simpler aid forms and more guidance in receiving assistance, or training and educational programs structured to be more forgiving of unexpected absences, so that a person who has stumbled can more easily try again.
"You want to design a context that is more scarcity proof," said Shafir, noting that better-off people have access to regular support in their daily lives, be it a computer reminder, a personal assistant, a housecleaner or a babysitter.
"There’s very little you can do with time to get more money, but a lot you can do with money to get more time," Shafir said. "The poor, who our research suggests are bound to make more mistakes and pay more dearly for errors, inhabit contexts often not designed to help."

Poor concentration: Poverty reduces brainpower needed for navigating other areas of life

Poverty and all its related concerns require so much mental energy that the poor have less remaining brainpower to devote to other areas of life, according to research based at Princeton University. As a result, people of limited means are more likely to make mistakes and bad decisions that may be amplified by — and perpetuate — their financial woes.

Published in the journal Science, the study presents a new perspective on the causes of persistent poverty. The researchers suggest that being poor may keep a person from concentrating on the very avenues that would lead them out of poverty. A person’s cognitive function is diminished by the constant and all-consuming effort of coping with the immediate effects of having little money, such as scrounging to pay bills and cut costs. As a result, a person is left with fewer “mental resources” to focus on complicated, indirectly related matters such as education, job training and even managing their time.

In a series of experiments, the researchers found that pressing financial concerns had an immediate impact on the ability of low-income individuals to perform on common cognitive and logic tests. On average, a person preoccupied with money problems exhibited a drop in cognitive function similar to a 13-point dip in IQ, or the loss of an entire night’s sleep.

But when their concerns were benign, low-income individuals performed competently, at a similar level to people who were well off, said corresponding author Jiaying Zhao, who conducted the study as a doctoral student in the lab of co-author Eldar Shafir, Princeton’s William Stewart Tod Professor of Psychology and Public Affairs. Zhao and Shafir worked with Anandi Mani, an associate professor of economics at the University of Warwick in Britain, and Sendhil Mullainathan, a Harvard University economics professor.

"These pressures create a salient concern in the mind and draw mental resources to the problem itself. That means we are unable to focus on other things in life that need our attention," said Zhao, who is now an assistant professor of psychology at the University of British Columbia.

"Previous views of poverty have blamed poverty on personal failings, or an environment that is not conducive to success," she said. "We’re arguing that the lack of financial resources itself can lead to impaired cognitive function. The very condition of not having enough can actually be a cause of poverty."

The mental tax that poverty can put on the brain is distinct from stress, Shafir explained. Stress is a person’s response to various outside pressures that — according to studies of arousal and performance — can actually enhance a person’s functioning, he said. In the Science study, Shafir and his colleagues instead describe an immediate rather than chronic preoccupation with limited resources that can be a detriment to unrelated yet still important tasks.

"Stress itself doesn’t predict that people can’t perform well — they may do better up to a point," Shafir said. "A person in poverty might be at the high part of the performance curve when it comes to a specific task and, in fact, we show that they do well on the problem at hand. But they don’t have leftover bandwidth to devote to other tasks. The poor are often highly effective at focusing on and dealing with pressing problems. It’s the other tasks where they perform poorly."

The fallout of neglecting other areas of life may loom larger for a person just scraping by, Shafir said. Late fees tacked on to a forgotten rent payment, a job lost because of poor time management — these make a tight money situation worse. And as people get poorer, they tend to make difficult and often costly decisions that further perpetuate their hardship, Shafir said. He and Mullainathan were co-authors on a 2012 Science paper reporting that poor people are more likely to engage in behaviors that reinforce the conditions of poverty, such as excessive borrowing.

"They can make the same mistakes, but the outcomes of errors are more dear," Shafir said. "So, if you live in poverty, you’re more error prone and errors cost you more dearly — it’s hard to find a way out."

The first set of experiments took place in a New Jersey mall between 2010 and 2011 with roughly 400 subjects chosen at random. Their median annual income was around $70,000 and the lowest income was around $20,000. The researchers created scenarios wherein subjects had to ponder how they would solve financial problems, for example, whether they would handle a sudden car repair by paying in full, borrowing money or putting the repairs off. Participants were assigned either an “easy” or “hard” scenario in which the cost was low or high — such as $150 or $1,500 for the car repair. While participants pondered these scenarios, they performed common fluid-intelligence and cognition tests.

Subjects were divided into a “poor” group and a “rich” group based on their income. The study showed that when the scenarios were easy — the financial problems not too severe — the poor and rich performed equally well on the cognitive tests. But when they thought about the hard scenarios, people at the lower end of the income scale performed significantly worse on both cognitive tests, while the rich participants were unfazed.
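The design described above boils down to a two-by-two comparison (income group × scenario difficulty), where the signature result is an interaction: only the poor-group/hard-scenario cell drops. A minimal sketch of that comparison, with invented numbers (the study reports effect sizes, not these cell means), might look like:

```python
# Toy sketch of the mall experiment's 2x2 design (income group x scenario
# difficulty). All numbers are invented for illustration.
from statistics import mean

scores = {
    ("poor", "easy"): [0.80, 0.78, 0.82, 0.79],
    ("poor", "hard"): [0.61, 0.58, 0.64, 0.60],  # the only impaired cell
    ("rich", "easy"): [0.81, 0.79, 0.80, 0.82],
    ("rich", "hard"): [0.80, 0.78, 0.81, 0.79],
}

cell_means = {cell: mean(vals) for cell, vals in scores.items()}

# How much does the hard scenario cost each group?
gap_poor = cell_means[("poor", "easy")] - cell_means[("poor", "hard")]
gap_rich = cell_means[("rich", "easy")] - cell_means[("rich", "hard")]

# A positive interaction means difficulty hurts the poor group more.
interaction = gap_poor - gap_rich
print(f"easy-hard gap, poor group: {gap_poor:.3f}")
print(f"easy-hard gap, rich group: {gap_rich:.3f}")
print(f"interaction (extra cost of difficulty when poor): {interaction:.3f}")
```

In this toy data the rich group's gap is near zero while the poor group's is large, which is the "unfazed rich, impaired poor" pattern the study describes.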

To better gauge the influence of poverty in natural contexts, between 2010 and 2011 the researchers also tested 464 sugarcane farmers in India who rely on the annual harvest for at least 60 percent of their income. Because sugarcane harvests occur once a year, these are farmers who find themselves rich after harvest and poor before it. Each farmer was given the same tests before and after the harvest, and performed better on both tests post-harvest compared to pre-harvest.
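Because each farmer was tested both before and after harvest, the comparison is within-subject: each farmer serves as their own control. A minimal sketch of that paired analysis, with invented scores, might look like:

```python
# Toy paired (within-subject) comparison: each farmer is their own control.
# Scores are invented for illustration; the actual study tested 464 farmers.
pre_harvest  = [12, 14, 11, 15, 13, 10]   # tested while poor, before harvest
post_harvest = [15, 16, 14, 17, 15, 13]   # same farmers after being paid

# Positive difference = better performance post-harvest.
diffs = [post - pre for pre, post in zip(pre_harvest, post_harvest)]
mean_improvement = sum(diffs) / len(diffs)

print(f"mean post-minus-pre improvement: {mean_improvement:.2f} points")
```

Pairing the scores this way removes stable between-farmer differences (ability, age, education) from the comparison, leaving the effect of the pre- versus post-harvest financial situation.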

The cognitive effect of poverty the researchers found relates to the more general influence of “scarcity” on cognition, which is the larger focus of Shafir’s research group. Scarcity in this case relates to any deficit — be it in money, time, social ties or even calories — that people experience in trying to meet their needs. Scarcity consumes “mental bandwidth” that would otherwise go to other concerns in life, Zhao said.

"These findings fit in with our story of how scarcity captures attention. It consumes your mental bandwidth," Zhao said. "Just asking a poor person to think about hypothetical financial problems reduces mental bandwidth. This is an acute, immediate impact, and has implications for scarcity of resources of any kind."

"We documented similar effects among people who are not otherwise poor, but on whom we imposed scarce resources," Shafir added. "It’s not about being a poor person — it’s about living in poverty."

Many types of scarcity are temporary and often discretionary, said Shafir, who is co-author with Mullainathan of the book, “Scarcity: Why Having Too Little Means So Much,” to be published in September. For instance, a person pressed for time can reschedule appointments, cancel something or even decide to take on less.

"When you’re poor you can’t say, ‘I’ve had enough, I’m not going to be poor anymore.’ Or, ‘Forget it, I just won’t give my kids dinner, or pay rent this month.’ Poverty imposes a much stronger load that’s not optional and in very many cases is long lasting," Shafir said. "It’s not a choice you’re making — you’re just reduced to few options. This is not something you see with many other types of scarcity."

The researchers suggest that services for the poor should accommodate the dominance that poverty has on a person’s time and thinking. Such steps would include simpler aid forms and more guidance in receiving assistance, or training and educational programs structured to be more forgiving of unexpected absences, so that a person who has stumbled can more easily try again.

"You want to design a context that is more scarcity proof," said Shafir, noting that better-off people have access to regular support in their daily lives, be it a computer reminder, a personal assistant, a housecleaner or a babysitter.

"There’s very little you can do with time to get more money, but a lot you can do with money to get more time," Shafir said. "The poor, who our research suggests are bound to make more mistakes and pay more dearly for errors, inhabit contexts often not designed to help."

Filed under poverty cognitive function cognitive performance psychology neuroscience science

1,506 notes

Size of personal space is affected by anxiety

The space surrounding the body (known by scientists as ‘peripersonal space’), which has previously been thought of as having a gradual boundary, has been given physical limits by new research into the relationship between anxiety and personal space.

New findings have allowed scientists to define the limit of the ‘peripersonal space’ surrounding the face as 20-40cm away. The study is published today in The Journal of Neuroscience.

As well as having numerical limits, the specific distance was found to vary between individuals. Those with anxiety traits were found to have a larger peripersonal space.

In an experiment, Dr Chiara Sambo and Dr Giandomenico Iannetti from UCL recorded the blink reflex - a defensive response to potentially dangerous stimuli - at varying distances from the subject’s face. They then compared the reflex data to the results of an anxiety test in which subjects rated their levels of anxiety in various situations.

Those who scored highly on the anxiety test tended to react more strongly to stimuli 20cm from their face than subjects with low scores. Researchers classified those who reacted more strongly to farther-away stimuli as having a large ‘defensive peripersonal space’ (DPPS).

A larger DPPS means that those with high anxiety scores perceive threats as closer than non-anxious individuals do when the stimulus is the same distance away. The research has led scientists to think that the brain controls the strength of these defensive reflexes even though it does not consciously initiate them.

Dr Giandomenico Iannetti (UCL Neuroscience, Physiology and Pharmacology), lead author of the study, said: “This finding is the first objective measure of the size of the area surrounding the face that each individual considers at high-risk, and thus wants to protect through the most effective defensive motor responses.”

In the experiment, a group of 15 people aged 20 to 37 was studied. Researchers applied an intense electrical stimulus to a specific nerve in the hand, which causes the subject to blink. This is called the hand-blink reflex (HBR), and it is not under the conscious control of the brain.

This reflex was monitored with the subject holding their own hand at 4, 20, 40 and 60 cm away from the face. The magnitude of the reflex was used to determine how dangerous each stimulus was considered, and a larger response for stimuli further from the body indicated a larger DPPS.

Subjects also completed an anxiety test in which they self-scored their predicted level of anxiety in different situations. The results of this test were used to classify individuals as more or less anxious, and were compared to the data from the reflex experiment to determine if there was a link between the two tests.
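One way to picture this analysis: reduce each subject’s reflex magnitudes at the four distances to a single DPPS estimate, then compare estimates between more and less anxious subjects. The sketch below is a loose illustration, not the study’s actual method; the half-maximum threshold and all magnitudes are invented assumptions:

```python
# Toy DPPS estimate from hand-blink reflex (HBR) magnitudes recorded with
# the hand at 4, 20, 40 and 60 cm from the face. All values are invented.
DISTANCES_CM = [4, 20, 40, 60]

def dpps_estimate(hbr_magnitudes):
    """Farthest distance at which the reflex is still at least half as
    strong as at 4 cm -- a crude stand-in for the defensive boundary."""
    threshold = hbr_magnitudes[0] / 2
    boundary = DISTANCES_CM[0]
    for dist, mag in zip(DISTANCES_CM, hbr_magnitudes):
        if mag >= threshold:
            boundary = dist
    return boundary

low_anxiety  = [10.0, 4.0, 2.0, 1.5]   # response fades quickly with distance
high_anxiety = [10.0, 9.0, 7.0, 2.0]   # strong responses persist farther out

print(dpps_estimate(low_anxiety))    # small defensive zone
print(dpps_estimate(high_anxiety))   # larger defensive zone
```

In this toy data the anxious profile keeps a strong reflex out to a greater distance, yielding the larger DPPS the study associates with high anxiety scores.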

Scientists hope that the findings can be used as a test linking defensive behaviours to levels of anxiety. This could be particularly useful for assessing risk-evaluation ability in people whose jobs involve dangerous situations, such as firefighters, police officers and military personnel.

Filed under peripersonal space defensive peripersonal space anxiety neuroscience psychology science
