Neuroscience

Articles and news from the latest research reports.

Posts tagged bias

Survival reflex sparks male perception bias, study finds

You glimpse a stranger standing in the street. The light is hazy and the person’s face and clothing are indistinct. Who is it? Chances are you will think it is a man—and the reason for this is a survival reflex, according to an unusual study published on Wednesday.

Psychologists at the University of California at Los Angeles delved into the visual cues we rely on when we assess other people.

They asked male and female students to look at 21 human silhouettes, all of them the same height, but with a progressively changing waist-to-hip ratio. The figures began with an obviously female “hourglass” figure and, after incremental changes, ended with an obviously male “hunk” figure. The volunteers were asked to say whether each of the 21 silhouettes was male or female, the idea being to identify the point where they saw a shift in gender.

What was striking, said researcher Kerri Johnson, was the volunteers’ tendency to deem a shape male whenever it was ambiguous, even when it could readily have been taken for a woman. “I was surprised by the size of the effect. It was a much stronger effect than I ever imagined,” Johnson said in a phone interview.

In the natural world, the demarcation between a woman’s shape and a man’s shape comes when the ratio of the waist and hip circumferences is 0.8. But the volunteers, on average, placed the boundary at 0.68. In other words, an identifiable female shape for them was close to the idealised curves of a pinup.
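The shift can be expressed as a simple categorisation rule. The sketch below is purely illustrative, not the study’s actual procedure; only the two threshold values (0.8 and 0.68) come from the article, and the function itself is hypothetical:

```python
# Illustrative sketch of the reported perceptual shift.
# The 0.8 anatomical boundary and the 0.68 perceived boundary are the
# figures reported in the article; this categorisation function is a
# hypothetical illustration, not the study's method.

ANATOMICAL_BOUNDARY = 0.80  # WHR below which a body is typically female
PERCEIVED_BOUNDARY = 0.68   # average boundary placed by the volunteers

def categorise(whr, boundary):
    """Label a silhouette 'female' if its waist-to-hip ratio is below the boundary."""
    return "female" if whr < boundary else "male"

# A silhouette with WHR 0.75 falls in the anatomically female range,
# but the volunteers' shifted boundary would label it male:
print(categorise(0.75, ANATOMICAL_BOUNDARY))  # female
print(categorise(0.75, PERCEIVED_BOUNDARY))   # male
```

Any silhouette whose ratio falls between the two boundaries is anatomically female yet, on average, perceived as male, which is precisely the bias the study measured.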

Johnson’s team carried out three further studies, using slightly different methods to check whether their approach had skewed the results, and found that the bias in favour of men was unchanged. Are these errors in perception? Not so, said Johnson, who believes it to be an ancestral survival mechanism.

A man is likelier than a woman to pose a serious physical threat, and our default perception is to prepare for risk: it’s better to be safe than sorry. “We suspect that this might be for a self-protective reason,” she said. “If you are walking down a dark alley at night, a woman poses no great physical threat to you in general, but if you encounter an unknown man, he’s more likely to have a physical formidability that could pose some risks.”

Johnson conceded that there could be cultural or ethnic factors which influence judgement but argued that the same kind of bias would prevail anywhere. “I think it’s entirely likely that if we were to test this in different populations we would probably have the same basic effect, the same pattern of judgement, although the strength of the judgement might vary,” she said.

The findings show how gender stereotypes can be reinforced, sometimes dangerously so, the study said. A woman may struggle if her body shape is perceived as masculine and therefore unattractive. “Consistent with other research, this is likely to produce preferences for extreme body shapes, particularly for women,” said the study.

The paper appears in the British journal Proceedings of the Royal Society B.

(Source: medicalxpress.com)

Filed under perception bias survival mechanism gender stereotypes body shape neuroscience psychology science

Opinion: Bias Is Unavoidable

By Lisa Cosgrove | August 7, 2012

It is part of the human condition to have implicit biases—and remain blissfully ignorant of them. Academic researchers, scientists, and clinicians are no exception; they are as marvelously flawed as everyone else. But it is not the cognitive bias that’s the problem. Rather, the denial that there is a problem is where the issues arise. Indeed, our capacity for self-deception was beautifully captured in the title of a recent book addressing researchers’ self-justificatory strategies, Mistakes Were Made (But Not by Me).

Illustration by Dusan Petricic

Decades of research have demonstrated that cognitive biases are commonplace and very difficult to eradicate, and more recent studies suggest that disclosure of financial conflicts of interest may actually worsen bias. This is because bias most often manifests in subtle ways unbeknownst to the researcher or clinician, and thus is usually implicit and unintentional. For example, although there was no research misconduct or fraud, re-evaluations of liver tissue from rats exposed to the drug dioxin reached different conclusions about liver cancer in those rats: compared to the original investigation, an industry-sponsored re-evaluation identified fewer tissue slides as cancerous, and this finding affected policy recommendations (water quality standards were weakened). (See also Brown, Cold Spring Harbor Laboratory Press, 13–28, 1991.) This example is just one of many that point to a generic risk: a financial conflict of interest may compromise research or undermine public trust.

Indeed, recent neuroscience investigations demonstrate that effective decision-making involves not just cognitive centers but also emotional areas such as the hippocampus and amygdala. This interplay of cognitive-emotional processing allows conflicts of interest to affect decision-making in a way that is hidden from the person making the decision.

Despite these findings, many individuals are dismissive of the idea that researchers’ financial ties to industry are problematic. For example, in a recent essay in The Scientist, Thomas Stossel of Brigham & Women’s Hospital and Harvard Medical School asked, “How could unrestricted grants, ideal for research that follows up serendipitous findings, possibly be problematic? The money leads to better research that can benefit patients.” Many argue that subjectivity in the research process and the potential for bias can be eradicated by strict adherence to the scientific method and transparency about industry relationships. Together, scientists believe, these practices can guarantee evidence-based research that leads to the discovery and dissemination of “objective” scientific truths. The assumption is that the reporting of biased results is a “bad apple” problem—a few corrupt individuals engaging in research fraud. But what we have today is a bad barrel.

Some have begun to use the analytic framework of “institutional corruption” to bring attention to the fact that the trouble is not with a few corrupt individuals hurting an organization whose integrity is basically intact. Institutional corruption refers to the systemic and usually legal—and often accepted and widely defended—practices that bring an organization or institution off course, undermine its mission and effectiveness, and weaken public trust. Although the entire field of biomedicine has come under scrutiny because of concerns about an improper dependence on industry and all medical specialties have struggled with financial conflicts of interest, psychiatry has been particularly troubled, being described by some as having a crisis of credibility.

This credibility crisis has played out most noticeably in the public controversy surrounding the latest revision to the Diagnostic and Statistical Manual of Mental Disorders (DSM). The DSM is often referred to as the “Bible” of mental disorders, and is produced by the American Psychiatric Association (APA), a professional organization with a long history of industry ties. DSM-5, the revised edition scheduled for publication in May 2013, has already been criticized for “disease mongering,” or pathologizing normal behavior. Concerns have been raised that because the individuals responsible for making changes and adding new disorders have strong and long-standing financial associations with pharmaceutical companies that manufacture the drugs used to treat these disorders, the revision process may be compromised by undue industry influence.

Researchers, clinicians, and psychiatrists who served on the DSM-IV task force have pointed out that adding new disorders or lowering the diagnostic threshold of previously included disorders may create “false positives”: individuals incorrectly identified as having a mental disorder and prescribed psychotropic medication. For example, there was a heated debate over whether DSM-5 would pathologize the normal grieving process by eliminating the bereavement exclusion for major depressive disorder (MDD). The concern was that widening the diagnostic boundaries of depression to include grief as a “qualifying event,” thereby allowing a diagnosis of MDD just two weeks after the loss of a loved one, would falsely identify individuals as depressed. Although it is not the APA’s intent to play handmaiden to industry, the reality is that such a change would result in more people being prescribed antidepressants following the loss of a loved one. In fact, psychiatrist Allen Frances, who chaired the DSM-IV task force, has noted that DSM-5 would be a “bonanza” for drug companies.

After receiving criticism about potential bias in the development of the DSM-IV, the APA required that DSM-5 panel members file financial disclosures. Additionally, during their tenure on the panels they were not allowed to receive more than $10,000 from pharmaceutical companies or hold more than $50,000 in pharmaceutical-company stock (unrestricted research grants were excluded from this policy). Most diagnostic panels, however, still have a majority of members with financial ties to industry. Specifically, 67 percent of the 12-person panel for mood disorders, 83 percent of the 12-person panel for psychotic disorders, and all seven members of the sleep/wake disorders panel (which now includes “Restless Legs Syndrome”) have ties to the pharmaceutical companies that manufacture the medications used to treat these disorders or to companies that service the pharmaceutical industry.

Clearly, the new disclosure policy has not been accompanied by any reduction in the financial conflicts of interest of DSM panel members. Moreover, Darrel Regier, speaking on behalf of the APA and in defense of DSM panel members with industry ties, told USA Today, “There’s this assumption that a tie with a company is evidence of bias. But these people can be objective.” However, as science has repeatedly shown, transparency alone cannot mitigate bias and is an insufficient solution for protecting the integrity of the revision process. Objectivity is not a product that can be easily secured by adherence to the scientific method. Rather, there is a generic risk that a conflict of interest may result in implicit, unintentional bias. Or, as Upton Sinclair put it, “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”

Source: TheScientist

Filed under academia bias neuroscience psychology research science decision making