Neuroscience

Articles and news from the latest research reports.

Posts tagged decision making

Researchers identify decision-making center of brain
Choosing to do something because the perceived benefit outweighs the financial cost is an everyday act, yet little is known about what happens in the brain when a person makes these kinds of decisions. By studying how these cost-benefit decisions are made when people choose to consume alcohol, University of Georgia associate professor of psychology James MacKillop identified distinct profiles of brain activity that are present during these decisions.
"We were interested in understanding how the brain makes decisions about drinking alcohol. Particularly, we wanted to clarify how the brain weighs the pros and cons of drinking," said MacKillop, who directs the Experimental and Clinical Psychopharmacology Laboratory in the UGA Franklin College of Arts and Sciences.
The study combined functional magnetic resonance imaging (fMRI) with a bar laboratory alcohol procedure to see how the cost of alcohol affected people’s preferences. The study group comprised 24 heavy-drinking men, ages 21-31. Participants were given a $15 bar tab and then asked, while in the fMRI scanner, to decide how many drinks they would choose at prices ranging from very low to very high. Their choices translated into real drinks, up to a maximum of eight, which they received in the bar immediately after the scan. Any money not spent on drinks was theirs to keep.
The study applied a neuroeconomic approach, which integrates concepts and methods from psychology, economics and cognitive neuroscience to understand how the brain makes decisions. In this study, participants’ cost-benefit decisions were categorized into those in which drinking was perceived to have all benefit and no cost, to have both benefits and costs, and to have all costs and no benefits. In doing so, MacKillop could dissect the neural mechanisms responsible for different types of cost-benefit decision-making.
"We tried to span several levels of analysis, to think about clinical questions, like why do people choose to drink or not drink alcohol, and then unpack those choices into the underlying units of the brain that are involved," he said.
When participants decided to drink in general, activation was seen in several areas of the cerebral cortex, such as the prefrontal and parietal cortices. However, when the decision to drink was affected by the cost of alcohol, activation involved frontostriatal regions, which are important for the interplay between deliberation and reward value, suggesting that consumption was being suppressed under greater cognitive load. This is the first study of its kind to examine cost-benefit decision-making for alcohol, and the first to apply a framework from economics, called demand curve analysis, to understanding such decisions.
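Demand curve analysis models how consumption falls as price rises. Below is a minimal, illustrative sketch of the exponential demand model widely used in behavioral-economic studies of alcohol (Hursh and Silberberg's formulation); the parameter values are hypothetical and not taken from this study:

```python
import numpy as np

def demand(price, q0=8.0, alpha=0.01, k=2.0):
    """Exponential demand model: predicted drinks purchased at a given price.

    log10(Q) = log10(q0) + k * (exp(-alpha * q0 * price) - 1)
    q0    = consumption when drinks are free
    alpha = elasticity (how quickly consumption decays with price)
    k     = range of consumption in log10 units
    All parameter values here are illustrative.
    """
    return q0 * 10 ** (k * (np.exp(-alpha * q0 * price) - 1))

prices = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
drinks = demand(prices)

# Expenditure (price x quantity) peaks at an intermediate price, Pmax;
# above it, rising cost suppresses consumption faster than price grows.
expenditure = prices * drinks
pmax_index = int(np.argmax(expenditure))
print("predicted drinks:", np.round(drinks, 2))
print("Pmax:", prices[pmax_index])
```

In this framing, scanner choices at prices well below Pmax correspond to decisions where benefit clearly outweighs cost, while choices around Pmax are the conflicted, suppressed-consumption decisions.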
"The brain activity was most differentially active during the suppressed consumption choices, suggesting that participants were experiencing the most conflict," MacKillop said. "We had speculated during the design of the study that the choices not to drink at all might require the most cognitive effort, but that didn’t seem to be the case. Once people decided that the cost of drinking was too high, they didn’t appear to experience a great deal of conflict in terms of the associated brain activity."
These conflicted decisions appeared to be represented by activity in the anterior insula, which previous addiction studies have linked to the motivational circuitry of the brain. In addition to encoding how much people crave or value drugs, this region is believed to process interoceptive experiences, a person’s visceral physiological responses.
"It was interesting that the insula was sensitive to escalating alcohol costs, especially when the costs of drinking outweighed the benefits," MacKillop said. "That means this could be the region of the brain at the intersection of how our rational and irrational systems work with one another. In general, the choices associated with differential brain activity were those in the middle, where people were making choices that reflect the ambivalence between costs and benefits. Where we saw that tension, we saw the most brain activity."
While MacKillop acknowledges the impact this research could have on neuromarketing, or understanding how the brain makes decisions about what to buy, he is more interested in how it can help people with alcohol addiction.
"These findings reveal the distinct neural signatures associated with different kinds of consumption preferences. Now that we have established a way of studying these choices, we can apply this approach to better understanding substance use disorders and improving treatment," he said, adding that comparing fMRI scans from alcoholics with those of people with normal drinking habits could potentially tease out brain patterns that show what is different between healthy and unhealthy drinkers. "In the past, we have found that behavioral indices of alcohol value predict poor treatment prognosis, but this would permit us to understand the neural basis for negative outcomes."
The research was published in the journal Neuropsychopharmacology March 3. A podcast highlighting this work is available at http://www.nature.com/multimedia/podcast/npp/npp_030314_alcohol.mp3.

Filed under decision making brain activity alcohol addiction neuroimaging neuroscience science

Ever-So-Slight Delay Improves Decision-Making Accuracy
Columbia University Medical Center (CUMC) researchers have found that decision-making accuracy can be improved by postponing the onset of a decision by a mere fraction of a second. The results could further our understanding of neuropsychiatric conditions characterized by abnormalities in cognitive function and lead to new training strategies to improve decision-making in high-stakes environments. The study was published in the March 5 online issue of the journal PLOS ONE.
“Decision making isn’t always easy, and sometimes we make errors on seemingly trivial tasks, especially if multiple sources of information compete for our attention,” said first author Tobias Teichert, PhD, a postdoctoral research scientist in neuroscience at CUMC at the time of the study and now an assistant professor of psychiatry at the University of Pittsburgh. “We have identified a novel mechanism that is surprisingly effective at improving response accuracy.”
The mechanism requires that decision-makers do nothing—just briefly. “Postponing the onset of the decision process by as little as 50 to 100 milliseconds enables the brain to focus attention on the most relevant information and block out irrelevant distractors,” said last author Jack Grinband, PhD, associate research scientist in the Taub Institute and assistant professor of clinical radiology (physics). “This way, rather than working longer or harder at making the decision, the brain simply postpones the decision onset to a more beneficial point in time.”
In making decisions, the brain integrates many small pieces of potentially contradictory sensory information. “Imagine that you’re coming up to a traffic light—the target—and need to decide whether the light is red or green,” said Dr. Teichert. “There is typically little ambiguity, and you make the correct decision quickly, in a matter of tens of milliseconds.”
The decision process itself, however, does not distinguish between relevant and irrelevant information. Hence, a task becomes more difficult if irrelevant information—a distractor—interferes with the processing of the target. Distractors are present all the time; in this case, they might take the form of traffic lights regulating traffic in other lanes. Though the brain is able to enhance relevant information and filter out distractions, these mechanisms take time. If the decision process starts while the brain is still processing irrelevant information, errors can occur.
Studies have shown that response accuracy can be improved by prolonging the decision process, to allow the brain time to collect more information. Because accuracy is increased at the cost of longer reaction times, this process is referred to as the “speed-accuracy trade-off.” The researchers thought that a more effective way to reduce errors might be to delay the decision process so that it starts out with better information.
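The intuition can be illustrated with a toy evidence-accumulation (drift-diffusion-style) simulation in which early sensory evidence is dominated by a distractor pulling toward the wrong response: starting accumulation after the distractor window yields higher accuracy than starting immediately. All parameters below are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def accuracy(onset_delay, n_trials=4000, n_steps=400, threshold=10.0,
             drift_target=0.2, drift_distractor=-0.4, distractor_steps=12):
    """Fraction of trials on which noisy evidence first hits the correct bound.

    The distractor makes the net drift negative (wrong direction) for the
    first `distractor_steps` steps; delaying decision onset by `onset_delay`
    steps skips that contaminated evidence entirely.
    """
    drift = np.full(n_steps, drift_target)
    drift[:distractor_steps] = drift_distractor
    drift[:onset_delay] = 0.0            # decision process not yet engaged
    noise = rng.standard_normal((n_trials, n_steps))
    noise[:, :onset_delay] = 0.0
    paths = np.cumsum(drift + noise, axis=1)
    hit_pos = paths >= threshold          # +threshold = correct response
    hit_neg = paths <= -threshold
    t_pos = np.where(hit_pos.any(axis=1), hit_pos.argmax(axis=1), n_steps)
    t_neg = np.where(hit_neg.any(axis=1), hit_neg.argmax(axis=1), n_steps)
    return float(np.mean(t_pos < t_neg))  # correct bound reached first

acc_immediate = accuracy(onset_delay=0)
acc_delayed = accuracy(onset_delay=12)    # roughly skip the distractor window
print("immediate onset:", acc_immediate)
print("delayed onset:  ", acc_delayed)
```

Note that the delayed condition does not collect more evidence overall, so this is not a speed-accuracy trade-off; accumulation simply begins once attention has settled on the target, which is the kind of mechanism the researchers propose.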
The research team conducted two experiments to test this hypothesis. In the first, subjects were shown what looked like a swarm of randomly moving dots (the target stimulus) on a computer monitor and were asked to judge whether the overall motion was to the left or right. A second, brighter set of moving dots (the distractor) appeared simultaneously in the same location, obscuring the motion of the target. When the distractor dots moved in the same direction as the target dots, subjects performed with near-perfect accuracy, but when the distractor dots moved in the opposite direction, the error rate increased. The subjects were asked to perform the task either as quickly as possible or as accurately as possible; they were free to respond at any time after the onset of the stimulus.
The second experiment was similar to the first, except that the subjects also heard regular clicks, indicating when they had to respond. The time allowed for viewing the dots varied between 17 and 500 milliseconds. This condition simulates real-life situations, such as driving, where the time to respond is beyond the driver’s control. “Manipulating how long the subject viewed the stimulus before responding allowed us to determine how quickly the brain is able to block out the distractors and focus on the target dots,” said Dr. Grinband.
“In this situation, it takes about 120 milliseconds to shift attention from one stimulus (the bright distractors) to another (the darker targets),” said Dr. Grinband. “To our knowledge, that’s something that no one has ever measured before.”
The experiments also revealed that it’s more beneficial to delay rather than prolong the decision process. The delay allows attention to be focused on the target stimulus and helps prevent irrelevant information from interfering with the decision process. “Basically, by delaying decision onset—simply by doing nothing—you are more likely to make a correct decision,” said Dr. Teichert.
Finally, the results showed that decision onset is, to some extent, under cognitive control. “The subjects automatically used this mechanism to improve response accuracy,” said Dr. Teichert. “However, we don’t think that they were aware that they were doing so. The process seems to go on behind the scenes. We hope to devise training strategies to bring the mechanism under conscious control.”
“This might be the first scientific study to justify procrastination,” Dr. Teichert said. “On a more serious note, our study provides important insights into fundamental brain processes and yields clues as to what might be going wrong in diseases such as ADHD and schizophrenia. It also could lead to new training strategies to improve decision making in complex high-stakes environments, such as air traffic control towers and military combat.”

Filed under decision making attention cognition psychology neuroscience science

Study uncovers surprising differences in brain activity of alcohol-dependent women

A new Indiana University study comparing the brain activity of alcohol-dependent women with that of women who were not addicted found stark and surprising differences, leading to intriguing questions about the brain network functions of addicted women as they make risky decisions about when and what to drink.

The study used functional magnetic resonance imaging, or fMRI, to study differences between patterns of brain network activation in the two groups of women. The findings indicate that the anterior insular region of the brain may be implicated in the process, suggesting a possible new target of treatment for alcohol-dependent women.

"We see that the network dynamics of alcohol-dependent women may be really different from that of healthy controls in a drinking-related task," said Lindsay Arcurio, a graduate student in the Department of Psychological and Brain Sciences. "We have evidence to suggest alcohol-dependent women have trouble switching between networks of the brain."

The research is part of a larger new effort to understand the differences between men and women with respect to alcohol. Arcurio said most of the research on alcohol dependence has been conducted with men or groups of men and women. Yet several factors make looking at women “really important.”

One such factor is that the physiological effects of drinking alcohol, which include liver damage, heart disease or breast cancer, set in much earlier in women than in men. For this reason, the suggested limit on the number of drinks per week that women can safely consume is eight, whereas for men, it is 14. Secondly, binge-drinking in women is on the rise. One in five adolescent girls is binge-drinking three times a month. In women between the ages of 18 and 54, that number is one in eight.

A ‘sledgehammer’ approach to defining differences in brain network activation

Research on decision-making mechanisms in alcohol-dependent individuals typically involves a general risk-taking situation in which money or points are at stake. In this study, participants were placed in the fMRI brain scanner and asked to consider low-risk and high-risk situations specifically related to alcohol — what the researchers describe as “ecological” tasks. Participants were then asked to make decisions regarding control stimuli — food as well as a presumably neutral stimulus, a stapler — to observe whether risky behavior was greater with respect to drinking than with these other items. The same picture cues were used to present high-risk and low-risk scenarios, and these two extremes were as follows:

For the low-risk situation, participants were told: Imagine you are at a bar. You are offered a drink, already paid for, with two shots of alcohol, and you have a safe ride home. For the high-risk situation, they were told: You are at a bar and are offered a drink, already paid for, with six shots of alcohol, but you do not have a safe ride home.

The reason for such an extreme contrast between the two situations, Arcurio said, is that “as one of the first ecological tasks used in the scanner, we wanted to take a sledgehammer approach to really find the differences between cases that are definitely high-risk and those that are definitely low-risk.”

The findings, however, reveal an equally sharp contrast between brain network activation in alcohol-dependent women and in the controls.

For the control group, high-risk decisions to drink led to the deactivation of regions associated with "approach behavior," that is, with deciding to take the drink in a risky situation. At the same time, women in the control group activated regions associated with the default mode network, a network traditionally thought to reflect a resting, inactive or relaxed mental state, but which some now speculate plays a role in conceptualizing one's future.

"It gets really interesting," Arcurio said, "comparing this pattern of activation to those in alcohol-dependent women, who behaviorally say they’re more likely to take the high-risk drink compared to the controls. They don’t deactivate anything. In contrast to the controls, alcohol-dependent women activate all three regions in question. They activate regions associated with reward (which release dopamine). They also activate frontal control regions involved in cognitive control and regions associated with the default mode network, involved in resting-state behavior. They are activating everything."

The investigators infer from these findings that alcohol-dependent women have trouble switching between networks. Being unable to activate one region and deactivate another in response to an alcohol-related situation means they are unable to use one strategy over another.

Furthermore, Arcurio said, “a lot of evidence suggests that switching between networks is influenced by the anterior insular and anterior cingulate regions of the brain, and we did find major differences in the insula between the alcohol-dependent women and controls. We’re thinking the issue is pinpointed to that region.”

The researchers are now running analyses to test the hypothesis that the insula helps in this process, which could offer new possibilities for intervention, with both behavioral therapy and medication.

The research is part of a broader program, both planned and underway, to further explore questions about risky decision-making in alcohol-dependent women: studies of adolescent drinking, of risky sexual behavior in alcohol-dependent women, of the interaction of visual networks with decision-making networks, and of the way music (or auditory networks) interacts with decision-making mechanisms in alcohol-dependent women. In the latter experiment, college-age participants choose one song they associate with drinking and one they associate with quiet reflection.

"There’s a lot of Miley Cyrus in the first category," Arcurio said.

(Source: news.indiana.edu)

Filed under alcohol dependence addiction brain activity neuroimaging dopamine decision making neuroscience science

Researchers find brain’s ‘sweet spot’ for love in neurological patient
A region deep inside the brain controls how quickly people make decisions about love, according to new research at the University of Chicago.
The finding, made in an examination of a 48-year-old man who suffered a stroke, provides the first causal clinical evidence that an area of the brain called the anterior insula “plays an instrumental role in love,” said UChicago neuroscientist Stephanie Cacioppo, lead author of the study.
In an earlier paper that analyzed research on the topic, Cacioppo and colleagues defined love as “an intentional state for intense [and long-term] longing for union with another” while lust, or sexual desire, is characterized by an intentional state for a short-term, pleasurable goal.
In this study, the patient made decisions normally about lust but showed slower reaction times when making decisions about love, in contrast to neurologically typical participants matched on age, gender and ethnicity. The findings are presented in a paper, “Selective Decision-Making Deficit in Love Following Damage to the Anterior Insula,” published in the journal Current Trends in Neurology. 
“This distinction has been interpreted to mean that desire is a relatively concrete representation of sensory experiences, while love is a more abstract representation of those experiences,” said Cacioppo, a research associate and assistant professor in psychology. The new data suggest that the posterior insula, which affects sensation and motor control, is implicated in feelings of lust or desire, while the anterior insula has a role in the more abstract representations involved in love.
In the earlier paper, “The Common Neural Bases Between Sexual Desire and Love: A Multilevel Kernel Density fMRI Analysis,” Cacioppo and colleagues examined a number of studies of brain scans that looked at differences between love and lust.
The studies showed consistently that the anterior insula was associated with love, and the posterior insula was associated with lust. However, as in all fMRI studies, the findings were correlational.
“We reasoned that if the anterior insula was the origin of the love response, we would find evidence for that in brain scans of someone whose anterior insula was damaged,” she said. 
In the study, researchers examined a 48-year-old heterosexual man in Argentina who had suffered a stroke that damaged the function of his anterior insula. He was matched with a control group of seven Argentinian heterosexual men of the same age with healthy anterior insulae.
The patient and the control group were shown, in random order, 40 photographs of attractive young women dressed in appealing short and long dresses and were asked whether these women were objects of sexual desire or love. The patient with the damaged anterior insula showed a much slower response when asked if the women in the photos could be objects of love.
“The current work makes it possible to disentangle love from other biological drives,” the authors wrote. Such studies also could help researchers examine feelings of love by studying neurological activity rather than subjective questionnaires.


In the study, researchers examined a 48-year-old heterosexual man in Argentina who had suffered a stroke that damaged the function of his anterior insula. He was matched with a control group of seven Argentinian heterosexual men of the same age whose anterior insulae were healthy.

The patient and the control group were shown, in random order, 40 photographs of attractive young women in short and long dresses and were asked whether these women could be objects of sexual desire or of love. The patient with the damaged anterior insula responded much more slowly when asked whether the women in the photos could be objects of love.
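The release doesn't say how the patient's slowed responses were tested statistically; a standard approach for comparing one patient against a small control group is Crawford and Howell's modified t-test. Here is a minimal sketch, with entirely hypothetical reaction times:

```python
import math

def crawford_howell_t(patient_score, control_scores):
    """Crawford & Howell (1998) modified t-test: compare a single
    patient's score against a small control sample (df = n - 1)."""
    n = len(control_scores)
    m = sum(control_scores) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in control_scores) / (n - 1))
    return (patient_score - m) / (sd * math.sqrt((n + 1) / n))

# Hypothetical reaction times (seconds) for "could she be an object of love?"
controls = [1.1, 1.2, 1.0, 1.3, 1.1, 1.2, 1.15]  # seven matched controls
patient = 2.4                                     # slower on love judgments
t = crawford_howell_t(patient, controls)
```

A large positive t here would indicate the patient is reliably slower than the controls on the love judgments.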

“The current work makes it possible to disentangle love from other biological drives,” the authors wrote. Such studies also could help researchers examine feelings of love by studying neurological activity rather than subjective questionnaires.

Filed under decision making love anterior insula brain activity stroke neuroscience science

196 notes

Pinpointing the Brain’s Arbitrator

We tend to be creatures of habit. In fact, the human brain has a learning system that is devoted to guiding us through routine, or habitual, behaviors. At the same time, the brain has a separate goal-directed system for the actions we undertake only after careful consideration of the consequences. We switch between the two systems as needed. But how does the brain know which system to give control to at any given moment? Enter The Arbitrator.

Researchers at the California Institute of Technology (Caltech) have, for the first time, pinpointed areas of the brain—the inferior lateral prefrontal cortex and frontopolar cortex—that seem to serve as this “arbitrator” between the two decision-making systems, weighing the reliability of the predictions each makes and then allocating control accordingly. The results appear in the current issue of the journal Neuron.

According to John O’Doherty, the study’s principal investigator and director of the Caltech Brain Imaging Center, understanding where the arbitrator is located and how it works could eventually lead to better treatments for brain disorders, such as drug addiction, and psychiatric disorders, such as obsessive-compulsive disorder. These disorders, which involve repetitive behaviors, may be driven in part by malfunctions in the degree to which behavior is controlled by the habitual system versus the goal-directed system.

"Now that we have worked out where the arbitrator is located, if we can find a way of altering activity in this area, we might be able to push an individual back toward goal-directed control and away from habitual control," says O’Doherty, who is also a professor of psychology at Caltech. "We’re a long way from developing an actual treatment based on this for disorders that involve over-egging of the habit system, but this finding has opened up a highly promising avenue for further research."

In the study, participants played a decision-making game on a computer while connected to a functional magnetic resonance imaging (fMRI) scanner that monitored their brain activity. Participants were instructed to try to make optimal choices in order to gather coins of a certain color, which were redeemable for money.

During a pre-training period, the subjects familiarized themselves with the game—moving through a series of on-screen rooms, each of which held different numbers of red, yellow, or blue coins. During the actual game, the participants were told which coins would be redeemable each round and given a choice to navigate right or left at two stages, knowing that they would collect only the coins in their final room. Sometimes all of the coins were redeemable, making the task more habitual than goal-directed. By altering the probability of getting from one room to another, the researchers were able to further test the extent of participants’ habitual and goal-directed behavior while monitoring corresponding changes in their brain activity.
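The published task is more involved, but the structure described above, two left/right choices, probabilistic room transitions, and color-dependent coin values, can be sketched as a toy model. The room layouts, coin counts, and transition probability below are invented for illustration:

```python
import random

# Toy "coin rooms" task. Two left/right choices lead to one of four
# final rooms; only coins of the currently redeemable colors pay out.
COINS = {
    "LL": {"red": 3, "yellow": 0, "blue": 1},
    "LR": {"red": 0, "yellow": 2, "blue": 2},
    "RL": {"red": 1, "yellow": 3, "blue": 0},
    "RR": {"red": 2, "yellow": 1, "blue": 3},
}

def payoff(room, redeemable):
    return sum(n for color, n in COINS[room].items() if color in redeemable)

def other(c):
    return "L" if c == "R" else "R"

def play(first, second, p_intended=0.9, rng=random):
    """Each move goes the intended way with probability p_intended."""
    go = lambda c: c if rng.random() < p_intended else other(c)
    return go(first) + go(second)

def best_plan(redeemable, p_intended=0.9):
    """What a goal-directed agent computes: the expected payoff of each
    plan under the transition probabilities, then the argmax."""
    def expected(first, second):
        total = 0.0
        for a, pa in ((first, p_intended), (other(first), 1 - p_intended)):
            for b, pb in ((second, p_intended), (other(second), 1 - p_intended)):
                total += pa * pb * payoff(a + b, redeemable)
        return total
    return max(((f, s) for f in "LR" for s in "LR"), key=lambda fs: expected(*fs))
```

When only yellow coins pay, the plan heading for room "RL" (three yellow coins) wins. A habitual agent would instead keep repeating whichever cached action paid off recently, which is why rounds where all colors are redeemable push behavior toward habit.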

With the results from those tests in hand, the researchers were able to compare the fMRI data and choices made by the subjects against several computational models they constructed to account for behavior. The model that most accurately matched the experimental data involved the two brain systems making separate predictions about which action to take in a given situation. Receiving signals from those systems, the arbitrator kept track of the reliability of the predictions by measuring the difference between the predicted and actual outcomes for each system. It then used those reliability estimates to determine how much control each system should exert over the individual’s behavior. In this model, the arbitrator ensures that the system making the most reliable predictions at any moment exerts the greatest degree of control over behavior.
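The authors' computational model is more sophisticated, but the arbitration idea in the paragraph above can be sketched as two reliability estimates updated from prediction errors, with control allocated in proportion to reliability. This is my own simplification, not the published model:

```python
class Arbitrator:
    """Each system's reliability is an exponential moving average of how
    small its recent prediction errors were; control over behavior is
    shared in proportion to current reliability."""

    def __init__(self, lr=0.2):
        self.lr = lr
        self.reliability = {"habitual": 0.5, "goal_directed": 0.5}

    def update(self, system, predicted, actual):
        error = min(abs(predicted - actual), 1.0)   # unsigned error, capped at 1
        rel = self.reliability[system]
        # Low recent error pushes reliability toward 1; high error toward 0.
        self.reliability[system] = rel + self.lr * ((1.0 - error) - rel)

    def goal_directed_weight(self):
        """Fraction of behavioral control given to the goal-directed system."""
        g = self.reliability["goal_directed"]
        h = self.reliability["habitual"]
        return g / (g + h)
```

After a run of trials in which the goal-directed system predicts outcomes well and the habitual system does not, the weight climbs above 0.5, i.e. the arbitrator hands control to the goal-directed system.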

"What we’re showing is the existence of higher-level control in the human brain," says Sang Wan Lee, lead author of the new study and a postdoctoral scholar in neuroscience at Caltech. "The arbitrator is basically making decisions about decisions."

In line with previous findings from the O’Doherty lab and elsewhere, the researchers saw in the brain scans that an area known as the posterior putamen was active at times when the model predicted that the habitual system should be calculating prediction values. Going a step further, they examined the connectivity between the posterior putamen and the arbitrator. What they found might explain how the arbitrator sets the weight for the two learning systems: the connection between the arbitrator area and the posterior putamen changed according to whether the goal-directed or habitual system was deemed to be more reliable. However, no such connection effects were found between the arbitrator and brain regions involved in goal-directed learning. This suggests that the arbitrator may work mainly by modulating the activity of the habitual system.

"One intriguing possibility arising from these findings, which we will need to test in future work, is that being in a habitual mode of behavior may be the default state," says O’Doherty. "So when the arbitrator determines you need to be more goal-directed in your behavior, it accomplishes this by inhibiting the activity of the habitual system, almost like pressing the brakes on your car when you are in drive."

Filed under decision making prefrontal cortex arbitrator brain activity habit system neuroscience science

239 notes

What makes us human? Unique brain area linked to higher cognitive powers

Oxford University researchers have identified an area of the human brain that appears unlike anything in the brains of some of our closest relatives.

The brain area pinpointed is known to be intimately involved in some of the most advanced planning and decision-making processes that we think of as being especially human.

'We tend to think that being able to plan into the future, be flexible in our approach and learn from others are things that are particularly impressive about humans. We've identified an area of the brain that appears to be uniquely human and is likely to have something to do with these cognitive powers,' says senior researcher Professor Matthew Rushworth of Oxford University's Department of Experimental Psychology.

MRI imaging of 25 adult volunteers was used to identify key components in the ventrolateral frontal cortex area of the human brain, and how these components were connected up with other brain areas. The results were then compared to equivalent MRI data from 25 macaque monkeys.

Read more

Filed under decision making neuroimaging primates prefrontal cortex cognition neuroscience science

183 notes

Fast eye movements: A possible indicator of more impulsive decision-making

Using a simple study of eye movements, Johns Hopkins scientists report evidence that people who are less patient tend to move their eyes with greater speed. The findings, the researchers say, suggest that the weight people give to the passage of time may be a trait consistently used throughout their brains, affecting the speed with which they make movements, as well as the way they make certain decisions.

Caption: Despite claims to the contrary, the eyes of the Mona Lisa do not make saccades. Credit: Leonardo da Vinci

In a summary of the research to be published Jan. 21 in The Journal of Neuroscience, the investigators note that a better understanding of how the human brain evaluates time when making decisions might also shed light on why malfunctions in certain areas of the brain make decision-making harder for those with neurological disorders like schizophrenia, or for those who have experienced brain injuries.

Principal investigator Reza Shadmehr, Ph.D., professor of biomedical engineering and neuroscience at The Johns Hopkins University, and his team set out to understand why some people are willing to wait and others aren’t. “When I go to the pharmacy and see a long line, how do I decide how long I’m willing to stand there?” he asks. “Are those who walk away and never enter the line also the ones who tend to talk fast and walk fast, perhaps because of the way they value time in relation to rewards?”

To address the question, the Shadmehr team used very simple eye movements, known as saccades, to stand in for other bodily movements. Saccades are the motions that our eyes make as we focus on one thing and then another. “They are probably the fastest movements of the body,” says Shadmehr. “They occur in just milliseconds.” Human saccades are fastest when we are teenagers and slow down as we age, he adds.

In earlier work, using a mathematical theory, Shadmehr and colleagues had shown that, in principle, the speed at which people move could be a reflection of the way the brain calculates the passage of time to reduce the value of a reward. In the current study, the team wanted to test the idea that differences in how subjects moved were a reflection of differences in how they evaluated time and reward.
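The release doesn't give the theory's form, but work in this area often uses hyperbolic temporal discounting, in which a reward's value falls with the time needed to obtain it, so that valuing time steeply favors faster (more effortful) movements. A toy illustration, with all parameter values hypothetical:

```python
def movement_utility(duration, reward=1.0, k=1.0, effort_per_speed=0.02):
    """Hyperbolically discounted reward minus an effort cost that grows
    as the movement gets faster (shorter duration)."""
    return reward / (1.0 + k * duration) - effort_per_speed / duration

def best_duration(k):
    """Movement duration (seconds) maximizing utility, by grid search."""
    candidates = [d / 100.0 for d in range(5, 200)]
    return max(candidates, key=lambda d: movement_utility(d, k=k))

impatient = best_duration(k=4.0)   # steep discounting of time
patient = best_duration(k=0.5)     # shallow discounting
```

The steeper the discount rate k, the shorter the utility-maximizing movement, which is the qualitative link between how a person values time and how fast they move.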

For the study, the team first asked healthy volunteers to look at a screen upon which dots would appear one at a time: first on one side of the screen, then on the other, then back again. A camera recorded their saccades as they looked from one dot to the other. The researchers found a lot of variability in saccade speed among individuals but very little variation within individuals, even when tested at different times and on different days. Shadmehr and his team concluded that saccade speed appears to be an attribute that varies from person to person. “Some people simply make fast saccades,” he says.
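The pattern reported here, large differences between individuals but little variation within an individual, amounts to between-subject variance dwarfing within-subject variance. A quick sketch with hypothetical peak velocities:

```python
from statistics import mean, pvariance

# Hypothetical peak saccade velocities (degrees/second) for three people
# across four sessions each.
sessions = {
    "s1": [420, 430, 425, 415],
    "s2": [530, 540, 535, 545],
    "s3": [610, 600, 605, 615],
}

subject_means = [mean(v) for v in sessions.values()]
between = pvariance(subject_means)                      # variance across subjects
within = mean(pvariance(v) for v in sessions.values())  # average within-subject variance
```

With numbers like these, the between-subject variance is far larger than the within-subject variance, which is what makes saccade speed look like a stable personal attribute.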

To determine whether saccade speed correlated with decision-making and impulsivity, the volunteers were told to watch the screen again. This time, they were given visual commands to look to the right or to the left. When they responded incorrectly, a buzzer sounded.

After becoming accustomed to that part of the test, they were forewarned that during the following round of testing, if they followed the command right away, they would be wrong 25 percent of the time. In those instances, after an undetermined amount of time, the first command would be replaced by a second command to look in the opposite direction.

To pinpoint exactly how long each volunteer was willing to wait to improve his or her accuracy, the researchers adjusted the delay between the two commands based on the volunteer’s previous decision. If a volunteer chose to wait for the second command, they lengthened the delay on each subsequent trial until they found the maximum time that volunteer was willing to wait; even the most patient volunteer waited only 1.5 seconds. If a volunteer chose to act immediately, they shortened the delay to find the minimum time the volunteer was willing to wait to improve his or her accuracy.
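The exact adaptive rule isn't described, but the procedure resembles a staircase or bisection search on the delay. A minimal sketch, where decides_to_wait stands in for a volunteer's choice on a single trial:

```python
def titrate_wait(decides_to_wait, low=0.0, high=3.0, steps=12):
    """Bisection-style staircase (a simplification of the adaptive
    procedure): home in on the longest delay, in seconds, a volunteer
    will tolerate before acting on the first command."""
    for _ in range(steps):
        delay = (low + high) / 2.0
        if decides_to_wait(delay):
            low = delay    # waited: probe a longer delay next time
        else:
            high = delay   # acted early: probe a shorter delay
    return (low + high) / 2.0
```

For a simulated volunteer who tolerates delays up to 1.5 seconds, `titrate_wait(lambda d: d <= 1.5)` converges on roughly 1.5.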

When the speed of the volunteers’ saccades was compared to their impulsivity during the patience test, there was a strong correlation. “It seems that people who make quick movements, at least eye movements, tend to be less willing to wait,” says Shadmehr. “Our hypothesis is that there may be a fundamental link between the way the nervous system evaluates time and reward in controlling movements and in making decisions. After all, the decision to move is motivated by a desire to improve one’s situation, which is a strong motivating factor in more complex decision-making, too.”

(Source: eurekalert.org)

Filed under eye movements saccades decision making patience psychology neuroscience science

102 notes

Assessing Others: Evaluating the Expertise of Humans and Computer Algorithms

How do we come to recognize expertise in another person and integrate new information with our prior assessments of that person’s ability? The brain mechanisms underlying these sorts of evaluations—which are relevant to how we make decisions ranging from whom to hire, whom to marry, and whom to elect to Congress—are the subject of a new study by a team of neuroscientists at the California Institute of Technology (Caltech).
In the study, published in the journal Neuron, Antonio Rangel, Bing Professor of Neuroscience, Behavioral Biology, and Economics, and his associates used functional magnetic resonance imaging (fMRI) to monitor the brain activity of volunteers as they moved through a particular task. Specifically, the subjects were asked to observe the shifting value of a hypothetical financial asset and make predictions about whether it would go up or down. Simultaneously, the subjects interacted with an “expert” who was also making predictions.
Half the time, subjects were shown a photo of a person on their computer screen and told that they were observing that person’s predictions. The other half of the time, the subjects were told they were observing predictions from a computer algorithm, and instead of a face, an abstract logo appeared on their screen. However, in every case, the subjects were interacting with a computer algorithm—one programmed to make correct predictions 30, 40, 60, or 70 percent of the time.
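Such an agent is simple to sketch: it reports the true move correctly with a fixed probability. The interface below is my own assumption, not the study's code:

```python
import random

def make_agent(accuracy, seed=0):
    """An 'expert' that predicts the true up/down move correctly with
    the stated probability, like the 30/40/60/70%-accurate algorithms
    described above."""
    rng = random.Random(seed)
    def predict(true_move):
        if rng.random() < accuracy:
            return true_move
        return "down" if true_move == "up" else "up"
    return predict

market = random.Random(1)
moves = [market.choice(["up", "down"]) for _ in range(1000)]
agent70 = make_agent(0.70)
hits = sum(agent70(m) == m for m in moves)  # roughly 700 of the 1000
```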
Subjects’ trust in the expertise of agents, whether “human” or not, was measured by how frequently the subjects bet on the agents’ predictions, as well as by how those bets changed over time as the subjects observed more of the agents’ predictions and their subsequent accuracy.
This trust, the researchers found, turned out to be strongly linked to the accuracy of the subjects’ own predictions of the ups and downs of the asset’s value.
"We often speculate on what we would do in a similar situation when we are observing others—what would I do if I were in their shoes?" explains Erie D. Boorman, formerly a postdoctoral fellow at Caltech and now a Sir Henry Wellcome Research Fellow at the Centre for FMRI of the Brain at the University of Oxford, and lead author on the study. "A growing literature suggests that we do this automatically, perhaps even unconsciously."
Indeed, the researchers found that subjects increasingly sided with both “human” agents and computer algorithms when the agents’ predictions matched their own. Yet this effect was stronger for “human” agents than for algorithms.
This asymmetry—between the value placed by the subjects on (presumably) human agents and on computer algorithms—was present both when the agents were right and when they were wrong, but it depended on whether or not the agents’ predictions matched the subjects’. When the agents were correct, subjects were more inclined to trust the human than the algorithm in the future when their predictions matched the subjects’ predictions. When they were wrong, human experts were easily and often “forgiven” for their blunders when the subject made the same error. But this “benefit of the doubt” vote, as Boorman calls it, did not extend to computer algorithms. In fact, when computer algorithms made inaccurate predictions, the subjects appeared to dismiss the value of the algorithm’s future predictions, regardless of whether or not the subject agreed with its predictions.
Since the sequence of predictions offered by “human” and algorithm agents was perfectly matched across different test subjects, this finding shows that the mere suggestion that we are observing a human or a computer leads to key differences in how and what we learn about them.
A major motivation for this study was to tease out the difference between two types of learning: what Rangel calls “reward learning” and “attribute learning.” “Computationally,” says Boorman, “these kinds of learning can be described in a very similar way: We have a prediction, and when we observe an outcome, we can update that prediction.”
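That shared computation is the classic delta rule: move the current estimate toward each observed outcome by a fraction of the prediction error. A minimal sketch covering both kinds of learning, with made-up outcome sequences:

```python
def delta_update(estimate, outcome, lr=0.1):
    """Nudge a prediction toward an observed outcome by a fraction of
    the prediction error (the shared computation Boorman describes)."""
    return estimate + lr * (outcome - estimate)

# Reward learning: tracking one's own expected payoff.
expected_reward = 0.0
for payoff in [1, 1, 0, 1]:
    expected_reward = delta_update(expected_reward, payoff)

# Attribute learning: tracking another agent's accuracy from hits (1)
# and misses (0), starting from an uninformed 50% guess.
expertise = 0.5
for correct in [1, 1, 1, 0, 1]:
    expertise = delta_update(expertise, correct)
```

Same update rule, different quantity being tracked; the fMRI result is that the two updates run in different brain networks.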
Reward learning, in which test subjects are given money or other valued goods in response to their own successful predictions, has been studied extensively. Social learning—specifically about the attributes of others (or so-called attribute learning)—is a newer topic of interest for neuroscientists. In reward learning, the subject learns how much reward they can obtain, whereas in attribute learning, the subject learns about some characteristic of other people.
This self/other distinction shows up in the subjects’ brain activity, as measured by fMRI during the task. Reward learning, says Boorman, “has been closely correlated with the firing rate of neurons that release dopamine”—a neurotransmitter involved in reward-motivated behavior—and brain regions to which they project, such as the striatum and ventromedial prefrontal cortex. Boorman and colleagues replicated previous studies in showing that this reward system made and updated predictions about subjects’ own financial reward. Yet during attribute learning, another network in the brain—consisting of the medial prefrontal cortex, anterior cingulate gyrus, and temporal parietal junction, which are thought to be a critical part of the mentalizing network that allows us to understand the state of mind of others—also made and updated predictions, but about the expertise of people and algorithms rather than their own profit.
The differences in fMRI activity between assessments of human and nonhuman agents were subtler. “The same brain regions were involved in assessing both human and nonhuman agents,” says Boorman, “but they were used differently.”
"Specifically, two brain regions in the prefrontal cortex—the lateral orbitofrontal cortex and medial prefrontal cortex—were used to update subjects’ beliefs about the expertise of both humans and algorithms," Boorman explains. "These regions show what we call a ‘belief update signal.’" This update signal was stronger when subjects agreed with the “human” agents than with the algorithm agents and they were correct. It was also stronger when they disagreed with the computer algorithms than when they disagreed with the “human” agents and they were incorrect. This finding shows that these brain regions are active when assigning credit or blame to others.
"The kind of learning strategies people use to judge others based on their performance has important implications when it comes to electing leaders, assessing students, choosing role models, judging defendants, and so on," Boorman notes. Knowing how this process happens in the brain, says Rangel, "may help us understand to what extent individual differences in our ability to assess the competency of others can be traced back to the functioning of specific brain regions."

Assessing Others: Evaluating the Expertise of Humans and Computer Algorithms

How do we come to recognize expertise in another person and integrate new information with our prior assessments of that person’s ability? The brain mechanisms underlying these sorts of evaluations—which are relevant to how we make decisions ranging from whom to hire, whom to marry, and whom to elect to Congress—are the subject of a new study by a team of neuroscientists at the California Institute of Technology (Caltech).

In the study, published in the journal Neuron, Antonio Rangel, Bing Professor of Neuroscience, Behavioral Biology, and Economics, and his associates used functional magnetic resonance imaging (fMRI) to monitor the brain activity of volunteers as they moved through a particular task. Specifically, the subjects were asked to observe the shifting value of a hypothetical financial asset and make predictions about whether it would go up or down. Simultaneously, the subjects interacted with an “expert” who was also making predictions.

Half the time, subjects were shown a photo of a person on their computer screen and told that they were observing that person’s predictions. The other half of the time, the subjects were told they were observing predictions from a computer algorithm, and instead of a face, an abstract logo appeared on their screen. However, in every case, the subjects were interacting with a computer algorithm—one programmed to make correct predictions 30, 40, 60, or 70 percent of the time.

Subjects’ trust in the expertise of agents, whether “human” or not, was measured by the frequency with which the subjects made bets for the agents’ predictions, as well as by the changes in those bets over time as the subjects observed more of the agents’ predictions and their consequent accuracy.

This trust, the researchers found, turned out to be strongly linked to the accuracy of the subjects’ own predictions of the ups and downs of the asset’s value.

"We often speculate on what we would do in a similar situation when we are observing others—what would I do if I were in their shoes?" explains Erie D. Boorman, formerly a postdoctoral fellow at Caltech and now a Sir Henry Wellcome Research Fellow at the Centre for FMRI of the Brain at the University of Oxford, and lead author on the study. "A growing literature suggests that we do this automatically, perhaps even unconsciously."

Indeed, the researchers found that subjects increasingly sided with both “human” agents and computer algorithms when the agents’ predictions matched their own. Yet this effect was stronger for “human” agents than for algorithms.

This asymmetry—between the value placed by the subjects on (presumably) human agents and on computer algorithms—was present both when the agents were right and when they were wrong, but it depended on whether or not the agents’ predictions matched the subjects’. When the agents were correct, subjects were more inclined to trust the human than algorithm in the future when their predictions matched the subjects’ predictions. When they were wrong, human experts were easily and often “forgiven” for their blunders when the subject made the same error. But this “benefit of the doubt” vote, as Boorman calls it, did not extend to computer algorithms. In fact, when computer algorithms made inaccurate predictions, the subjects appeared to dismiss the value of the algorithm’s future predictions, regardless of whether or not the subject agreed with its predictions.

Since the sequence of predictions offered by “human” and algorithm agents was perfectly matched across different test subjects, this finding shows that the mere suggestion that we are observing a human or a computer leads to key differences in how and what we learn about them.

A major motivation for this study was to tease out the difference between two types of learning: what Rangel calls “reward learning” and “attribute learning.” “Computationally,” says Boorman, “these kinds of learning can be described in a very similar way: We have a prediction, and when we observe an outcome, we can update that prediction.”

Reward learning, in which test subjects are given money or other valued goods in response to their own successful predictions, has been studied extensively. Social learning—specifically about the attributes of others (or so-called attribute learning)—is a newer topic of interest for neuroscientists. In reward learning, the subject learns how much reward they can obtain, whereas in attribute learning, the subject learns about some characteristic of other people.
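The computational similarity Boorman describes can be sketched as a simple delta-rule update, in which a prediction moves toward each observed outcome in proportion to the prediction error. This is a generic illustration of that idea, not the specific model fit in the study; the learning rate and the example sequences are invented for the sketch.

```python
def update_prediction(prediction, outcome, learning_rate=0.1):
    """Delta rule: nudge the prediction toward the observed outcome
    in proportion to the prediction error."""
    prediction_error = outcome - prediction
    return prediction + learning_rate * prediction_error

# Reward learning: estimating the payoff of one's own choices.
expected_reward = 0.0
for payoff in [1.0, 1.0, 0.0, 1.0]:
    expected_reward = update_prediction(expected_reward, payoff)

# Attribute learning: estimating another agent's expertise from
# whether its predictions turn out correct (1) or incorrect (0).
estimated_expertise = 0.5
for correct in [1, 1, 0, 1]:
    estimated_expertise = update_prediction(estimated_expertise, correct)
```

The same update equation serves both cases; what differs is what the prediction is *about* — one's own reward versus another agent's competence.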

This self/other distinction shows up in the subjects’ brain activity, as measured by fMRI during the task. Reward learning, says Boorman, “has been closely correlated with the firing rate of neurons that release dopamine”—a neurotransmitter involved in reward-motivated behavior—and with the brain regions to which those neurons project, such as the striatum and ventromedial prefrontal cortex. Boorman and colleagues replicated previous studies showing that this reward system made and updated predictions about the subjects’ own financial reward. During attribute learning, however, another network—the medial prefrontal cortex, anterior cingulate gyrus, and temporoparietal junction, thought to be a critical part of the mentalizing network that allows us to understand the states of mind of others—also made and updated predictions, but about the expertise of the people and algorithms rather than about the subjects’ own profit.

The differences in fMRI activity between assessments of human and nonhuman agents were subtler. “The same brain regions were involved in assessing both human and nonhuman agents,” says Boorman, “but they were used differently.”

"Specifically, two brain regions in the prefrontal cortex—the lateral orbitofrontal cortex and medial prefrontal cortex—were used to update subjects’ beliefs about the expertise of both humans and algorithms," Boorman explains. "These regions show what we call a ‘belief update signal.’" When the agents were correct, this update signal was stronger for “human” agents the subjects had agreed with than for algorithms; when the agents were incorrect, it was stronger for algorithms the subjects had disagreed with than for “human” agents. This pattern shows that these brain regions are active when assigning credit or blame to others.

"The kind of learning strategies people use to judge others based on their performance has important implications when it comes to electing leaders, assessing students, choosing role models, judging defendants, and so on," Boorman notes. Knowing how this process happens in the brain, says Rangel, "may help us understand to what extent individual differences in our ability to assess the competency of others can be traced back to the functioning of specific brain regions."

Filed under decision making predictions brain activity learning prefrontal cortex neuroscience science

538 notes

Why Do Our Brains Sometimes Mess Up Simple Calculations?

If the human brain is comparable to a computer, why does it so often make mistakes that its electronic counterpart does not? New research suggests it all has to do with how various problems are presented.
Scientists like to make this comparison because both the human brain and a computer follow sets of rules to make decisions, communicate and perform other tasks. However, University of Wisconsin-Madison cognitive scientist and psychology professor Gary Lupyan said people can get tripped up on even the simplest logic problems because they get caught up in contextual information.
For example, even a simple task like determining whether a number is odd or even can be tricky under the right circumstances. Lupyan said that a significant minority of people, even well-educated ones, can mistake a number such as 798 for an odd number – because, even though deep down we know that only the last digit determines whether a number is even or odd, we can be fooled by the presence of two odd digits.
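The purely rule-based check described here ignores every digit but the last – the kind of rule a computer executes without being distracted by context. A minimal sketch in Python:

```python
def is_odd(n: int) -> bool:
    """A number is odd iff its last digit is odd, no matter how many
    odd digits appear elsewhere in it."""
    return n % 2 == 1

# 798 contains two odd digits (7 and 9), but its last digit is even,
# so the rule classifies it as even.
print(is_odd(798))  # False
print(is_odd(797))  # True
```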
“Most of us would attribute an error like that to carelessness, or not paying attention, but some errors may appear more often because our brains are not as well equipped to solve purely rule-based problems,” the professor, whose work appears in a recent edition of the journal Cognition, explained in a statement Friday.
In multiple trials involving such tasks as sorting numbers, shapes and even people into easy categories like evens, triangles and grandmothers, Lupyan found study participants often broke simple rules based on context.
For instance, when asked to consider a contest that was open only to grandmothers, in which each eligible individual had an equal chance of winning, the subjects believed a 68-year-old woman with six grandchildren was more likely to emerge victorious than a 39-year-old woman with a single newborn grandchild.
“Even though people can articulate the rules, they can’t help but be influenced by perceptual details,” he explained. “Thinking of triangles tends to involve thinking of typical, equilateral sorts of triangles. It is difficult to focus on just the rules that make a shape a triangle, regardless of what it looks like exactly.”
Lupyan said that in many cases, overlooking these types of rules is not detrimental at all – in fact, doing so can be beneficial when it comes to evaluating unfamiliar things. The lone exception, he said, is mathematics, where following the rules is unequivocally necessary in order to achieve a successful outcome.
“After all, although some people may mistakenly think that 798 is an odd number, not only can people follow such rules – though not always perfectly – we are capable of building computers that can execute such rules perfectly,” Lupyan said. “That itself required very precise, mathematical cognition. A big question is where this ability comes from and why some people are better at formal rules than other people.”
He added this issue could be especially important to math and science teachers: “Students approach learning with biases shaped both by evolution and day-to-day experience. Rather than treating errors as reflecting lack of knowledge or as inattention, trying to understand their source may lead to new ways of teaching rule-based systems while making use of the flexibility and creative problem solving at which humans excel.”

Filed under decision making perception mental representations human algorithms neuroscience science

144 notes

Heads or tails? Random fluctuations in brain cell activity may determine toss-up decisions

Life presents us with choices all the time: salad or pizza for lunch? Tea or coffee afterward? How we make these everyday decisions has been a topic of great interest to economists, who have devised theories about how we assign values to our options and use those values to make decisions.

An emerging field of study known as neuroeconomics is combining the economists’ insights with scientific study of the brain to learn more about decision-making processes and how they can go awry. In the Dec. 8 issue of Neuron, one of the field’s founders reports new links between brain cell activity and choices where two options have equal appeal.

“Neuroeconomics is not only helpful for the development of better economic theory, it is also relevant from a clinical point of view,” said author Camillo Padoa-Schioppa, PhD, assistant professor of neurobiology, economics and of biomedical engineering at Washington University School of Medicine in St. Louis. “There are a number of conditions that involve impaired economic decision-making, including drug addiction, brain injury, some forms of dementia, schizophrenia and obsessive-compulsive disorder.”

Scientists know that the orbitofrontal cortex, a region of the brain behind and above the eyes, plays a key role in making decisions. Patients with injuries to this part of the brain are often spectacularly bad at making decisions. They may do things like abandon longstanding relationships, gamble away money or lose it to swindlers, or become addicted to drugs.

To study the roles brain cells play in decision-making, Padoa-Schioppa developed a system for presenting primates with a choice between two drinks, such as grape juice or apple juice. The type and amount of the drinks vary, and researchers record the activity of individual neurons as the primates choose.

Based on the decisions of a single animal over multiple trials, scientists infer the subjective value the animal assigns to each drink and then look for ways this value is encoded in brain cells.

“For example, if we offer a larger amount of apple juice versus a smaller amount of grape juice, and the primate chooses each option equally often, we infer that this primate likes the grape juice better than the apple juice,” he explained. “The primate could be getting more juice by choosing the cup with apple juice, but it doesn’t always do so. That implies that the primate values grape juice more than apple juice.”
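The inference behind this reasoning can be illustrated with a toy calculation: if the animal is indifferent between some quantity of drink A and a smaller quantity of drink B (choosing each equally often), the relative value of B is the ratio of the two quantities. The function and the numbers below are illustrative, not data from the experiment.

```python
def relative_value(indifference_qty_a, indifference_qty_b):
    """If a subject chooses each option equally often when offered
    qty_a of drink A versus qty_b of drink B, then per unit,
    drink B is worth qty_a / qty_b units of drink A."""
    return indifference_qty_a / indifference_qty_b

# Indifferent between 3 drops of apple juice and 1 drop of grape juice:
# grape juice is valued about 3x as much as apple juice, per drop.
print(relative_value(3, 1))  # 3.0
```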

In 2006, Padoa-Schioppa and Harvard colleague John Assad, PhD, won international attention for using this system to identify brain cells whose firing rates encoded the subjective value of drink choices.

In a new analysis of data from the original experiment, Padoa-Schioppa showed that different groups of cells in the orbitofrontal cortex reflect different stages of the decision-making process.

“Some neurons encode the value of individual drinks; other neurons encode the choice outcome in a binary way ‒ these cells are either firing or silent depending on the chosen drink,” he explained. “Yet other neurons encode the value of the chosen option.”

Padoa-Schioppa then examined how different groups of cells determine decisions between options of equal value. He showed that toss-up decisions seemed to depend on changes in the initial state of the network of neurons in the orbitofrontal cortex.

“The fluctuations in the network took place before the primates were even offered a choice of juices, but they seem to somehow bias the decision,” Padoa-Schioppa said. “Neuronal signals are always noisy. In essence, close-call decisions are partly determined by random noise.”
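The idea that random fluctuations settle close calls can be illustrated with a toy simulation: each option's value signal is corrupted by noise, and the larger noisy signal wins. This is a generic sketch of noise-driven choice, not the authors' network model; the noise level is an arbitrary assumption.

```python
import random

def choose(value_a, value_b, noise_sd=0.2, rng=random):
    """Pick the option whose noisy value signal is larger.
    When value_a == value_b, the random fluctuation alone decides."""
    signal_a = value_a + rng.gauss(0, noise_sd)
    signal_b = value_b + rng.gauss(0, noise_sd)
    return "A" if signal_a > signal_b else "B"

rng = random.Random(0)
# Two equally valued options: choices split close to 50/50 overall,
# but each individual choice is settled by the noise.
choices = [choose(1.0, 1.0, rng=rng) for _ in range(1000)]
print(choices.count("A"))  # close to 500
```

When one option is clearly more valuable, the noise rarely overturns it; noise dominates only near indifference, matching the toss-up behavior described above.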

He also found that decisions on choices of equal value were linked to the ease or difficulty with which nerve cells in parts of the orbitofrontal cortex communicate with each other. This property, known as synaptic efficacy, can be adjusted by the brain as part of the process of encoding information.

According to Padoa-Schioppa, these results provide new insights into the neuronal circuits that underlie economic decisions. He and his colleagues are using them to create a computational model of decision-making.

“The next step is to test that model,” Padoa-Schioppa said. “For example, we would like to bias decisions by artificially manipulating the activity of specific groups of cells.”

(Source: news.wustl.edu)

Filed under decision making orbitofrontal cortex neural activity neurons neuroscience science