Neuroscience

Articles and news from the latest research reports.

Posts tagged science

140 notes

Insect diet helped early humans build bigger brains

Figuring out how to survive on a lean-season diet of hard-to-reach ants, slugs and other bugs may have spurred the development of bigger brains and higher-level cognitive functions in the ancestors of humans and other primates, suggests research from Washington University in St. Louis.

“Challenges associated with finding food have long been recognized as important in shaping evolution of the brain and cognition in primates, including humans,” said Amanda D. Melin, PhD, assistant professor of anthropology in Arts & Sciences and lead author of the study.

“Our work suggests that digging for insects when food was scarce may have contributed to hominid cognitive evolution and set the stage for advanced tool use.”

Based on a five-year study of capuchin monkeys in Costa Rica, the research provides support for an evolutionary theory that links the development of sensorimotor intelligence (SMI) skills, such as increased manual dexterity, tool use and innovative problem solving, to the creative challenges of foraging for insects and other foods that are buried, embedded or otherwise hard to procure.

Published in the June 2014 Journal of Human Evolution, the study is the first to provide detailed evidence from the field on how seasonal changes in food supplies influence the foraging patterns of wild capuchin monkeys.

The study is co-authored by biologist Hilary C. Young and anthropologists Krisztina N. Mosdossy and Linda M. Fedigan, all from the University of Calgary, Canada.

It notes that many human populations also eat embedded insects on a seasonal basis and suggests that this practice played a key role in human evolution.

“We find that capuchin monkeys eat embedded insects year-round but intensify their feeding seasonally, during the time that their preferred food – ripe fruit – is less abundant,” Melin said. “These results suggest embedded insects are an important fallback food.”

Previous research has shown that fallback foods help shape the evolution of primate body forms, including the development of strong jaws, thick teeth and specialized digestive systems in primates whose fallback diets rely mainly on vegetation.

This study suggests that fallback foods can also play an important role in shaping brain evolution among primates that fall back on insect-based diets, and that this influence is most pronounced among primates that evolve in habitats with wide seasonal variations, such as the wet-dry cycles found in some South American forests.

“Capuchin monkeys are excellent models for examining the evolution of brain size and intelligence; for their small body size, they have impressively large brains,” Melin said. “Accessing hidden and well-protected insects living in tree branches and under bark is a cognitively demanding task, but provides a high-quality reward: fat and protein, which is needed to fuel big brains.”

But when it comes to using tools, not all capuchin monkey strains and lineages are created equal, and Melin’s theories may explain why.

Perhaps the most notable difference between the robust (tufted, genus Sapajus) and gracile (untufted, genus Cebus) capuchin lineages is their variation in tool use. While Cebus monkeys are known for clever food-foraging tricks, such as banging snails or fruits against branches, they can’t hold a stick to their Sapajus cousins when it comes to the innovative use and modification of sophisticated tools.

One explanation, Melin said, is that Cebus capuchins have historically and consistently occupied tropical rainforests, whereas the Sapajus lineage spread from their origins in the Atlantic rainforest into drier, more temperate and seasonal habitat types.

“Primates who extract foods in the most seasonal environments are expected to experience the strongest selection in the ‘sensorimotor intelligence’ domain, which includes cognition related to object handling,” Melin said. “This may explain the occurrence of tool use in some capuchin lineages, but not in others.”

Genetic analysis of mitochondrial chromosomes suggests that the Sapajus-Cebus diversification occurred millions of years ago in the late Miocene epoch.

“We predict that the last common ancestor of Cebus and Sapajus had a level of SMI more closely resembling extant Cebus monkeys, and that further expansion of SMI evolved in the robust lineage to facilitate increased access to varied embedded fallback foods, necessitated by more intense periods of fruit shortage,” she said.

One of the more compelling modern examples of this behavior, said Melin, is the seasonal consumption of termites by chimpanzees, whose use of tools to extract this protein-rich food source is an important survival technique in harsh environments.

What does this all mean for hominids?

While it’s hard to decipher the extent of seasonal dietary variations from the fossil record, stable isotope analyses indicate seasonal variation in diet for at least one South African hominin, Paranthropus robustus. Other isotopic research suggests that early human diets may have included a range of extractable foods, such as termites, plant roots and tubers.

Modern humans frequently consume insects, which are seasonally important when other animal foods are limited.

This study suggests that the ingenuity required to survive on a diet of elusive insects has been a key factor in the development of uniquely human skills:

It may well have been bugs that helped build our brains.

Filed under primates sensorimotor intelligence evolution tool use problem solving neuroscience science

134 notes

The Biology of Addiction Risk Looks Like Addiction

Research suggests that people at increased risk for developing addiction share many of the same neurobiological signatures of people who have already developed addiction. This similarity is to be expected, as individuals with family members who have struggled with addiction are over-represented in the population of addicted people.

However, a generation of animal research supports the hypothesis that the addiction process changes the brain in ways that converge with the distinctive neurobiology of the heritable risk for addiction. In other words, the more one uses addictive substances, the more one’s brain acquires the profile of someone who has inherited a risk for addiction.

One such change is a reduction in striatal dopamine release. Dopamine is a key brain chemical messenger involved in reward-related behaviors. Disturbances in dopamine signaling appear to contribute to reward processing that biases people to seek drug-like rewards and to develop drug-taking habits.

In the current issue of Biological Psychiatry, researchers at McGill University report that individuals at high risk for addiction show the same reduced dopamine response often observed in addicted individuals, identifying a new link between addiction risk and addiction in humans.

Dr. Marco Leyton and his colleagues recruited young adults, aged 18 to 25, who were classified into three groups: 1) a high-risk group of occasional stimulant users with an extensive family history of substance abuse; 2) a comparison group of occasional stimulant users with no family history; and 3) a second comparison group of individuals with no history of stimulant use and no known risk factors for addiction. Volunteers underwent a positron emission tomography (PET) scan involving the administration of amphetamine, which enabled the researchers to measure their dopamine response.
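For readers unfamiliar with amphetamine-challenge PET, dopamine release is commonly inferred from how much the binding of a D2/D3 radiotracer drops after the drug, since released dopamine competes with the tracer at the receptor. The sketch below shows that bookkeeping only; the tracer, numbers and function name are illustrative assumptions, not details taken from the McGill study.

def dopamine_release_index(bp_baseline, bp_post_amphetamine):
    """Percent drop in radiotracer binding potential after amphetamine.

    A bigger drop implies more dopamine release, because endogenous dopamine
    displaces the D2/D3 tracer (e.g. [11C]raclopride) from striatal receptors.
    Values and tracer are illustrative, not taken from the study.
    """
    return 100.0 * (bp_baseline - bp_post_amphetamine) / bp_baseline

# Hypothetical numbers: a blunted response versus a larger one
print(dopamine_release_index(2.4, 2.3))   # ~4.2%  (blunted, high-risk-like)
print(dopamine_release_index(2.4, 2.1))   # ~12.5% (larger release)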

The authors found that the high-risk group of non-dependent young adults with extensive family histories of addiction displayed markedly reduced dopamine responses in comparison with both stimulant-naïve subjects and non-dependent users with no family history.

“This interesting new parallel between addiction risk and addiction may help to focus our attention on reward-related processes that contribute to the development of addiction, perhaps informing prevention strategies,” said Dr. John Krystal, Editor of Biological Psychiatry.

Leyton, a Professor at McGill University, said, “Young adults at risk of addictions have a strikingly disturbed brain dopamine reward system response when they are administered amphetamine. Past drug use seemed to aggravate the dopamine response also but this was not a sufficient explanation. Instead, the disturbance may be a heritable biological marker that could identify those at highest risk.”

This finding suggests that there are common brain mechanisms that promote the use of addictive substances in vulnerable people and in people who have long-standing habitual substance use.

Better understanding this biology may help to advance our understanding of how people develop addiction problems, as well as providing hints related to biological mechanisms that might be targeted for prevention and treatment.

(Source: elsevier.com)

Filed under addiction reward system dopamine neuroscience science

352 notes

Addiction starts with an overcorrection in the brain

The National Institutes of Health has turned to neuroscientists at the nation’s most “Stone Cold Sober” university for help finding ways to treat drug and alcohol addiction.

Brigham Young University professor Scott Steffensen and his collaborators have published three new scientific papers that detail the brain mechanisms involved with addictive substances. And the NIH thinks Steffensen’s on the right track, as evidenced by a $2-million grant that will help fund projects in his BYU lab for the next five years.

“Addiction is a brain disease that could be treated like any other disease,” Steffensen said. “I wouldn’t be as motivated to do this research, or as passionate about the work, if I didn’t think a cure was possible.” 

Steffensen’s research suggests that the process of a brain becoming addicted is similar to a driver overcorrecting a vehicle. When drugs and alcohol release unnaturally high levels of dopamine in the brain’s pleasure system, oxidative stress occurs in the brain.

Steffensen and his collaborators have found that the brain responds by generating a protein called BDNF (brain-derived neurotrophic factor). This correction suppresses the brain’s normal production of dopamine long after someone comes down from a high. Not having enough dopamine is what causes the pains, distress and anxiety of withdrawal.
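The driving analogy can be made concrete with a toy feedback simulation (purely illustrative: this is not Steffensen’s model, and every constant below is made up). A drug pushes dopamine above baseline, a slow compensatory “brake” standing in for the BDNF-mediated change accumulates in proportion to the excess, and once the drug is gone dopamine undershoots baseline, mimicking withdrawal, before slowly recovering as the brake decays.

# Toy "overcorrection" simulation: a drug pushes dopamine above baseline,
# a slow compensatory brake (standing in for the BDNF-mediated change)
# builds up with the excess, and dopamine undershoots baseline after the
# drug is removed. Every constant here is invented for illustration only.
baseline = 1.0
dopamine, brake = baseline, 0.0
for t in range(200):
    drug = 2.0 if 20 <= t < 60 else 0.0                   # drug present for a while
    brake = 0.99 * (brake + 0.05 * max(dopamine - baseline, 0.0))
    dopamine = max(0.0, baseline + drug - brake)           # brake pulls dopamine down
    if t in (10, 40, 70, 150):
        print(t, round(dopamine, 2), round(brake, 2))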

“The body attempts to compensate for unnatural levels of dopamine, but a pathological process occurs,” Steffensen said. “We think it all centers around a subset of neurons that ordinarily put the brakes on dopamine release.”

Undergraduate students work in Steffensen’s lab alongside post-doctoral fellows and graduate students. Jennifer Blanchard Mabey, a graduate student in neuroscience, co-authored a paper about withdrawal that is in the current issue of The Journal of Neuroscience.

“It’s rewarding to see that your research efforts place another small piece in the enormous addiction puzzle,” said Mabey.

A separate study, co-authored by Steffensen and Ph.D. candidates Nathan Schilaty and David Hedges, explains how nicotine and alcohol interact in the brain.

“Addiction is a huge concern in our society and is very misunderstood,” Schilaty said. “Our research is helping us to formulate ideas on how we can better help these individuals through non-invasive and non-pharmacological means.”

Eun Young Jang, a post-doctoral fellow in Steffensen’s lab, authored a third paper for Addiction Biology describing the effects of cocaine addiction on the brain’s reward circuitry.

In these three research papers, dopamine is the common thread.

“I am optimistic that in the near future medical science will be able to reverse the brain changes in dopamine transmission that occur with drug dependence and return an ‘addict’ to a relatively normal state,” Steffensen said. “Then the addict will be in a better position to make rational decisions regarding their behavior and will be empowered to remain drug free.”

Filed under addiction brain-derived neurotrophic factor opiates dopamine neuroscience science

97 notes

Research Links Alzheimer’s Disease to Brain Hyperactivity

Patients with Alzheimer’s disease run a high risk of seizures. While the amyloid-beta protein involved in the development and progression of Alzheimer’s seems the most likely cause for this neuronal hyperactivity, how and why this elevated activity takes place hasn’t yet been explained — until now.


A new study by Tel Aviv University researchers, published in Cell Reports, pinpoints the precise molecular mechanism that may trigger an enhancement of neuronal activity in Alzheimer’s patients, which subsequently damages memory and learning functions. The research team, led by Dr. Inna Slutsky of TAU’s Sackler Faculty of Medicine and Sagol School of Neuroscience, discovered that the amyloid precursor protein (APP), in addition to its well-known role in producing amyloid-beta, also constitutes the receptor for amyloid-beta. According to the study, the binding of amyloid-beta to pairs of APP molecules triggers a signalling cascade, which causes elevated neuronal activity.

Elevated activity in the hippocampus — the area of the brain that controls learning and memory — has been observed in patients with mild cognitive impairment and early stages of Alzheimer’s disease. Hyperactive hippocampal neurons, which precede amyloid plaque formation, have also been observed in mouse models with early onset Alzheimer’s disease. “These are truly exciting results,” said Dr. Slutsky. “Our work suggests that APP molecules, like many other known cell surface receptors, may modulate the transfer of information between neurons.”

With the understanding of this mechanism, the potential for restoring memory and protecting the brain is greatly increased.

Building on earlier research

The research project was launched five years ago, following the researchers’ discovery of the physiological role played by amyloid-beta, previously known as an exclusively toxic molecule. The team found that amyloid-beta is essential for the normal day-to-day transfer of information through the nerve cell networks. If the level of amyloid-beta is even slightly increased, it causes neuronal hyperactivity and greatly impairs the effective transfer of information between neurons.

In the search for the underlying cause of neuronal hyperactivity, TAU doctoral student Hilla Fogel and postdoctoral fellow Samuel Frere found that while unaffected “normal” neurons became hyperactive following a rise in amyloid-beta concentration, neurons lacking APP did not respond to amyloid-beta. “This finding was the starting point of a long journey toward decoding the mechanism of APP-mediated hyperactivity,” said Dr. Slutsky.

The researchers, collaborating with Prof. Joel Hirsch of TAU’s Faculty of Life Sciences, Prof. Dominic Walsh of Harvard University, and Prof. Ehud Isacoff of University of California Berkeley, harnessed a combination of cutting-edge high-resolution optical imaging, biophysical methods and molecular biology to examine APP-dependent signalling in neural cultures, brain slices, and mouse models. Using highly sensitive biophysical techniques based on fluorescence resonance energy transfer (FRET) between fluorescent proteins in close proximity, they discovered that APP exists as a dimer at presynaptic contacts, and that the binding of amyloid-beta triggers a change in the APP-APP interactions, leading to an increase in calcium flux and higher glutamate release — in other words, brain hyperactivity.
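FRET works as a molecular ruler because energy transfer between two fluorophores falls off steeply with distance, so a signal appears only when the tagged APP molecules are within a few nanometres of each other. The snippet below shows the two textbook relations usually involved, efficiency from donor quenching and efficiency as a function of distance; the Förster radius used is a typical illustrative value, not a figure from the paper.

def fret_efficiency_from_donor(f_donor_alone, f_donor_with_acceptor):
    """FRET efficiency estimated from donor quenching: E = 1 - F_DA / F_D."""
    return 1.0 - f_donor_with_acceptor / f_donor_alone

def fret_efficiency_from_distance(r_nm, r0_nm=5.0):
    """E = 1 / (1 + (r / R0)**6); R0 ~5 nm is a typical Forster radius
    for fluorescent-protein pairs (illustrative value)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

# Efficiency collapses just past the Forster radius, so a FRET signal means
# the two tagged APP molecules are effectively in molecular contact.
for r in (3.0, 5.0, 8.0):
    print(r, "nm ->", round(fret_efficiency_from_distance(r), 2))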

A new approach to protecting the brain

"We have now identified the molecular players in hyperactivity," said Dr. Slutsky. "TAU postdoctoral fellow Oshik Segev is now working to identify the exact spot where the amyloid-beta binds to APP and how it modifies the structure of the APP molecule. If we can change the APP structure and engineer molecules that interfere with the binding of amyloid-beta to APP, then we can break up the process leading to hippocampal hyperactivity. This may help to restore memory and protect the brain."

Previous studies by Prof. Lennart Mucke’s laboratory strongly suggest that a reduction in the expression level of “tau” (microtubule-associated protein), another key player in Alzheimer’s pathogenesis, rescues synaptic deficits and decreases abnormal brain activity in animal models. “It will be crucial to understand the missing link between APP and ‘tau’-mediated signalling pathways leading to hyperactivity of hippocampal circuits. If we can find a way to disrupt the positive signalling loop between amyloid-beta and neuronal activity, it may rescue cognitive decline and the conversion to Alzheimer’s disease,” said Dr. Slutsky.

(Source: aftau.org)

Filed under alzheimer's disease brain activity beta amyloid hippocampus hyperactivity neuroscience science

329 notes

Learn Dutch in your sleep

When you have learned words in another language, it may be worth listening to them again in your sleep. A study funded by the Swiss National Science Foundation (SNSF) has now shown that this method reinforces memory.

Reluctant students and sleepyheads take note: a study conducted at the universities of Zurich and Fribourg has shown that German-speaking students are better at remembering the meaning of newly learned Dutch words when they hear the words again in their sleep. “Our method is easy to use in daily life and can be adopted by anyone,” says study director and biopsychologist Björn Rasch. However, the results were obtained in strictly controlled laboratory conditions. It remains to be seen whether they can be successfully transferred to everyday situations.

Quiet playback

In their trial, which has been published in the journal “Cerebral Cortex”, Thomas Schreiner and Björn Rasch asked 60 volunteers to learn pairs of Dutch and German words at ten o’clock in the evening. Half of the volunteers then went to bed. While they slept, some of the Dutch words they had learned before going to bed were played back quietly enough not to awaken them. The remaining volunteers stayed awake to listen to the Dutch words on the playback.

The scientists awoke the sleeping volunteers at two in the morning, then tested everyone’s knowledge of the new words a little later. The group that had been asleep were better at remembering the German translations of the Dutch words they had heard in their sleep. The volunteers who had remained awake were unable to remember words they had heard on the playback any better than those they had not.
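The logic of the comparison can be written down in a few lines: each learned Dutch-German pair is either cued (replayed during the night) or uncued, and recall is scored separately for the two sets in the sleep and wake groups. The data and function below are hypothetical, shown only to make the design concrete.

from statistics import mean

def recall_rates(results):
    """results: list of (was_cued, recalled) pairs for one participant."""
    cued = [recalled for was_cued, recalled in results if was_cued]
    uncued = [recalled for was_cued, recalled in results if not was_cued]
    return mean(cued), mean(uncued)

# Hypothetical participants: the sleep group shows a cueing benefit, the wake group does not.
sleep_subject = [(True, True), (True, True), (True, False), (False, True), (False, False), (False, False)]
wake_subject = [(True, False), (True, True), (True, False), (False, True), (False, False), (False, True)]

print("sleep (cued, uncued):", recall_rates(sleep_subject))
print("wake  (cued, uncued):", recall_rates(wake_subject))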

Reinforcement of spontaneous activation

Schreiner and Rasch believe that their results provide further evidence that sleep helps memory, probably because the sleeping brain spontaneously activates previously learned subject matter. Playing this subject matter back during sleep can reinforce this activation process and thus improve recall. For example, a person who plays a memory card game to the scent of roses, and is then re-exposed to the same scent while asleep, is subsequently better at remembering where a particular card is in the stack, as Rasch was able to show in another study a few years ago.

Schreiner and Rasch have now observed the beneficial effect of sleep on learning foreign words. A certain amount of swotting is still needed, though. “You can only successfully activate words that you have learned before you go to sleep. Playing back words you don’t know while you’re asleep has no effect,” says Schreiner.

Filed under language sleep memory consolidation memory psychology neuroscience science

56 notes

(Image caption: The light grey coil on the left is a conventional, commercially available TMS coil. The black coil on the right is the new, innovative version designed to fit a smaller non-human primate’s cranium and work with the neural monitoring device. Credit: Photo courtesy of Warren Grill.)

Watching Individual Neurons Respond to Magnetic Therapy

Engineers and neuroscientists at Duke University have developed a method to measure the response of an individual neuron to transcranial magnetic stimulation (TMS) of the brain. The advance will help researchers understand the underlying physiological effects of TMS — a procedure used to treat psychiatric disorders — and optimize its use as a therapeutic treatment.

TMS uses magnetic fields created by electric currents running through a wire coil to induce neural activity in the brain. With the flip of a switch, researchers can cause a hand to move or influence behavior. The technique has long been used in conjunction with other treatments in the hopes of improving treatment for conditions including depression and substance abuse.

While studies have demonstrated the efficacy of TMS, the technique’s physiological mechanisms have long been lost in a “black box.” Researchers know what goes into the treatment and the results that come out, but do not understand what’s happening in between.

Part of the reason for this mystery lies in the difficulty of measuring neural responses during the procedure; the comparatively tiny activity of a single neuron is lost in the tidal wave of current being generated by TMS. But the new study demonstrates a way to remove the proverbial haystack.

The results were published online June 29 in Nature Neuroscience.

“Nobody really knows what TMS is doing inside the brain, and given that lack of information, it has been very hard to interpret the outcomes of studies or to make therapies more effective,” said Warren Grill, professor of biomedical engineering, electrical and computer engineering, and neurobiology at Duke. “We set out to try to understand what’s happening inside that black box by recording activity from single neurons during the delivery of TMS in a non-human primate. Conceptually, it was a very simple goal. But technically, it turned out to be very challenging.”

First, Grill and his colleagues in the Duke Institute for Brain Sciences (DIBS) engineered new hardware that could separate the TMS current from the neural response, which is thousands of times smaller. Once that was achieved, however, they discovered that their recording instrument was doing more than simply recording.

The TMS magnetic field was creating an electric current through the electrode measuring the neuron, raising the possibility that this current, instead of the TMS, was causing the neural response. The team had to characterize this current and make it small enough to ignore.

Finally, the researchers had to account for vibrations caused by the large current passing through the TMS device’s small coil of wire — a design problem in and of itself, because the typical TMS coil is too large for a non-human primate’s head. Because the coil is physically connected to the skull, the vibration was jostling the measurement electrode.

The researchers were able to compensate for each artifact, however, and see for the first time into the black box of TMS. They successfully recorded the action potentials of an individual neuron moments after TMS pulses and observed changes in its activity that significantly differed from activity following placebo treatments.
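One generic way to pull tiny spikes out from under a huge, pulse-locked artifact is template subtraction: average the recording across many pulses to estimate the stereotyped artifact, then subtract that template from each trial. This is a standard illustration rather than the Duke team’s exact method, which also relied on dedicated hardware and on characterising the electrode current and vibration described above.

import numpy as np

def remove_stim_artifact(trials):
    """trials: array of shape (n_pulses, n_samples), aligned to pulse onset.

    The artifact is locked to the pulse and nearly identical on every trial,
    while spiking is not, so the across-trial mean approximates the artifact.
    """
    artifact_template = trials.mean(axis=0)
    return trials - artifact_template            # residuals keep the neural signal

# Hypothetical demo: a large shared artifact plus small trial-specific activity
rng = np.random.default_rng(0)
t = np.arange(200)
artifact = 1000.0 * np.exp(-t / 20.0)                    # big stereotyped decay
trials = artifact + rng.normal(0.0, 1.0, (50, 200))      # tiny "neural" component
cleaned = remove_stim_artifact(trials)
print(round(float(trials.std()), 1), round(float(cleaned.std()), 3))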

Grill worked with Angel Peterchev, assistant professor in psychiatry and behavioral science, biomedical engineering, and electrical and computer engineering, on the design of the coil. The team also included Michael Platt, director of DIBS and professor of neurobiology, and Mark Sommer, a professor of biomedical engineering.

They demonstrated that the technique could be recreated in different labs. “So, any modern lab working with non-human primates and electrophysiology can use this same approach in their studies,” said Grill.

The researchers hope that many others will take their method and use it to reveal the effects TMS has on neurons. Once a basic understanding is gained of how TMS interacts with neurons on an individual scale, its effects could be amplified and the therapeutic benefits of TMS increased.

“Studies with TMS have all been empirical,” said Grill. “You could look at the effects and change the coil, frequency, duration or many other variables. Now we can begin to understand the physiological effects of TMS and carefully craft protocols rather than relying on trial and error. I think that is where the real power of this research is going to come from.”

Filed under transcranial magnetic stimulation neurons neuroscience science

302 notes

The secrets of children’s chatter: research shows boys and girls learn language differently

Experts believe language uses both a mental dictionary and a mental grammar. The mental ‘dictionary’ stores sounds, words and common phrases, while mental ‘grammar’ involves the real-time composition of longer words and sentences. For example, making a longer word ‘walked’ from a smaller one ‘walk’.

However, most research into understanding how these processes work has been carried out with adults.

“Most researchers agree that the way we use language in our minds involves both storing and real-time composition,” said lead researcher Dr Cristina Dye, a specialist in child language development at Newcastle University. “But a lot of the specifics about how this happens are unclear, such as identifying exactly which parts of language are stored and which are composed.

“Most research on this topic has concentrated on adults and we wanted to see if studying children could help us learn more about these processes.”

A test based around 29 irregular verbs and 29 regular verbs was presented to the young participants. Only verbs which would be known by eight-year-olds were used.

They were presented with two sentences. One featured the verb in the context of the sentence, with the second sentence containing a blank to allow the children to produce the past-tense form. For example: Every day I walk to school. Just like every day, yesterday I ____ to school.

The children were asked to produce the missing word as quickly and as accurately as possible and their response times were recorded. The results were then analysed to discover which words were stored or created in real-time.
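The storage-versus-composition inference usually hinges on frequency effects: forms retrieved whole from the mental dictionary get faster as the past-tense form itself becomes more common, whereas forms composed on the fly should show little such effect. A minimal sketch of that analysis, using invented response times rather than the study’s data, is below.

import numpy as np

def rt_frequency_slope(log_frequency, rt_ms):
    """Least-squares slope of response time against log past-tense frequency.

    A clearly negative slope is the classic signature of whole-form storage;
    composed forms should depend far less on the past form's own frequency.
    """
    slope, _intercept = np.polyfit(log_frequency, rt_ms, deg=1)
    return slope

# Invented example data, not the study's measurements
irregular_freq, irregular_rt = [1.0, 2.0, 3.0, 4.0], [950, 880, 820, 760]
regular_freq, regular_rt = [1.0, 2.0, 3.0, 4.0], [900, 895, 905, 890]

print("irregular slope:", round(rt_frequency_slope(irregular_freq, irregular_rt), 1))
print("regular slope:  ", round(rt_frequency_slope(regular_freq, regular_rt), 1))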

Results showed girls were more likely to memorise words and phrases (drawing on their mental dictionary), while boys more often used mental grammar, assembling these forms from smaller parts.

The findings could have implications for the way youngsters are taught in the classroom, believes Dr Dye, who is based in the Centre for Research in Linguistics and Language Sciences.

She said: “What we found as we carried out the study was that girls were far more likely to remember forms like ‘walked’ while boys relied much more on their mental grammar to compose ‘walked’ from ‘walk’ and ‘ed’. This fits in with previous research which has identified differences between the sexes when it comes to memorising facts and events, where girls also seem to have an advantage compared to boys.

“One interesting aside to this is that as girls often outperform boys at school, it could be that the curriculum is put together in a way which benefits the way girls learn. It may be worth further investigation to see if this is the case and if so, is there a way lessons could be changed so boys can get the most out of them too.”

Paper: Children’s Computation of Complex Linguistic Forms: A study of Frequency and Imageability Effects

(Image: Getty Images)

Filed under language memory children child development sex differences psychology neuroscience science

198 notes

Noninvasive brain control

Optogenetics, a technology that allows scientists to control brain activity by shining light on neurons, relies on light-sensitive proteins that can suppress or stimulate electrical signals within cells. This technique requires a light source to be implanted in the brain, where it can reach the cells to be controlled.

MIT engineers have now developed the first light-sensitive molecule that enables neurons to be silenced noninvasively, using a light source outside the skull. This makes it possible to do long-term studies without an implanted light source. The protein, known as Jaws, also allows a larger volume of tissue to be influenced at once.

This noninvasive approach could pave the way to using optogenetics in human patients to treat epilepsy and other neurological disorders, the researchers say, although much more testing and development is needed. Led by Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT, the researchers described the protein in the June 29 issue of Nature Neuroscience.

Optogenetics, a technique developed over the past 15 years, has become a common laboratory tool for shutting off or stimulating specific types of neurons in the brain, allowing neuroscientists to learn much more about their functions.

The neurons to be studied must be genetically engineered to produce light-sensitive proteins known as opsins, which are channels or pumps that influence electrical activity by controlling the flow of ions in or out of cells. Researchers then insert a light source, such as an optical fiber, into the brain to control the selected neurons.

Such implants can be difficult to insert, however, and can be incompatible with many kinds of experiments, such as studies of development, during which the brain changes size, or of neurodegenerative disorders, during which the implant can interact with brain physiology. In addition, it is difficult to perform long-term studies of chronic diseases with these implants.

Mining nature’s diversity

To find a better alternative, Boyden, graduate student Amy Chuong, and colleagues turned to the natural world. Many microbes and other organisms use opsins to detect light and react to their environment. Most of the natural opsins now used for optogenetics respond best to blue or green light.

Boyden’s team had previously identified two light-sensitive chloride ion pumps that respond to red light, which can penetrate deeper into living tissue. However, these molecules, found in the archaea Haloarcula marismortui and Haloarcula vallismortis, did not induce a strong enough photocurrent — an electric current in response to light — to be useful in controlling neuron activity.

Chuong set out to improve the photocurrent by looking for relatives of these proteins and testing their electrical activity. She then engineered one of these relatives by making many different mutants. The result of this screen, Jaws, retained its red-light sensitivity but had a much stronger photocurrent — enough to shut down neural activity.

“This exemplifies how the genomic diversity of the natural world can yield powerful reagents that can be of use in biology and neuroscience,” says Boyden, who is a member of MIT’s Media Lab and the McGovern Institute for Brain Research.

Using this opsin, the researchers were able to shut down neuronal activity in the mouse brain with a light source outside the animal’s head. The suppression occurred as deep as 3 millimeters in the brain, and was just as effective as that of existing silencers that rely on other colors of light delivered via conventional invasive illumination.
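Why red light matters can be seen with a rough Beer-Lambert-style calculation: intensity falls off roughly exponentially with depth in tissue, and the attenuation length is longer for red than for blue light. The attenuation lengths in the sketch below are illustrative assumptions, not measurements from the paper, but they show why an external red source can still reach neurons a few millimetres down.

import math

def fraction_remaining(depth_mm, attenuation_length_mm):
    """Beer-Lambert-style falloff: I(d) = I0 * exp(-d / attenuation length)."""
    return math.exp(-depth_mm / attenuation_length_mm)

# Attenuation lengths are illustrative assumptions, not values from the paper.
BLUE_MM, RED_MM = 0.5, 1.5
for depth in (1.0, 2.0, 3.0):
    print(f"{depth} mm: blue {fraction_remaining(depth, BLUE_MM):.3%}, "
          f"red {fraction_remaining(depth, RED_MM):.3%}")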

A key advantage to this opsin is that it could enable optogenetic studies of animals with larger brains, says Garret Stuber, an assistant professor of psychiatry and cell biology and physiology at the University of North Carolina at Chapel Hill.

“In animals with larger brains, people have had difficulty getting behavior effects with optogenetics, and one possible reason is that not enough of the tissue is being inhibited,” he says. “This could potentially alleviate that.”

Restoring vision

Working with researchers at the Friedrich Miescher Institute for Biomedical Research in Switzerland, the MIT team also tested Jaws’s ability to restore the light sensitivity of retinal cells called cones. In people with a disease called retinitis pigmentosa, cones slowly atrophy, eventually causing blindness.

Friedrich Miescher Institute scientists Botond Roska and Volker Busskamp have previously shown that some vision can be restored in mice by engineering those cone cells to express light-sensitive proteins. In the new paper, Roska and Busskamp tested the Jaws protein in the mouse retina and found that it more closely resembled the eye’s natural opsins and offered a greater range of light sensitivity, making it potentially more useful for treating retinitis pigmentosa.

This type of noninvasive approach to optogenetics could also represent a step toward developing optogenetic treatments for diseases such as epilepsy, which could be controlled by shutting off misfiring neurons that cause seizures, Boyden says. “Since these molecules come from species other than humans, many studies must be done to evaluate their safety and efficacy in the context of treatment,” he says.

Boyden’s lab is working with many other research groups to further test the Jaws opsin for other applications. The team is also seeking new light-sensitive proteins and is working on high-throughput screening approaches that could speed up the development of such proteins.

Filed under optogenetics brain activity opsins vision neuroscience science

1,567 notes

Neuroscience: The man who saw time stand still

One day, a man saw time itself stop, and as David Robson discovers, unpicking what happened is revealing that we can all experience temporal trickery too.

It started as a headache, but soon became much stranger. Simon Baker entered the bathroom to see if a warm shower could ease his pain. “I looked up at the shower head, and it was as if the water droplets had stopped in mid-air”, he says. “They came into hard focus rapidly, over the course of a few seconds”. Where you’d normally perceive the streams as more of a blur of movement, he could see each one hanging in front of him, distorted by the pressure of the air rushing past. The effect, he recalls, was very similar to the way the bullets travelled in the Matrix movies. “It was like a high-speed film, slowed down.”

The next day, Baker went to hospital, where doctors found that he had suffered an aneurysm. The experience was soon overshadowed by the more immediate threat to his health, but at a follow-up appointment he happened to mention it to his neurologist, Fred Ovsiew at Northwestern University in Chicago, who was struck by the vivid descriptions. “He was a very bright guy, and very eloquent,” says Ovsiew, who recently wrote about Baker in the journal NeuroCase. (Baker’s identity was anonymised, which is typical for such studies, so this is not his real name.)

Read more

Filed under zeitraffer phenomenon akinetopsia motion perception psychology neuroscience science

140 notes

Monkeys also believe in winning streaks

Humans have a well-documented tendency to see winning and losing streaks in situations that, in fact, are random. But scientists disagree about whether the “hot-hand bias” is a cultural artifact picked up in childhood or a predisposition deeply ingrained in the structure of our cognitive architecture.

Now in the first study in non-human primates of this systematic error in decision making, researchers find that monkeys also share our unfounded belief in winning and losing streaks. The results suggest that the penchant to see patterns that actually don’t exist may be inherited—an evolutionary adaptation that may have provided our ancestors a selective advantage when foraging for food in the wild, according to lead author Tommy Blanchard, a doctoral candidate in brain and cognitive sciences at the University of Rochester.

The cognitive bias may be difficult to override even in situations that are truly random. This inborn tendency to feel that we are on a roll or in a slump may help explain why gambling can be so alluring and why the stock market is so prone to wild swings, said coauthor Benjamin Hayden, assistant professor of brain and cognitive sciences at the University of Rochester.

Hayden, Blanchard, and Andreas Wilke, an assistant professor of psychology at Clarkson University, reported their findings in the July issue of the Journal of Experimental Psychology: Animal Learning and Cognition.

To measure whether monkeys actually believe in winning streaks, the researchers had to create a computerized game that was so captivating monkeys would want to play for hours. “Luckily, monkeys love to gamble,” said Blanchard. So the team devised a fast-paced task in which each monkey could choose right or left and receive a reward when they guessed correctly.

The researchers created three types of play, two with clear patterns (the correct answer tended to repeat on one side or to alternate from side to side) and a third in which the lucky pick was completely random. Where clear patterns existed, the three rhesus monkeys in the study quickly guessed the correct sequence. But in the random scenarios, the monkeys continued to make choices as if they expected a “streak”. In other words, even when rewards were random, the monkeys favored one side.

The monkeys showed the hot-hand bias consistently over weeks of play and an average of 1,244 trials per condition. “They had lots and lots of opportunities to get over this bias, to learn and change, and yet they continued to show the same tendency,” said Blanchard.
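
To make the setup concrete, here is a small simulation sketch — purely hypothetical, not the authors’ task or analysis code — of the random condition: a simulated player with an invented “win-stay” tendency keeps favoring the side that just paid off. The stay probability and trial counts below are assumptions chosen for illustration.

# Toy simulation of the random condition of a two-option guessing task.
# A hypothetical "win-stay" player repeats a just-rewarded side with some
# probability. Invented numbers throughout; not the study's task or analysis.
import random

random.seed(1)
N_TRIALS = 1244        # average trials per condition reported in the article
STAY_BIAS = 0.7        # hypothetical chance of repeating a side that just paid off

def simulate(n_trials, stay_bias):
    last_choice, last_won = None, False
    hits = trials_after_win = stays_after_win = 0
    for _ in range(n_trials):
        if last_won and random.random() < stay_bias:
            choice = last_choice                    # "hot hand": stick with the winner
        else:
            choice = random.choice(("left", "right"))
        target = random.choice(("left", "right"))   # random condition: no real pattern
        won = (choice == target)
        if last_won:                                # tally behavior on trials following a win
            trials_after_win += 1
            stays_after_win += (choice == last_choice)
        hits += won
        last_choice, last_won = choice, won
    return hits / n_trials, stays_after_win / max(trials_after_win, 1)

hit_rate, repeat_after_win = simulate(N_TRIALS, STAY_BIAS)
print(f"hit rate: {hit_rate:.3f} (chance is 0.500)")
print(f"P(repeat side | previous trial rewarded): {repeat_after_win:.3f}")

Running this shows the repeat rate after wins sitting well above one half while the overall hit rate stays at chance: the bias persists even though it earns nothing in a truly random game, which is the pattern the monkeys displayed.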

So why do monkeys and humans share this false belief in a run of luck even when faced over and over with evidence that the results are random? The authors speculate that the distribution of food in the wild, which is not random, may be the culprit. “If you find a nice juicy beetle on the underside of a log, this is pretty good evidence that there might be a beetle in a similar location nearby, because beetles, like most food sources, tend to live near each other,” explained Hayden.

Evolution has also primed our brains to look for patterns, added Hayden. “We have this incredible drive to see patterns in the world, and we also have this incredible drive to learn. I think it’s very related to why we like music, and why we like to do crossword puzzles, Sudoku, and things like that. If there’s a pattern there, we’re on top of it. And if there may or may not be a pattern there, that’s even more interesting.”

Understanding the hot-hand bias could inform treatment for gambling addiction and provide insights for investors, said Hayden. “If a belief in winning streaks is hardwired, then we may want to look for more rigorous retraining for individuals who cannot control their gambling. And investors should keep in mind that humans have an inherited bias to believe that if a stock goes up one day, it will continue to go up.”

The results also could provide nuance to our understanding of free will, said Blanchard, who was drawn to the study of decision making during prior graduate training in philosophy. “Biases in our decision-making mechanisms, like this bias towards belief in winning and losing streaks, say something really deep about what sorts of creatures we are. We often like to think we make decisions based only on the information we’re conscious of. But we’re not always aware of why we make certain decisions or believe certain things.

“We’re a complex mix of biases and heuristics and statistical reasoning. When you put it all together, that’s how you get sophisticated behavior. We don’t know where a lot of these biases come from, but this study—and others like it—suggest many of them are due to cognitive mechanisms we share with our primate relatives,” said Blanchard.

Filed under hot-hand fallacy decision making primates gambling psychology neuroscience science
