Neuroscience

Articles and news from the latest research reports.

Assessing Others: Evaluating the Expertise of Humans and Computer Algorithms

How do we come to recognize expertise in another person and integrate new information with our prior assessments of that person’s ability? The brain mechanisms underlying these sorts of evaluations—which are relevant to how we make decisions ranging from whom to hire, whom to marry, and whom to elect to Congress—are the subject of a new study by a team of neuroscientists at the California Institute of Technology (Caltech).
In the study, published in the journal Neuron, Antonio Rangel, Bing Professor of Neuroscience, Behavioral Biology, and Economics, and his associates used functional magnetic resonance imaging (fMRI) to monitor the brain activity of volunteers as they moved through a particular task. Specifically, the subjects were asked to observe the shifting value of a hypothetical financial asset and make predictions about whether it would go up or down. Simultaneously, the subjects interacted with an “expert” who was also making predictions.
Half the time, subjects were shown a photo of a person on their computer screen and told that they were observing that person’s predictions. The other half of the time, the subjects were told they were observing predictions from a computer algorithm, and instead of a face, an abstract logo appeared on their screen. However, in every case, the subjects were interacting with a computer algorithm—one programmed to make correct predictions 30, 40, 60, or 70 percent of the time.
Subjects’ trust in the expertise of agents, whether “human” or not, was measured by how often the subjects bet on the agents’ predictions, as well as by how those bets changed over time as the subjects observed more of the agents’ predictions and their consequent accuracy.
This trust, the researchers found, was strongly linked to the accuracy of the subjects’ own predictions of the ups and downs of the asset’s value.
"We often speculate on what we would do in a similar situation when we are observing others—what would I do if I were in their shoes?" explains Erie D. Boorman, formerly a postdoctoral fellow at Caltech and now a Sir Henry Wellcome Research Fellow at the Centre for FMRI of the Brain at the University of Oxford, and lead author on the study. "A growing literature suggests that we do this automatically, perhaps even unconsciously."
Indeed, the researchers found that subjects increasingly sided with both “human” agents and computer algorithms when the agents’ predictions matched their own. Yet this effect was stronger for “human” agents than for algorithms.
This asymmetry—between the value placed by the subjects on (presumably) human agents and on computer algorithms—was present both when the agents were right and when they were wrong, but it depended on whether or not the agents’ predictions matched the subjects’. When the agents were correct, subjects were more inclined to trust the human than the algorithm in the future if their predictions had matched the subjects’ own. When they were wrong, human experts were easily and often “forgiven” for their blunders when the subject made the same error. But this “benefit of the doubt” vote, as Boorman calls it, did not extend to computer algorithms. In fact, when computer algorithms made inaccurate predictions, the subjects appeared to discount the algorithm’s future predictions regardless of whether or not they had agreed with them.
Since the sequence of predictions offered by “human” and algorithm agents was perfectly matched across different test subjects, this finding shows that the mere suggestion that we are observing a human or a computer leads to key differences in how and what we learn about them.
A major motivation for this study was to tease out the difference between two types of learning: what Rangel calls “reward learning” and “attribute learning.” “Computationally,” says Boorman, “these kinds of learning can be described in a very similar way: We have a prediction, and when we observe an outcome, we can update that prediction.”
Reward learning, in which test subjects are given money or other valued goods in response to their own successful predictions, has been studied extensively. Social learning—specifically about the attributes of others (or so-called attribute learning)—is a newer topic of interest for neuroscientists. In reward learning, the subject learns how much reward they can obtain, whereas in attribute learning, the subject learns about some characteristic of other people.
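The computational similarity Boorman describes—make a prediction, observe an outcome, update the prediction—is the classic delta-rule (prediction-error) update. A minimal sketch in Python (illustrative only; the function, learning rate, and observation sequences here are hypothetical, not taken from the study):

```python
def delta_update(prediction, outcome, learning_rate=0.1):
    """Delta-rule update: move the prediction toward the observed
    outcome in proportion to the prediction error."""
    error = outcome - prediction          # prediction error
    return prediction + learning_rate * error

# Reward learning: estimate the expected payoff of one's own bets.
expected_reward = 0.5
for payoff in [1.0, 0.0, 1.0, 1.0]:      # hypothetical observed payoffs
    expected_reward = delta_update(expected_reward, payoff)

# Attribute learning: estimate another agent's accuracy the same way.
estimated_accuracy = 0.5
for correct in [1, 1, 0, 1]:             # 1 = the agent's prediction was right
    estimated_accuracy = delta_update(estimated_accuracy, float(correct))
```

The two loops are structurally identical—only what is being estimated differs (one's own reward versus another agent's attribute), which is exactly the computational parallel the quote draws.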
This self/other distinction shows up in the subjects’ brain activity, as measured by fMRI during the task. Reward learning, says Boorman, “has been closely correlated with the firing rate of neurons that release dopamine”—a neurotransmitter involved in reward-motivated behavior—and brain regions to which they project, such as the striatum and ventromedial prefrontal cortex. Boorman and colleagues replicated previous studies in showing that this reward system made and updated predictions about subjects’ own financial reward. Yet during attribute learning, another network in the brain—consisting of the medial prefrontal cortex, anterior cingulate gyrus, and temporal parietal junction, which are thought to be a critical part of the mentalizing network that allows us to understand the state of mind of others—also made and updated predictions, but about the expertise of people and algorithms rather than their own profit.
The differences in fMRI activity between assessments of human and nonhuman agents were subtler. “The same brain regions were involved in assessing both human and nonhuman agents,” says Boorman, “but they were used differently.”
"Specifically, two brain regions in the prefrontal cortex—the lateral orbitofrontal cortex and medial prefrontal cortex—were used to update subjects’ beliefs about the expertise of both humans and algorithms," Boorman explains. "These regions show what we call a ‘belief update signal.’" This update signal was stronger for “human” agents than for algorithm agents when the subjects agreed with them and the agents were correct. It was also stronger for computer algorithms than for “human” agents when the subjects disagreed with them and the agents were incorrect. This pattern shows that these brain regions are engaged when assigning credit or blame to others.
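One simple way to make a “belief update signal” concrete (a sketch only—this is a textbook Beta-Bernoulli model, not the model fitted in the paper) is Bayesian updating of a belief about an agent’s accuracy, treating each observed prediction as a Bernoulli outcome under a Beta prior:

```python
def update_belief(hits, misses, agent_correct):
    """Beta-Bernoulli update: the belief about an agent's accuracy is
    Beta(hits, misses); each observed prediction increments one count.
    Returns the updated counts and the posterior mean accuracy."""
    if agent_correct:
        hits += 1
    else:
        misses += 1
    # Posterior mean = expected probability the agent is correct next time.
    return hits, misses, hits / (hits + misses)

# Uniform prior Beta(1, 1); observe four predictions from one agent.
hits, misses = 1, 1
for correct in [True, True, False, True]:
    hits, misses, estimate = update_belief(hits, misses, correct)
```

The asymmetry the study reports—extra credit for a human who agrees and is right, harsher discounting for an algorithm that errs—would correspond to weighting these updates differently depending on agent type and on whether the subject agreed.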
"The kind of learning strategies people use to judge others based on their performance has important implications when it comes to electing leaders, assessing students, choosing role models, judging defendants, and so on," Boorman notes. Knowing how this process happens in the brain, says Rangel, "may help us understand to what extent individual differences in our ability to assess the competency of others can be traced back to the functioning of specific brain regions."

Filed under decision making predictions brain activity learning prefrontal cortex neuroscience science

Toward a Molecular Explanation for Schizophrenia
Surprisingly little is known about schizophrenia. It was only recognized as a medical condition in the past few decades, and its exact causes remain unclear. Since there is no objective test for schizophrenia, its diagnosis is based on an assortment of reported symptoms. The standard treatment, antipsychotic medication, works less than half the time and becomes increasingly ineffective over time.
Now, Prof. Illana Gozes — the Lily and Avraham Gildor Chair for the Investigation of Growth Factors, the director of the Adams Super Center for Brain Studies at the Sackler Faculty of Medicine, and a member of the Sagol School of Neuroscience at Tel Aviv University — has discovered that an important cell-maintenance process called autophagy is reduced in the brains of schizophrenia patients. The findings, published in the Nature journal Molecular Psychiatry, advance the understanding of schizophrenia and could enable the development of new diagnostic tests and drug treatments for the disease.
"We discovered a new pathway that plays a part in schizophrenia," said Prof. Gozes. "By identifying and targeting the proteins known to be involved in the pathway, we may be able to diagnose and treat the disease in new and more effective ways."
Graduate students Avia Merenlender-Wagner, Anna Malishkevich, and Zeev Shemer of TAU, Prof. Brian Dean and colleagues of the University of Melbourne, and Prof. Galila Agam and Joseph Levine of Ben Gurion University of the Negev and Beer Sheva’s Psychiatry Research Center and Mental Health Center collaborated on the research.
Mopping up
Autophagy is like the cell’s housekeeping service, cleaning up unnecessary and dysfunctional cellular components. The process — in which a membrane engulfs and consumes the clutter — is essential to maintaining cellular health. But when autophagy is blocked, it can lead to cell death. Several studies have tentatively linked blocked autophagy to the death of brain cells seen in Alzheimer’s disease.
Brain-cell death also occurs in schizophrenia patients, so Prof. Gozes and her colleagues set out to see whether blocked autophagy could be involved in the progression of that condition as well. They found RNA evidence of decreased levels of the protein beclin 1 in the hippocampus of schizophrenia patients; this brain region is central to learning and memory. Beclin 1 is central to initiating autophagy — its deficit suggests that the process is indeed blocked in schizophrenia patients. Developing drugs to boost beclin 1 levels and restart autophagy could offer a new way to treat schizophrenia, the researchers say.
"It is all about balance," said Prof. Gozes. "Paucity in beclin 1 may lead to decreased autophagy and enhanced cell death. Our research suggests that normalizing beclin 1 levels in schizophrenia patients could restore balance and prevent harmful brain-cell death."
Next, the researchers looked at protein levels in the blood of schizophrenia patients. They found no difference in beclin 1 levels, suggesting that the deficit is limited to the hippocampus. But the researchers also found increased levels of another protein, activity-dependent neuroprotective protein (ADNP), discovered by Prof. Gozes and shown to be essential for brain formation and function, in the patients’ white blood cells. Previous studies have shown that ADNP is also deregulated in the brains of schizophrenia patients.
The researchers think the body may boost ADNP levels to protect the brain when beclin 1 levels fall and autophagy is derailed. ADNP, then, could potentially serve as a biomarker, allowing schizophrenia to be diagnosed with a simple blood test.
An illuminating discovery
To further explore the involvement of ADNP in autophagy, the researchers ran a biochemical test on the brains of mice. The test showed that ADNP interacts with LC3, another key protein regulating autophagy — an interaction predicted by previous studies. In light of the newfound correlation between autophagy and schizophrenia, they believe that this interaction may constitute part of the mechanism by which ADNP protects the brain.
Prof. Gozes discovered ADNP in 1999 and carved a protein fragment, NAP, from it. NAP mimics the protein’s nerve-cell-protecting properties. In follow-up studies, Prof. Gozes helped develop the drug candidate davunetide (NAP). In Phase II clinical trials, davunetide (NAP) improved the ability of schizophrenia patients to cope with daily life. A recent collaborative effort by Prof. Gozes, Dr. Sandra Cardoso, and Dr. Raquel Esteves showed that NAP improved autophagy in cultures of brain-like cells. The current study further shows that NAP facilitates the interaction of ADNP and LC3, possibly accounting for NAP’s results in schizophrenia patients. The researchers hope NAP will be just the first of many discoveries to improve the understanding and treatment of schizophrenia.
(Image: Shutterstock)

Filed under schizophrenia autophagy hippocampus memory learning beclin 1 neuroscience science

Take note students: Mice that ‘cram’ for exams remember less
It’s been more than 100 years since German psychologist Hermann Ebbinghaus determined that learning interspersed with rest created longer-lasting memories than so-called cramming, or learning without rest intervals.
Yet it’s only much more recently that scientists have begun to understand the underlying molecular mechanisms for this phenomenon. In a study published Monday in the journal PNAS, researchers examined the physical changes in the brain cells of mice while “training” their eyes to keep track of a moving image.
Researchers examined the horizontal optokinetic response, or HOKR, in mice to determine what rest interval was best suited to increasing their memory.
HOKR is what makes it possible for a rider in a train to visually track the moving scenery. While the process is unconscious, it involves frequent, minute eye movements.
Mice were fastened to a device that immobilized their heads and then were made to look at a revolving, checkered image that triggered the eye response. A high-speed camera was used to determine when the tracking began and when it stopped.
While the eyes of lab mice are initially unable to track the revolving image at a high speed, they eventually adapt to faster and faster movement. This tracking ability is retained for a period of time before it is forgotten.
Some of the mice were allowed to rest between training sessions, while others were not. Researchers noted clear differences between the mice that were given rest time “spacing” and those that received no breaks, or “massed training.”
"One hour of spacing produced the highest memory retention at 24 hours, which lasted for one month," wrote lead study author Wajeeha Aziz, a molecular physiologist at the National Institute for Physiological Sciences in Okazaki, Japan, and her colleagues.
"Surprisingly, massed training also produced long-term memory…. However, this occurred slowly over days, and the memory lasted for only one week."
Researchers compared brain tissue from the two groups of trained mice with that of mice that received no training. They found that both groups of trained mice had a reduced number of synapses on a specific type of nerve cell, Purkinje neurons.
However, spacing the training appeared to make these structural changes in synapses occur more quickly, the authors said. 
"Further investigations are needed to elucidate the precise molecular mechanisms that regulate the temporal features of long-lasting memory, and the structural modifications of synapses provides an indispensable readout for such studies," the authors concluded.

Filed under memory synaptic plasticity learning LTM neuroscience science

The logistics of learning

Learning requires constant reconfiguration of the connections between nerve cells. Two new studies now yield new insights into the molecular mechanisms that underlie the learning process.

Learning and memory are made possible by the incessant reorganization of nerve connections in the brain. Both processes are based on targeted modifications of the functional interfaces between nerve cells – the so-called synapses – which alter their form, molecular composition and functional properties. In effect, connections between cells that are frequently co-activated are progressively altered so that they respond to subsequent signals more rapidly and more strongly. This way, information can be encoded in patterns of synaptic activity and promptly recalled when needed. The converse is also true: learned behaviors can be lost by disuse, because inactive synapses are themselves less likely to transmit an incoming impulse, leading to the decay of such connections.

How exactly an individual synapse is altered without simultaneously affecting nearby nerve cells or other synapses on the same cell is a question that is central to Michael Kiebler’s research. Kiebler, a biochemist, holds the Chair of Cell Biology in the Faculty of Medicine at LMU. “It is now clear that the changes take place in the cell that is stimulated by synaptic input – the post-synaptic cell – and in particular in its so-called dendritic spines,” he says, “and particles that are known as ‘neuronal RNA granules’ deliver mRNA molecules to these sites.” These mRNAs represent the blueprints for the synthesis of the proteins responsible for reconfiguring the synapses. Kiebler’s team has developed a model, which postulates that these granules migrate from dendrite to dendrite, and release their mRNAs specifically at sites that are repeatedly activated. This would ensure that the relevant proteins are synthesized only where they are needed within the cell.

In spite of the potential significance of the model, the molecular mechanisms required for its realization have remained obscure. mRNA-binding proteins, including Staufen2 (Stau2) and Barentsz, are essential components of the granules, and Kiebler’s team, in collaboration with Giulio Superti-Furga’s group (CeMM, Vienna), have now used specific antibodies to isolate and characterize neuronal granules that contain either Stau2 or Barentsz.

Surprising diversity

It has generally been assumed that all neuronal RNA granules have essentially similar compositions. However, the new findings indicate that this is not the case. A comparison between Stau2- and Barentsz-containing granules reveals that they differ in about two-thirds of their proteins. “This suggests that the RNA granules are highly heterogeneous and dynamic in their composition,” says Kiebler. “And that makes sense to me, because it would mean that the granules can perform different functions depending on which mRNAs they carry.” Furthermore, the researchers have shown that the granules contain virtually none of the factors known to promote the translation of mRNAs into proteins. On the contrary, they include many molecules that repress protein synthesis. This in turn implies that the process of mRNA transport is uncoupled from the subsequent production of the proteins they encode.

In a complementary study, Kiebler’s team also characterized the mRNA cargoes associated with the granules. “Until now, none of the RNA molecules present in Stau2-containing granules in mammalian nerve cells had been defined, but we have now been able to identify many specific mRNAs,” Kiebler explains. Further experiments revealed that Stau2 stabilizes the mRNAs, allowing them to be used more often for the production of proteins. Moreover, the researchers have shown that specialized structures within these mRNAs, called “Staufen-Recognized Structures” (SRS), are essential for their recognition and stabilization by Stau2. “This allows us to propose a molecular mechanism for RNA recognition for the first time,” says Kiebler.

Taken together, the two new papers (1, 2) provide novel insights into the molecular mechanisms that underlie learning and memory. The scientists now want to dissect the details in future studies. “In the long term, we are particularly interested in the question of how an activated synapse can alter the state of the granules and induce the production of protein,” Kiebler notes. It is becoming increasingly clear that RNA-binding proteins play essential roles in nerve cells, and that disruption of their action can lead to neurodegenerative diseases and neurological dysfunction. “Clearly, not only classical conditions such as Alzheimer’s or Parkinson’s disease, in which RNA-binding proteins are always involved, but also cognitive defects or age-associated impairment of learning ability must be viewed in this context,” Kiebler concludes.

(Source: en.uni-muenchen.de)

Filed under neurodegenerative diseases memory learning neurons synapses protein synthesis neuroscience science

147 notes

Brain research provides insight into language learning

Anyone who has tried to learn a second language knows how difficult it is to absorb new words and use them to accurately express ideas in a completely new cultural format. Now, research into some of the fundamental ways the brain accepts information and tags it could lead to new, more effective ways for people to learn a second language.


Tests have shown that the human brain uses the same neuron system to see an action and to understand an action described in language. Researchers at Arizona State University have been testing the boundaries of this hypothesis, which focuses on the operation of the mirror neuron system (MNS). The ASU group has found that the MNS can be modified by language use, and that the modification can slightly change visual perception.  

The work focuses on how the brain receives and classifies information that a person sees (an action, like one person giving another a pencil), and tests how the brain receives the information from a description of an action (simulation), like “Cameron gives Annagrace a pencil.”

“We tested the idea that the mirror neuron system, which is part of the motor system, is used in the simulation process,” said Arthur Glenberg, an ASU professor of psychology. “The MNS is active both when a person takes an action (e.g., giving a pencil), and when that action is observed (witnessing the pencil being given). Supposedly, the MNS allows us to infer the intentions of other people so that when Jane sees Cameron act, her MNS resonates, and then Jane understands why she would give Annagrace the pencil and infers that that is the reason why Cameron gives Annagrace the pencil.”

Glenberg, Noah Zarr, formerly an ASU psychology major and now a graduate student at Indiana University, and Ryan Ferguson, a graduate student in ASU’s Cognitive Science training area in the Department of Psychology, recently published their findings in the paper “Language comprehension warps the mirror neuron system,” in Frontiers in Human Neuroscience. This research began with Zarr’s honors thesis.

“The MNS has been associated with many social behaviors, such as action understanding and empathy, as well as language understanding,” Glenberg explained. “Previous work has demonstrated that adapting the MNS can affect language comprehension. But no one had yet shown that the process of language comprehension can itself change the MNS.

“The question becomes, when Jane reads, ‘Cameron gives Annagrace the pencil,’ is she using her MNS just like when she sees Cameron give the pencil?” Glenberg asks. “To test this idea, we used the fact that the MNS is used in both action and perception of action, and the idea that repeated use of a neural system leads to adaptation of that system.   

“So, in the tests, participants read a bunch of transfer sentences,” Glenberg explained. “We then show them a bunch of videos of transfer. We have shown that after reading the sentences, people are impaired (a little bit) in perceiving the transfer in the videos, which means the reading modifies the same MNS used in action understanding.”

While the work explores the boundaries of a theory on comprehension, there are applications in which it could be employed, Glenberg said. 

“If language comprehension is a simulation process that uses neural systems of action, then perhaps we can better teach kids how to understand what they read by getting them to literally simulate the actions,” he explained.

Glenberg added that part of his ongoing research into the MNS, the system that allows us to decipher what we see and understand the intent of language, is to test the idea of simulation and how it can help Latino English language learners read better in English.

(Source: asunews.asu.edu)

Filed under mirror neuron system language acquisition language learning plasticity neuroscience science

108 notes

Sniffing Out Danger: Rutgers Scientists Say Fearful Memories Can Trigger Heightened Sense of Smell

Most people – including scientists – assumed we can’t just sniff out danger.

It was thought that we become afraid of an odor – such as leaking gas – only after information about a scary scent is processed by our brain.


But neuroscientists at Rutgers University studying the olfactory – sense of smell – system in mice have discovered that this fear reaction can occur at the sensory level, even before the brain has the opportunity to interpret that the odor could mean trouble.

In a new study published today in Science, John McGann, associate professor of behavioral and systems neuroscience in the Department of Psychology, and his colleagues report that neurons in the noses of laboratory animals reacted more strongly to threatening odors before the odor message was sent to the brain.

“What is surprising is that we tend to think of learning as something that only happens deep in the brain after conscious awareness,” says McGann, whose laboratory studies the sense of smell. “But now we see how the nervous system can become especially sensitive to threatening stimuli and that fear-learning can affect the signals passing from sensory organs to the brain.”

McGann and students Marley Kass and Michelle Rosenthal made this discovery by using light to observe activity in the brains of genetically engineered mice through a window in each mouse’s skull. They found that mice that received an electric shock simultaneously with a specific odor showed an enhanced response to the smell in the cells of the nose, before the message was delivered to the neurons in the brain.

This new research – which indicates that fearful memories can influence the senses – could help to better understand conditions like Post Traumatic Stress Disorder, in which feelings of anxiety and fear exist even though an individual is no longer in danger.

“We know that anxiety disorders like PTSD can sometimes be triggered by smell, like the smell of diesel exhaust for a soldier,” says McGann, who received funding from the National Institute of Mental Health and the National Institute on Deafness and Other Communication Disorders for this research. “What this study does is give us a new way of thinking about how this might happen.”

In their study, the scientists also discovered a heightened sensitivity to odors in the mice traumatized by shock. When these mice smelled the odor associated with the electrical shocks, the amount of neurotransmitter – chemicals that carry communications between nerve cells – released from the olfactory nerve into the brain was as big as if the odor were four times stronger than it actually was.

This created mice whose brains were hypersensitive to the fear-associated odors. Before now, scientists did not think that reward or punishment could influence how the sensory organs process information.

The next step in the continuing research, McGann says, is to determine whether the hypersensitivity to threatening odors can be reversed by using exposure therapy to teach the mice that the electrical shock is no longer associated with a specific odor. This could help develop a better understanding of fear learning that might someday lead to new therapeutic treatments for anxiety disorders in humans, he says.

(Source: news.rutgers.edu)

Filed under olfactory system memory fear learning anxiety disorders neuroscience science

321 notes

Even when test scores go up, some cognitive abilities don’t

To evaluate school quality, states require students to take standardized tests; in many cases, passing those tests is necessary to receive a high-school diploma. These high-stakes tests have also been shown to predict students’ future educational attainment and adult employment and income.


Such tests are designed to measure the knowledge and skills that students have acquired in school — what psychologists call “crystallized intelligence.” However, schools whose students have the highest gains on test scores do not produce similar gains in “fluid intelligence” — the ability to analyze abstract problems and think logically — according to a new study from MIT neuroscientists working with education researchers at Harvard University and Brown University.

In a study of nearly 1,400 eighth-graders in the Boston public school system, the researchers found that some schools have successfully raised their students’ scores on the Massachusetts Comprehensive Assessment System (MCAS). However, those schools had almost no effect on students’ performance on tests of fluid intelligence skills, such as working memory capacity, speed of information processing, and ability to solve abstract problems.

“Our original question was this: If you have a school that’s effectively helping kids from lower socioeconomic environments by moving up their scores and improving their chances to go to college, then are those changes accompanied by gains in additional cognitive skills?” says John Gabrieli, the Grover M. Hermann Professor of Health Sciences and Technology, professor of brain and cognitive sciences, and senior author of a forthcoming Psychological Science paper describing the findings.

Instead, the researchers found that educational practices designed to raise knowledge and boost test scores do not improve fluid intelligence. “It doesn’t seem like you get these skills for free in the way that you might hope, just by doing a lot of studying and being a good student,” says Gabrieli, who is also a member of MIT’s McGovern Institute for Brain Research.

Measuring cognition

This study grew out of a larger effort to find measures beyond standardized tests that can predict long-term success for students. “As we started that study, it struck us that there’s been surprisingly little evaluation of different kinds of cognitive abilities and how they relate to educational outcomes,” Gabrieli says.

The data for the Psychological Science study came from students attending traditional, charter, and exam schools in Boston. Some of those schools have had great success improving their students’ MCAS scores — a boost that studies have found also translates to better performance on the SAT and Advanced Placement tests.

The researchers calculated how much of the variation in MCAS scores was due to the school that students attended. For MCAS scores in English, schools accounted for 24 percent of the variation, and they accounted for 34 percent of the math MCAS variation. However, the schools accounted for very little of the variation in fluid cognitive skills — less than 3 percent for all three skills combined.
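The percentages above describe how much of the total score variation lies between schools rather than within them. A minimal sketch of that decomposition, using invented scores for two hypothetical schools (the study itself used a far richer statistical model):

```python
import random

def school_variance_share(scores_by_school):
    """Between-school variance as a fraction of total score variance."""
    all_scores = [s for scores in scores_by_school.values() for s in scores]
    grand_mean = sum(all_scores) / len(all_scores)
    # total sum of squares around the grand mean
    total_ss = sum((s - grand_mean) ** 2 for s in all_scores)
    # between-school sum of squares: each school's mean vs. the grand mean
    between_ss = sum(
        len(scores) * (sum(scores) / len(scores) - grand_mean) ** 2
        for scores in scores_by_school.values()
    )
    return between_ss / total_ss

random.seed(0)
# Invented data: two schools whose mean scores differ by 10 points,
# with within-school spread of about 5 points.
data = {
    "school_a": [70 + random.gauss(0, 5) for _ in range(200)],
    "school_b": [80 + random.gauss(0, 5) for _ in range(200)],
}
share = school_variance_share(data)
print(f"{share:.0%} of score variance lies between schools")
```

With the invented numbers above, roughly half the variance is between schools; the study's point is that the analogous share for fluid cognitive skills was under 3 percent.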

In one example of a test of fluid reasoning, students were asked to choose which of six pictures completed the missing pieces of a puzzle — a task requiring integration of information such as shape, pattern, and orientation.

“It’s not always clear what dimensions you have to pay attention to get the problem correct. That’s why we call it fluid, because it’s the application of reasoning skills in novel contexts,” says Amy Finn, an MIT postdoc and lead author of the paper.

Even stronger evidence came from a comparison of about 200 students who had entered a lottery for admission to a handful of Boston’s oversubscribed charter schools, many of which achieve strong improvement in MCAS scores. The researchers found that students who were randomly selected to attend high-performing charter schools did significantly better on the math MCAS than those who were not chosen, but there was no corresponding increase in fluid intelligence scores.

However, the researchers say their study is not about comparing charter schools and district schools. Rather, the study showed that while schools of both types varied in their impact on test scores, they did not vary in their impact on fluid cognitive skills. 

The researchers plan to continue tracking these students, who are now in 10th grade, to see how their academic performance and other life outcomes evolve. They have also begun to participate in a new study of high school seniors to track how their standardized test scores and cognitive abilities influence their rates of college attendance and graduation.

Implications for education

Gabrieli notes that the study should not be interpreted as critical of schools that are improving their students’ MCAS scores. “It’s valuable to push up the crystallized abilities, because if you can do more math, if you can read a paragraph and answer comprehension questions, all those things are positive,” he says.

He hopes that the findings will encourage educational policymakers to consider adding practices that enhance cognitive skills. Although many studies have shown that students’ fluid cognitive skills predict their academic performance, such skills are seldom explicitly taught.

“Schools can improve crystallized abilities, and now it might be a priority to see if there are some methods for enhancing the fluid ones as well,” Gabrieli says.

Some studies have found that educational programs that focus on improving memory, attention, executive function, and inductive reasoning can boost fluid intelligence, but there is still much disagreement over what programs are consistently effective.

(Source: web.mit.edu)

Filed under crystallized intelligence fluid intelligence cognition learning psychology neuroscience science

272 notes

Balancing old and new skills

To learn new motor skills, the brain must be plastic: able to rapidly change the strengths of connections between neurons, forming new patterns that accomplish a particular task. However, if the brain were too plastic, previously learned skills would be lost too easily.

A new computational model developed by MIT neuroscientists explains how the brain maintains the balance between plasticity and stability, and how it can learn very similar tasks without interference between them.

The key, the researchers say, is that neurons are constantly changing their connections with other neurons. However, not all of the changes are functionally relevant — they simply allow the brain to explore many possible ways to execute a certain skill, such as a new tennis stroke.

“Your brain is always trying to find the configurations that balance everything so you can do two tasks, or three tasks, or however many you’re learning,” says Robert Ajemian, a research scientist in MIT’s McGovern Institute for Brain Research and lead author of a paper describing the findings in the Proceedings of the National Academy of Sciences the week of Dec. 9. “There are many ways to solve a task, and you’re exploring all the different ways.”

As the brain explores different solutions, neurons can become specialized for specific tasks, according to this theory.

Noisy circuits

As the brain learns a new motor skill, neurons form circuits that can produce the desired output — a command that will activate the body’s muscles to perform a task such as swinging a tennis racket. Perfection is usually not achieved on the first try, so feedback from each effort helps the brain to find better solutions.

This works well for learning one skill, but complications arise when the brain is trying to learn many different skills at once.  Because the same distributed network controls related motor tasks, new modifications to existing patterns can interfere with previously learned skills.

“This is particularly tricky when you’re learning very similar things,” such as two different tennis strokes, says Institute Professor Emilio Bizzi, the paper’s senior author and a member of the McGovern Institute.

In a serial network such as a computer chip, this would be no problem — instructions for each task would be stored in a different location on the chip. However, the brain is not organized like a computer chip. Instead, it is massively parallel and highly connected — each neuron connects to, on average, about 10,000 other neurons.

That connectivity offers an advantage, however, because it allows the brain to test out many possible solutions to achieve combinations of tasks. The constant changes in these connections, which the researchers call hyperplasticity, are balanced by another inherent trait of neurons — they have a very low signal-to-noise ratio, meaning that they receive about as much useless information as useful input from their neighbors.

Most models of neural activity don’t include noise, but the MIT team says noise is a critical element of the brain’s learning ability. “Most people don’t want to deal with noise because it’s a nuisance,” Ajemian says. “We set out to try to determine if noise can be used in a beneficial way, and we found that it allows the brain to explore many solutions, but it can only be utilized if the network is hyperplastic.”

This model helps to explain how the brain can learn new things without unlearning previously acquired skills, says Ferdinando Mussa-Ivaldi, a professor of physiology at Northwestern University.

“What the paper shows is that, counterintuitively, if you have neural networks and they have a high level of random noise, that actually helps instead of hindering the stability problem,” says Mussa-Ivaldi, who was not part of the research team.

Without noise, the brain’s hyperplasticity would overwrite existing memories too easily. Conversely, low plasticity would not allow any new skills to be learned, because the tiny changes in connectivity would be drowned out by all of the inherent noise.
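The balance described here can be illustrated with a toy sketch (not the authors' actual model): a redundant two-weight "network" whose output must hit a target. Any pair of weights summing to the target solves the task, so large random weight changes, standing in for hyperplasticity and noise, can reshuffle the weights freely while a small error-correcting step, standing in for learning, keeps the output on target.

```python
import random

random.seed(1)
target = 10.0          # the "skill": output w[0] + w[1] must equal 10
w = [7.0, 3.0]         # one of infinitely many solutions
lr = 0.5               # strength of the corrective learning step

initial = list(w)
for _ in range(10_000):
    # hyperplasticity: large random changes to every connection, every step
    w = [wi + random.gauss(0, 0.5) for wi in w]
    # learning: error feedback nudges the summed output back toward target
    error = (w[0] + w[1]) - target
    w = [wi - lr * error / 2 for wi in w]

drift = abs(w[0] - initial[0]) + abs(w[1] - initial[1])
output_error = abs(w[0] + w[1] - target)
print(f"weights drifted by {drift:.1f}, output error {output_error:.2f}")
```

The individual weights typically end up far from where they started, yet the output stays close to the target — connections change constantly while the skill is preserved.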

The model is supported by anatomical evidence showing that neurons exhibit a great deal of plasticity even when learning is not taking place, as measured by the growth and formation of connections of dendrites — the tiny extensions that neurons use to communicate with each other.

Like riding a bike

The constantly changing connections explain why skills can be forgotten unless they are practiced often, especially if they overlap with other routinely performed tasks.

“That’s why an expert tennis player has to warm up for an hour before a match,” Ajemian says. The warm-up is not for the muscles; instead, the players need to recalibrate the neural networks, stored in the brain’s motor cortex, that control the different tennis strokes.

However, skills such as riding a bicycle, which do not overlap much with other common skills, are retained more easily. “Once you’ve learned something, if it doesn’t overlap or intersect with other skills, you will forget it but so slowly that it’s essentially permanent,” Ajemian says.

The researchers are now investigating whether this type of model could also explain how the brain forms memories of events, as well as motor skills.

Filed under plasticity memory learning neurons neural circuits neuroscience science

217 notes

Study finds crocodiles are cleverer than previously thought

It turns out the crocodile can be a shrewd hunter itself. A University of Tennessee, Knoxville, researcher has found that some crocodiles use lures to hunt their prey.

Vladimir Dinets, a research assistant professor in the Department of Psychology, is the first to observe two crocodilian species—muggers and American alligators—using twigs and sticks to lure birds, particularly during nest-building time.

The research is published in the current edition of Ethology, Ecology and Evolution. Dinets’ research is the first report of tool use by any reptiles, and also the first known case of predators timing the use of lures to a seasonal behavior of the prey—nest-building.

Dinets first observed the behavior in 2007, when he spotted crocodiles lying in shallow water along the edge of a pond in India with small sticks or twigs positioned across their snouts. The display potentially fooled nest-building birds, wading in the water in search of sticks, into thinking the twigs were simply floating on the surface. The crocodiles remained still for hours, and if a bird neared a stick, they would lunge.

To see if the stick-displaying was a form of clever predation, Dinets and his colleagues performed systematic observations of the reptiles for one year at four sites in Louisiana, including two rookery and two non-rookery sites. A rookery is a bird breeding ground. The researchers observed a significant increase in alligators displaying sticks on their snouts from March to May, the time when birds were building nests. Specifically, the reptiles at rookeries had sticks on their snouts during and after the nest-building season. At non-rookery sites, the reptiles used lures during the nest-building season.
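A seasonal comparison like this one can be sketched as a simple contingency-table test. The counts below are hypothetical, not taken from the paper; they only illustrate the kind of analysis involved in asking whether stick displays track the birds' nesting season.

```python
from scipy.stats import fisher_exact

# Hypothetical observation counts (invented for illustration).
# rows: nesting season vs. off-season
# columns: alligators displaying sticks vs. not
table = [[40, 160],   # nesting season: 40 of 200 observations with sticks
         [8, 192]]    # off-season:      8 of 200 observations with sticks
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio {odds_ratio:.1f}, p = {p_value:.2g}")
```

A large odds ratio with a small p-value would indicate that stick displays are significantly more frequent during the nesting season, which is the pattern the study reports for rookery sites.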

"This study changes the way crocodiles have historically been viewed," said Dinets. "They are typically seen as lethargic, stupid and boring but now they are known to exhibit flexible multimodal signaling, advanced parental care and highly coordinated group hunting tactics."

The observations could mean the behavior is more widespread within the reptilian group and could also shed light on how crocodiles’ extinct relatives—dinosaurs—behaved.

"Our research provides a surprising insight into previously unrecognized complexity of extinct reptile behavior," said Dinets. "These discoveries are interesting not just because they show how easy it is to underestimate the intelligence of even relatively familiar animals, but also because crocodilians are a sister taxon of dinosaurs and flying reptiles."

Dinets collaborated with J.C. and J.D. Brueggen from the St. Augustine Alligator Farm Zoological Park in St. Augustine, Fla. More of his crocodile research can be found in his book “Dragon Songs.”

Filed under crocodiles evolution intelligence learning alligators tool use neuroscience science
