Neuroscience

Articles and news from the latest research reports.

Posts tagged learning


Mathematical model shows how the brain remains stable during learning
Complex biochemical signals that coordinate fast and slow changes in neuronal networks keep the brain in balance during learning, according to an international team of scientists from the RIKEN Brain Science Institute in Japan, UC San Francisco (UCSF), and Columbia University in New York.

The work, reported on October 22 in the journal Neuron, culminates a six-year quest by a collaborative team from the three institutions to solve a decades-old question and opens the door to a more general understanding of how the brain learns and consolidates new experiences on dramatically different timescales.
Neuronal networks form a learning machine that allows the brain to extract and store new information from its surroundings via the senses. Researchers have long puzzled over how the brain remains sensitive to unexpected new experiences while staying stable during learning, two seemingly contradictory requirements.
A new model devised by this team of mathematicians and brain scientists shows how the brain’s network can learn new information while maintaining stability.
To address the problem, the team turned to a classic experimental system. After birth, the visual area of the brain’s cortex is rapidly modified as neurons adjust their responses to input from the left and right eyes, a phenomenon termed “ocular dominance plasticity,” or ODP. The discovery of this dramatic plasticity was recognized by the 1981 Nobel Prize in Physiology or Medicine awarded to David H. Hubel and Torsten N. Wiesel.
ODP learning contains a paradox that puzzled researchers—it relies on fast-acting changes in activity called “Hebbian plasticity” in which neural connections strengthen or weaken almost instantly depending on their frequency of use. However, acting alone, this process could lead to unstable activity levels.
In 2008, the UCSF team of Megumi Kaneko and Michael P. Stryker found that a second process, termed “homeostatic plasticity,” also controls ODP by tuning up the activity of the whole neural network on a slower timescale, much like adjusting the overall brightness of a TV screen without changing its images.
By modeling Hebbian and homeostatic plasticity together, mathematicians Taro Toyoizumi and Ken Miller of Columbia saw a possible resolution to the paradox of brain stability during learning. Dr. Toyoizumi, who is now at the RIKEN Brain Science Institute in Japan, explains, “We were running simulations of ODP using a conventional model. When we failed to reconcile Kaneko and Stryker’s data to the model, we had to develop a new theoretical solution.”
"It seemed important to explore the interactions between these two different types of plasticity to understand the computations performed by neurons in the visual area," Dr. Stryker adds. Testing the new mathematical model in an animal during experimental ODP was necessary, so the teams decided to collaborate.
The theory and experimental findings showed that fast Hebbian and slow homeostatic plasticity work together during learning, but only after each has independently assured stability on its own timescale. “The essential idea is that the fast and slow processes control separate biochemical factors,” said Dr. Miller.
"Our model solves the ODP paradox and may explain in general terms how learning occurs in other areas of the brain," said Dr. Toyoizumi. "Building on our general mathematical model for learning could reveal insights into new principles of brain capacities and diseases."


Filed under learning plasticity neural networks mathematical model neuroscience science


Mental Rest and Reflection Boost Learning
A new study, which may have implications for approaches to education, finds that brain mechanisms engaged when people allow their minds to rest and reflect on things they’ve learned before may boost later learning.
Scientists have already established that resting the mind, as in daydreaming, helps strengthen memories of events and retention of information. In a new twist, researchers at The University of Texas at Austin have shown that the right kind of mental rest, which strengthens and consolidates memories from recent learning tasks, helps boost future learning.
The results appear online this week in the journal Proceedings of the National Academy of Sciences.
Margaret Schlichting, a graduate student researcher, and Alison Preston, an associate professor of psychology and neuroscience, gave study participants two learning tasks that required memorizing different series of associated photo pairs. Between the tasks, participants rested and could think about anything they chose. Brain scans showed that those who used that time to reflect on what they had learned earlier in the day fared better on tests of the later material, especially where small threads of information between the two tasks overlapped. Participants seemed to be making connections that helped them absorb information later on, even if it was only loosely related to something they learned before.
"We’ve shown for the first time that how the brain processes information during rest can improve future learning," says Preston. "We think replaying memories during rest makes those earlier memories stronger, not just impacting the original content, but impacting the memories to come.
Until now, many scientists assumed that prior memories are more likely to interfere with new learning. This new study shows that at least in some situations, the opposite is true.
"Nothing happens in isolation," says Preston. "When you are learning something new, you bring to mind all of the things you know that are related to that new information. In doing so, you embed the new information into your existing knowledge."
Preston described how this new understanding might help teachers design more effective ways of teaching. Imagine a college professor is teaching students about how neurons communicate in the human brain, a process that shares some common features with an electric power grid. The professor might first cue the students to remember things they learned in a high school physics class about how electricity is conducted by wires.
"A professor might first get them thinking about the properties of electricity," says Preston. "Not necessarily in lecture form, but by asking questions to get students to recall what they already know. Then, the professor might begin the lecture on neuronal communication. By prompting them beforehand, the professor might help them reactivate relevant knowledge and make the new material more digestible for them."
This research was conducted with adult participants. The researchers will next study whether a similar dynamic is at work with children.


Filed under learning hippocampus mental rest memory psychology neuroscience science


Study finds action video games bolster sensorimotor skills

A study led by University of Toronto psychology researchers has found that people who play action video games such as Call of Duty or Assassin’s Creed seem to learn a new sensorimotor skill more quickly than non-gamers do.


A new sensorimotor skill, such as learning to ride a bike or typing, often requires a new pattern of coordination between vision and motor movement. With such skills, an individual generally moves from novice performance, characterized by a low degree of coordination, to expert performance, marked by a high degree of coordination. As a result of successful sensorimotor learning, one comes to perform these tasks efficiently and perhaps even without consciously thinking about them.

“We wanted to understand if chronic video game playing has an effect on sensorimotor control, that is, the coordinated function of vision and hand movement,” said graduate student Davood Gozli, who led the study with supervisor Jay Pratt.

To find out, they set up two experiments. In the first, 18 gamers (those who had played a first-person shooter game at least three times per week, for at least two hours each time, in the previous six months) and 18 non-gamers (who had little or no video game use in the past two years) performed a manual tracking task. Using a computer mouse, they were instructed to keep a small green cursor at the centre of a moving white-square target that followed a complicated, repeating pattern. The task probes sensorimotor control, because participants see the target movement and must coordinate their hand movements with what they see.

In the early stages of the task, the gamers’ performance was not significantly better than that of the non-gamers. “This suggests that while chronically playing action video games requires constant motor control, playing these games does not give gamers a reliable initial advantage in new and unfamiliar sensorimotor tasks,” said Gozli.

By the end of the experiment, all participants performed better as they learned the complex pattern of the target. The gamers, however, were significantly more accurate in following the repetitive motion than the non-gamers. “This is likely due to the gamers’ superior ability in learning a novel sensorimotor pattern, that is, their gaming experience enabled them to learn better than the non-gamers.”

In the next experiment, the researchers wanted to test whether the gamers’ superior performance was indeed a result of learning rather than simply of better sensorimotor control. To eliminate the learning component, they again had participants track a moving target, but this time the pattern of motion kept changing throughout the experiment. The result: neither the gamers nor the non-gamers improved over time, confirming that the gamers’ earlier advantage came from learning the repeating pattern rather than from better raw control.
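The logic of the two experiments can be sketched with a toy learner. The task, parameters, and update rule below are invented for illustration and are not the study's actual paradigm:

```python
import random

# Toy sketch: a learner predicts target positions step by step. With a
# repeating pattern it can gradually memorize the sequence, so error falls;
# when the pattern keeps changing there is nothing to memorize.

random.seed(1)

def run_condition(repeating, reps=50, length=20, rate=0.3):
    pattern = [random.random() for _ in range(length)]
    memory = [0.5] * length                  # learner's estimate of each position
    errors = []
    for _ in range(reps):
        if not repeating:                    # changing condition: fresh pattern
            pattern = [random.random() for _ in range(length)]
        total = 0.0
        for i, target in enumerate(pattern):
            total += abs(memory[i] - target)          # tracking error this step
            memory[i] += rate * (target - memory[i])  # nudge estimate toward target
        errors.append(total / length)
    return errors

repeating_err = run_condition(repeating=True)
changing_err = run_condition(repeating=False)
```

In the repeating condition the per-step error collapses across repetitions, mirroring the first experiment; in the changing condition it stays flat, mirroring the second.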

One of the benefits of playing action games may be an enhanced ability to precisely learn the dynamics of new sensorimotor tasks. Such skills are key, for example, in laparoscopic surgery, which involves high-precision manual control of remote surgical tools through a computer interface.

(Source: media.utoronto.ca)

Filed under video games motor movement vision learning eye-hand coordination neuroscience science


(Image caption: The complex shape of individual oligodendrocytes (OLs) and myelin in adult mice injected with tamoxifen. Credit: Sarah Jolly)
Myelin vital for learning new practical skills
New evidence of myelin’s essential role in learning and retaining new practical skills, such as playing a musical instrument, has been uncovered by UCL research. Myelin is a fatty substance that insulates the brain’s wiring and is a major constituent of ‘white matter’. It is produced by the brain and spinal cord into early adulthood as it is needed for many developmental processes, and although earlier studies of human white matter hinted at its involvement in skill learning, this is the first time it has been confirmed experimentally.
The study in mice, published in Science today, shows that new myelin must be made each time a skill is learned later in life and that the structure of the brain’s white matter changes during new practical activities by increasing the number of myelin-producing cells. Furthermore, the team say that once a new skill has been learnt, it is retained even after myelin production stops. These discoveries could prove important in finding ways to stimulate and improve learning, and in understanding myelin’s involvement in other brain processes, such as cognition.
For a child to learn to walk or an adult to master a new skill such as juggling, new brain circuit activity is needed and new connections are made across large distances and at high speeds between different parts of the brain and spinal cord. For this, electrical signals fire between neurons connected by “axons” – thread-like extensions of their outer surfaces which can be viewed as the ‘wire’ in the electric circuit. When new signals fire repeatedly along axons, the connections between the neurons strengthen, making them easier to fire in the same pattern in future. Neighbouring myelin-producing cells called oligodendrocytes (OLs) recognise the repeating signal and wrap myelin around the active circuit wiring. It is this activity-driven insulation that the team identified as essential for learning.
The team demonstrated that young adult mice need to make myelin to learn new motor skills but that new myelin does not need to be produced to recall and perform a pre-learned skill. They tested the ability of mice to learn to run on a complex wheel with irregularly spaced rungs. The study looked at thirty-six normal mice and thirty-two mice with a drug-controlled genetic switch to prevent new OLs and myelin from being made. They found the mice that were prevented from producing new myelin could not master the complex wheel, whereas those that could produce myelin did learn, with differences between the two groups’ abilities seen after only two hours of practice.
A second experiment looked at mice that were first allowed to learn to run on the complex wheel before being treated with the drug to prevent further myelin production. When the mice were later re-introduced to the complex wheel, they were immediately able to run at top speed without having to spend time re-learning. This shows that the inability to make new myelin did not affect the mouse’s running ability and that new myelin is not required to remember and perform a skill once learned; it is required only during the initial learning phase.
Lead researcher, Professor Bill Richardson, Director of the UCL Wolfson Institute for Biomedical Research, said: “From earlier studies of human white matter using advanced MRI technology, we thought OLs and myelin might be involved in some way in skill learning, so we decided to attack this idea experimentally. We were surprised how quickly we saw differences in the ability of mice from each group to learn how to run on the complex wheel, which shows just how fast the brain can respond to wrap newly-activated circuits in myelin and how this improves learning. This rapid response suggests that a number of alternative axon pathways might already exist in the brain that could be used to drive a particular sequence of movements, but it quickly works out which of those circuits is most efficient and both selects and protects its chosen route with myelin.
“We think these findings are really exciting as they open up opportunities to investigate the role of OLs and myelin in other brain processes, such as cognitive activities (like navigating through a maze), to see if the requirement for new myelin is general or specific to motor activity. I’m keen to find out the precise sequence of changes to OLs and myelin during learning and whether these changes are needed more in some parts of the brain than others, which might shed light on some of the mysteries still surrounding how the brain adapts and learns throughout life.”


Filed under myelin oligodendrocytes white matter motor activity learning neuroscience science


Working memory hinders learning in schizophrenia
A new study pinpoints working memory as a source of learning difficulties in people with schizophrenia.
Working memory is known to be affected in the millions of people — about 1 percent of the population — who have schizophrenia, but it has been unclear whether that has a specific role in making learning more difficult, said Anne Collins, a postdoctoral researcher at Brown University and lead author of the study.
“We really tend to think of learning as a unitary, single process, but really it is not,” said Collins, who in 2012 along with co-author Michael Frank, associate professor of cognitive, linguistic, and psychological sciences, developed an experimental task and a computational model of cognition that can distinguish the contributions of working memory and reinforcement in the learning process. “We thought we could try to disentangle that here and see if the impairment was in both aspects, or only one of them.”
In the new study in the Journal of Neuroscience, cognitive scientists Collins and Frank collaborated with schizophrenia experts James Waltz and James Gold of the University of Maryland to measure the effects of working memory and reinforcement in learning by applying these methods. They found that only working memory was a source of impairment.
Learning about learning’s components
To find that out, they marshaled 49 volunteers with schizophrenia and an otherwise comparable set of 36 people without the condition to participate in the specially designed learning task. In each round, participants were shown a set of images and then were asked to push one of three buttons when they saw each image. With each button push they were told whether they had hit the correct button for that image. Over time, through trial and error, participants could learn which picture called for which button. With perfect memory, one wouldn’t need to see an image more than three times to learn the right button to push when it appeared.
The task explicitly engages the brain’s systems for working memory (keeping each image–button association in mind) and for reinforcement learning (repeating an action that produced “correct” feedback and avoiding one that produced “incorrect”). Across rounds, the degree of reinforcement stayed the same, but the experimenters varied the number of images per set from two to six. What varied, therefore, was how heavily working memory was taxed.
What the researchers found was that for both people with schizophrenia and for controls, the larger the image set size, the more trials it took to learn to press the correct button consistently for each image and the longer it took to react to each stimulus. People with schizophrenia generally performed worse on the task than healthy controls.
Those results show that as the task involved more images it became harder, a matter of working memory, since the capacity to maintain information explicitly in memory is limited. But that alone did not prove that working memory was a source of learning problems for people with schizophrenia; they could also have been doing worse because of slower reinforcement learning.
To determine that, the researchers used their computational models of how learning occurs in the brain to fit the experimental data. They asked what parameters in the models needed to vary to accurately predict the behavior they measured in people with and without schizophrenia.
That analysis revealed that varying parameters of working memory, such as capacity, but not parameters of reinforcement learning, accounted best for differences in behavior between the groups.
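The general structure of such a model can be sketched as follows. This is a hypothetical mixture of a delta-rule reinforcement-learning module and a capacity-limited working-memory store in the spirit of the approach described above; all parameters (a capacity of 3, the learning rate, the softmax temperature) are invented for illustration, not taken from the published model:

```python
import math, random

# Illustrative sketch: choices come from a fast but capacity-limited
# working-memory (WM) store when available, otherwise from slow,
# incrementally learned reinforcement-learning (RL) values.

random.seed(0)

def softmax_choice(values, beta=5.0):
    weights = [math.exp(beta * v) for v in values]
    r = random.random() * sum(weights)
    for action, weight in enumerate(weights):
        r -= weight
        if r <= 0:
            return action
    return len(values) - 1

def simulate(set_size, capacity=3, alpha=0.1, n_actions=3, trials=300):
    correct = {s: random.randrange(n_actions) for s in range(set_size)}
    q = {s: [0.0] * n_actions for s in range(set_size)}   # RL action values
    wm = {}                                               # perfect but limited store
    hits = 0
    for t in range(trials):
        s = t % set_size
        a = wm[s] if s in wm else softmax_choice(q[s])
        r = 1.0 if a == correct[s] else 0.0
        hits += r
        q[s][a] += alpha * (r - q[s][a])   # slow delta-rule update
        if r and s not in wm:
            wm[s] = a                      # remember the correct mapping
            if len(wm) > capacity:         # over capacity: forget another item
                wm.pop(random.choice([k for k in wm if k != s]))
    return hits / trials

small = simulate(set_size=2)   # fits in WM: learned almost immediately
large = simulate(set_size=6)   # exceeds WM: must lean on slower RL
```

With a set size of two, nearly every trial after the first few is answered from working memory; with six items, some must fall back on the slower reinforcement values, so accuracy climbs more gradually, echoing the set-size effect described above.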
“With model-fitting techniques, I can look quantitatively, trial by trial, and see that the model predicts subjects’ choices,” Collins said. “The same model explains both the healthy group and the patient group, but with differences in parameters.”
That confirmed that working memory uniquely affected learning in people with schizophrenia, while reinforcement learning mechanisms did not, Collins said.
The study suggests that working memory could be a more important target than reinforcement learning among researchers and clinicians hoping to help improve learning for people with schizophrenia, Collins said.
Among mentally healthy people as well, the study illustrates that the different components of learning can be understood individually, even as they all interact in the brain to make learning happen.
“More broadly, it brings attention to the fact that we need to consider learning as a multiactor kind of behavior that can’t be just summarized by a single system,” Collins said. “It’s important to design tasks that can separate them out so we can extract different sources of variance and correctly match them to different neural systems.”

Working memory hinders learning in schizophrenia

A new study pinpoints working memory as a source of learning difficulties in people with schizophrenia.

Working memory is known to be affected in the millions of people — about 1 percent of the population — who have schizophrenia, but it has been unclear whether that has a specific role in making learning more difficult, said Anne Collins, a postdoctoral researcher at Brown University and lead author of the study.

“We really tend to think of learning as a unitary, single process, but really it is not,” said Collins, who in 2012 along with co-author Michael Frank, associate professor of cognitive, linguistic, and psychological sciences, developed an experimental task and a computational model of cognition that can distinguish the contributions of working memory and reinforcement in the learning process. “We thought we could try to disentangle that here and see if the impairment was in both aspects, or only one of them.”

In the new study in the Journal of Neuroscience, cognitive scientists Collins and Frank collaborated with schizophrenia experts James Waltz and James Gold of the University of Maryland to measure the effects of working memory and reinforcement in learning by applying these methods. They found that only working memory was a source of impairment.

Learning about learning’s components

To find that out, they marshaled 49 volunteers with schizophrenia and an otherwise comparable set of 36 people without the condition to participate in the specially designed learning task. In each round, participants were shown a set of images and then were asked to push one of three buttons when they saw each image. With each button push they were told whether they had hit the correct button for that image. Over time, through trial and error, participants could learn which picture called for which button. With perfect memory, one wouldn’t need to see an image more than three times to learn the right button to push when it appeared.

The task explicitly involves employing the brain’s systems for working memory (keeping each image–button association in mind) and for reinforcement learning (wanting to repeat an action that led to the feedback of “correct” and to avoid one that produced “incorrect”). But in different rounds while the degree of reinforcement remained the same, the experimenters varied the number of images in the sets the volunteers saw, from two to six. What varied, therefore, was the degree to which working memory was taxed.
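The claim above, that with perfect memory no image needs to be seen more than three times, can be checked with a quick simulation. This is an illustrative sketch of the task's logic, not the experimenters' actual software; only the three-button structure and the set sizes follow the article.

```python
import random

def presentations_to_learn(set_size, seed=0):
    """For each image, count how many presentations a perfect-memory
    learner needs before it knows which of the three buttons is correct.
    Such a learner never repeats a button it has seen marked 'incorrect'."""
    rng = random.Random(seed)
    counts = []
    for _ in range(set_size):
        correct = rng.randrange(3)
        buttons = [0, 1, 2]
        rng.shuffle(buttons)            # try the untried buttons in some order
        for n, button in enumerate(buttons, start=1):
            if button == correct:       # feedback says 'correct': learned
                counts.append(n)
                break
    return counts

for set_size in (2, 6):
    # at most 3 presentations per image, regardless of set size
    print(set_size, max(presentations_to_learn(set_size)))
```

The bound holds for any set size; what larger sets tax is the ability to hold all the image–button pairs in mind at once.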

What the researchers found was that for both people with schizophrenia and for controls, the larger the image set size, the more trials it took to learn to press the correct button consistently for each image and the longer it took to react to each stimulus. People with schizophrenia generally performed worse on the task than healthy controls.

Those results show that as the task involved more images, it became harder to do – a matter of working memory, since the capacity to maintain information explicitly in memory is limited – but that alone did not prove that working memory was a source of learning problems for people with schizophrenia. They could also have been doing worse because they made slower use of reinforcement feedback.

To determine that, the researchers used their computational models of how learning occurs in the brain to fit the experimental data. They asked what parameters in the models needed to vary to accurately predict the behavior they measured in people with and without schizophrenia.

That analysis revealed that varying parameters of working memory, such as capacity, but not parameters of reinforcement learning, accounted best for differences in behavior between the groups.
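The kind of dual-system model being fit can be sketched in miniature. The sketch below is loosely in the spirit of the Collins and Frank approach described in the article, with all parameter values invented: a slow Q-learning module learns from feedback while a capacity-limited working-memory store holds exact image–button answers, and shrinking the capacity parameter hurts accuracy at large set sizes.

```python
import random
from collections import OrderedDict

def simulate(set_size, wm_capacity, n_trials=600, alpha=0.05, eps=0.1, seed=1):
    """Illustrative dual-system learner (not the published model):
    a slow Q-learning module plus a capacity-limited working-memory
    store of exact image->button answers. Returns overall accuracy."""
    rng = random.Random(seed)
    correct = [rng.randrange(3) for _ in range(set_size)]
    Q = [[1 / 3] * 3 for _ in range(set_size)]
    wm = OrderedDict()                    # image -> remembered correct button
    hits = 0
    for _ in range(n_trials):
        img = rng.randrange(set_size)
        if img in wm:                     # working memory answers instantly
            choice = wm[img]
            wm.move_to_end(img)           # refresh recency
        elif rng.random() < eps:          # occasional exploration
            choice = rng.randrange(3)
        else:                             # otherwise rely on slow RL values
            choice = Q[img].index(max(Q[img]))
        reward = 1.0 if choice == correct[img] else 0.0
        hits += reward
        Q[img][choice] += alpha * (reward - Q[img][choice])
        if reward:                        # store the discovered answer in WM
            wm[img] = choice
            wm.move_to_end(img)
            while len(wm) > wm_capacity:
                wm.popitem(last=False)    # evict least recently used image
    return hits / n_trials

high = simulate(set_size=6, wm_capacity=6)
low = simulate(set_size=6, wm_capacity=2)
print(high > low)   # reduced WM capacity lowers accuracy at large set size
```

Fitting such a model to behavior asks which parameter (here, `wm_capacity` versus `alpha`) must differ to reproduce a group's data, which is the logic the analysis above applied.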

“With model-fitting techniques, I can look quantitatively, trial-by-trial and see that the model predicts subject’s choices,” she said. “The same model explains both the healthy group and the patient group, but with differences in parameters.”

That confirmed that working memory uniquely affected learning in people with schizophrenia, while reinforcement learning mechanisms did not, Collins said.

The study suggests that working memory could be a more important target than reinforcement learning among researchers and clinicians hoping to help improve learning for people with schizophrenia, Collins said.

Among mentally healthy people as well, the study illustrates that the different components of learning can be understood individually, even as they all interact in the brain to make learning happen.

“More broadly, it brings attention to the fact that we need to consider learning as a multiactor kind of behavior that can’t be just summarized by a single system,” Collins said. “It’s important to design tasks that can separate them out so we can extract different sources of variance and correctly match them to different neural systems.”

(Image: Shutterstock)

Filed under schizophrenia working memory learning reinforcement learning neuroscience science

394 notes

How curiosity changes the brain to enhance learning
The more curious we are about a topic, the easier it is to learn information about that topic. New research publishing online October 2 in the Cell Press journal Neuron provides insights into what happens in our brains when curiosity is piqued. The findings could help scientists find ways to enhance overall learning and memory in both healthy individuals and those with neurological conditions.
"Our findings potentially have far-reaching implications for the public because they reveal insights into how a form of intrinsic motivation—curiosity—affects memory. These findings suggest ways to enhance learning in the classroom and other settings," says lead author Dr. Matthias Gruber, of University of California at Davis.
For the study, participants rated their curiosity to learn the answers to a series of trivia questions. When they were later presented with a selected trivia question, there was a 14-second delay before the answer was provided, during which time the participants were shown a picture of a neutral, unrelated face. Afterwards, participants performed a surprise recognition memory test for the faces that had been presented, followed by a memory test for the answers to the trivia questions. During certain parts of the study, participants had their brains scanned via functional magnetic resonance imaging.
The study revealed three major findings. First, as expected, when people were highly curious to find out the answer to a question, they were better at learning that information. More surprising, however, was that once their curiosity was aroused, they showed better learning of entirely unrelated information (face recognition) that they encountered but were not necessarily curious about. People were also better able to retain the information learned during a curious state across a 24-hour delay. “Curiosity may put the brain in a state that allows it to learn and retain any kind of information, like a vortex that sucks in what you are motivated to learn, and also everything around it,” explains Dr. Gruber.
Second, the investigators found that when curiosity is stimulated, there is increased activity in the brain circuit related to reward. “We showed that intrinsic motivation actually recruits the very same brain areas that are heavily involved in tangible, extrinsic motivation,” says Dr. Gruber. This reward circuit relies on dopamine, a chemical messenger that relays messages between neurons.
Third, the team discovered that when curiosity motivated learning, there was increased activity in the hippocampus, a brain region that is important for forming new memories, as well as increased interactions between the hippocampus and the reward circuit. “So curiosity recruits the reward system, and interactions between the reward system and the hippocampus seem to put the brain in a state in which you are more likely to learn and retain information, even if that information is not of particular interest or importance,” explains principal investigator Dr. Charan Ranganath, also of UC Davis.
The findings could have implications for medicine and beyond. For example, the brain circuits that rely on dopamine tend to decline in function as people get older, or sooner in people with neurological conditions. Understanding the relationship between motivation and memory could therefore stimulate new efforts to improve memory in the healthy elderly and to develop new approaches for treating patients with disorders that affect memory. And in the classroom or workplace, learning what might be considered boring material could be enhanced if teachers or managers are able to harness the power of students’ and workers’ curiosity about something they are naturally motivated to learn.


Filed under curiosity hippocampus memory learning nucleus accumbens midbrain neuroscience science

3,723 notes

Why Wet Feels Wet: Understanding the Illusion of Wetness
Human sensitivity to wetness plays a role in many aspects of daily life. Whether feeling humidity, sweat or a damp towel, we often encounter stimuli that feel wet. Though it seems simple, feeling that something is wet is quite a feat because our skin does not have receptors that sense wetness. The concept of wetness, in fact, may be more of a “perceptual illusion” that our brain evokes based on our prior experiences with stimuli that we have learned are wet.
So how would a person know if he has sat on a wet seat or walked through a puddle? Researchers at Loughborough University and Oxylane Research proposed that wetness perception is intertwined with our ability to sense cold temperature and tactile sensations such as pressure and texture. They also examined the role of A-nerve fibers—sensory nerves that carry temperature and tactile information from the skin to the brain—and the effect of reduced nerve activity on wetness perception. Lastly, they hypothesized that because hairy skin is more sensitive to thermal stimuli, it would be more sensitive to wetness than glabrous skin (e.g., palms of the hands, soles of the feet), which is more sensitive to tactile stimuli.
Davide Filingeri et al. exposed 13 healthy male college students to warm, neutral and cold wet stimuli. They tested sites on the subjects’ forearms (hairy skin) and fingertips (glabrous skin). The researchers also performed the wet stimulus test with and without a nerve block. The nerve block was achieved by using an inflatable compression (blood pressure) cuff to attain enough pressure to dampen A-nerve sensitivity.
They found that wet perception increased as temperature decreased, meaning subjects were much more likely to sense cold wet stimuli than warm or neutral wet stimuli. The research team also found that the subjects were less sensitive to wetness when the A-nerve activity was blocked and that hairy skin is more sensitive to wetness than glabrous skin. These results contribute to the understanding of how humans interpret wetness and present a new model for how the brain processes this sensation.
“Based on a concept of perceptual learning and Bayesian perceptual inference, we developed the first neurophysiological model of cutaneous wetness sensitivity centered on the multisensory integration of cold-sensitive and mechanosensitive skin afferents,” the research team wrote. “Our results provide evidence for the existence of a specific information processing model that underpins the neural representation of a typical wet stimulus.”
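The Bayesian inference idea in that passage can be illustrated with a toy cue-combination calculation. All the likelihood numbers below are invented for illustration: a strong cold cue and a weaker tactile cue are combined into a posterior probability that the skin is wet, and removing the cold cue (as when the stimulus is warm, or the A-nerve signal is blocked) sharply weakens the inference.

```python
def p_wet(cold_cue, tactile_cue, prior=0.5):
    """Toy Bayesian combination of two conditionally independent cues.
    Likelihood values are invented: cold skin is treated as a strong
    indicator of wetness, tactile 'slip' as a weaker one."""
    like = {
        "cold":    (0.8, 0.2),   # P(cue | wet), P(cue | dry): strong cue
        "tactile": (0.6, 0.4),   # weaker cue
    }
    num = prior                  # unnormalized posterior for 'wet'
    den = 1 - prior              # unnormalized posterior for 'dry'
    for name, present in (("cold", cold_cue), ("tactile", tactile_cue)):
        p_w, p_d = like[name]
        num *= p_w if present else (1 - p_w)
        den *= p_d if present else (1 - p_d)
    return num / (num + den)

print(round(p_wet(True, True), 2))    # -> 0.86  both cues: confident 'wet'
print(round(p_wet(False, True), 2))   # -> 0.27  cold cue missing: weak 'wet'
```

This mirrors the experimental pattern: cold wet stimuli were readily judged wet, while warm stimuli and blocked A-nerve conditions reduced wetness perception.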
The article “Why wet feels wet? A neurophysiological model of human cutaneous wetness sensitivity” is published in the Journal of Neurophysiology.
(Image credit)


Filed under wetness sensitivity nerve fibers perception learning perceptual inference neuroscience science

219 notes

Vitamin D in diet might ease effects of age on memory
If you don’t want to dumb down with age, vitamin D may be the meal ticket.
A boosted daily dosage of the vitamin over several months helped middle-aged rats navigate a difficult water maze better than their lower-dosed cohorts, according to a study published online Monday in the journal Proceedings of the National Academy of Sciences.
The supplement appears to boost the machinery that helps recycle and repackage signaling chemicals that help neurons communicate with one another in a part of the brain that is central to memory and learning.
"This process is like restocking shelves in grocery stores," said study co-author Nada Porter, a biomedical pharmacologist at the University of Kentucky College of Medicine.
Read more


Filed under vitamin d memory learning cognitive decline cognitive function neuroscience science

135 notes

New learning mechanism for individual nerve cells

The traditional view is that learning is based on the strengthening or weakening of the contacts between the nerve cells in the brain. However, this has been challenged by new research findings from Lund University in Sweden. These indicate that there is also a third mechanism – a kind of clock function that gives individual nerve cells the ability to time their reactions.
“This means a dramatic increase in the brain’s learning capacity. The cells we have studied control the blink reflex, but there are many cells of the same type that control entirely different processes. It is therefore likely that the timing mechanism we have discovered also exists in other parts of the brain”, said Professor of neurophysiology Germund Hesslow.
Professor Hesslow and colleagues Fredrik Johansson and Dan-Anders Jirenhed have used ‘conditioned reflexes’ for the research. The principle comes from the Russian researcher Ivan Pavlov, who, around the turn of the last century, taught dogs to associate a certain sound with food so that they began to drool on hearing the sound.
In the present experiment, the researchers studied animals that learnt to associate a sound with a puff of air in the eye that caused them to blink. If the time between the sound and the puff of air was a quarter of a second, the animals blinked after a quarter of a second even if the puff of air was removed. If the time was changed to half a second, the animals blinked after half a second, and so on.
The prevalent theories in brain research state that this learnt timing mechanism is a result of strengthening or weakening of the contacts – or synapses – throughout a network of nerve cells. However, using super-thin electrodes, the Lund group have now shown that no networks are needed: one single cell can learn when it is time to react.
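The learned-delay behavior described above, responding at the trained interval even when the puff is omitted, can be caricatured as a single unit holding one adjustable delay parameter that is nudged toward each experienced sound-to-puff interval. This is only a schematic of the reported phenomenon; the article notes that the actual cellular mechanism is still being worked out, and the learning rate here is invented.

```python
def train_delay(intervals, lr=0.3, delay=0.0):
    """One 'cell' stores a single response delay (in seconds) and moves
    it a fraction `lr` of the way toward each experienced interval.
    A caricature of learned timing, not the cellular mechanism."""
    for interval in intervals:
        delay += lr * (interval - delay)
    return delay

# train at a 0.25 s sound-to-puff interval, then retrain at 0.5 s
d = train_delay([0.25] * 20)
print(round(d, 2))            # -> 0.25  blinks ~0.25 s after the tone
d = train_delay([0.5] * 20, delay=d)
print(round(d, 2))            # -> 0.5   after retraining, ~0.5 s
```

The point of the single-cell finding is that something functionally like this parameter appears to live inside one Purkinje cell, rather than being distributed across synaptic weights in a network.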
The cells which the researchers have studied are called Purkinje cells and are located in the cerebellum. The cerebellum is the part of the brain responsible for posture, balance and movement, and the researchers focused on those cells that control blinking.
This work is basic research, but possible future applications could include rehabilitation following a stroke, which often affects a patient’s movements. The findings could also have a bearing on conditions such as autism, ADHD and language problems, in which the cerebellum is believed to play a part.
“Intelligible speech is dependent on correct timing, so that the pauses between the sounds are right”, explained Germund Hesslow.
The new findings have already attracted attention in the research community: the internationally renowned memory researcher Charles Gallistel came all the way from Rutgers University in the spring to study the group’s work. Work is now continuing to study what transmitter substance and what receptor on the surface of the cell are responsible for the newly discovered timing mechanism.


Filed under nerve cells cerebellum purkinje cells learning neural activity neuroscience science

190 notes

Neuroscientists identify key role of language gene
Neuroscientists have found that a gene mutation that arose more than half a million years ago may be key to humans’ unique ability to produce and understand speech.
Researchers from MIT and several European universities have shown that the human version of a gene called Foxp2 makes it easier to transform new experiences into routine procedures. When they engineered mice to express humanized Foxp2, the mice learned to run a maze much more quickly than normal mice.
The findings suggest that Foxp2 may help humans with a key component of learning language — transforming experiences, such as hearing the word “glass” when we are shown a glass of water, into a nearly automatic association of that word with objects that look and function like glasses, says Ann Graybiel, an MIT Institute Professor, member of MIT’s McGovern Institute for Brain Research, and a senior author of the study.
“This really is an important brick in the wall saying that the form of the gene that allowed us to speak may have something to do with a special kind of learning, which takes us from having to make conscious associations in order to act to a nearly automatic-pilot way of acting based on the cues around us,” Graybiel says.
Wolfgang Enard, a professor of anthropology and human genetics at Ludwig-Maximilians University in Germany, is also a senior author of the study, which appears in the Proceedings of the National Academy of Sciences this week. The paper’s lead authors are Christiane Schreiweis, a former visiting graduate student at MIT, and Ulrich Bornschein of the Max Planck Institute for Evolutionary Anthropology in Germany.
All animal species communicate with each other, but humans have a unique ability to generate and comprehend language. Foxp2 is one of several genes that scientists believe may have contributed to the development of these linguistic skills. The gene was first identified in a group of family members who had severe difficulties in speaking and understanding speech, and who were found to carry a mutated version of the Foxp2 gene.
In 2009, Svante Pääbo, director of the Max Planck Institute for Evolutionary Anthropology, and his team engineered mice to express the human form of the Foxp2 gene, which encodes a protein that differs from the mouse version by only two amino acids. His team found that these mice had longer dendrites — the slender extensions that neurons use to communicate with each other — in the striatum, a part of the brain implicated in habit formation. They were also better at forming new synapses, or connections between neurons.
Pääbo, who is also an author of the new PNAS paper, and Enard enlisted Graybiel, an expert in the striatum, to help study the behavioral effects of replacing Foxp2. They found that the mice with humanized Foxp2 were better at learning to run a T-shaped maze, in which the mice must decide whether to turn left or right at a T-shaped junction, based on the texture of the maze floor, to earn a food reward.
The first phase of this type of learning requires using declarative memory, or memory for events and places. Over time, these memory cues become embedded as habits and are encoded through procedural memory — the type of memory necessary for routine tasks, such as driving to work every day or hitting a tennis forehand after thousands of practice strokes.
Using another type of maze called a cross-maze, Schreiweis and her MIT colleagues were able to test the mice’s ability in each type of memory alone, as well as the interaction of the two types. They found that the mice with humanized Foxp2 performed the same as normal mice when just one type of memory was needed, but their performance was superior when the learning task required them to convert declarative memories into habitual routines. The key finding was therefore that the humanized Foxp2 gene makes it easier to turn mindful actions into behavioral routines.
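The declarative-to-procedural handoff these mazes probe can be caricatured as a choice process that shifts, with practice, from a deliberate lookup to a cached habit. The exponential transfer schedule and rates below are invented for illustration; in this picture, the humanized-Foxp2 advantage would amount to a larger per-trial transfer rate.

```python
def habit_weight(trial, rate=0.05):
    """Probability that the cached (procedural) response drives the
    choice on a given trial; rises toward 1 with practice. The
    exponential schedule and rate values are invented."""
    return 1.0 - (1.0 - rate) ** trial

# early in training the declarative lookup dominates...
print(round(habit_weight(1), 2))     # -> 0.05
# ...after extensive practice the habit does
print(round(habit_weight(100), 2))   # -> 0.99
# a larger transfer rate reaches the habitual regime sooner
print(habit_weight(30, rate=0.10) > habit_weight(30, rate=0.05))  # -> True
```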
The protein produced by Foxp2 is a transcription factor, meaning that it turns other genes on and off. In this study, the researchers found that Foxp2 appears to turn on genes involved in the regulation of synaptic connections between neurons. They also found enhanced dopamine activity in a part of the striatum that is involved in forming procedures. In addition, the neurons of some striatal regions could be turned off for longer periods in response to prolonged activation — a phenomenon known as long-term depression, which is necessary for learning new tasks and forming memories.
Together, these changes help to “tune” the brain differently to adapt it to speech and language acquisition, the researchers believe. They are now further investigating how Foxp2 may interact with other genes to produce its effects on learning and language.
This study “provides new ways to think about the evolution of Foxp2 function in the brain,” says Genevieve Konopka, an assistant professor of neuroscience at the University of Texas Southwestern Medical Center who was not involved in the research. “It suggests that human Foxp2 facilitates learning that has been conducive for the emergence of speech and language in humans. The observed differences in dopamine levels and long-term depression in a region-specific manner are also striking and begin to provide mechanistic details of how the molecular evolution of one gene might lead to alterations in behavior.”

Neuroscientists identify key role of language gene

Neuroscientists have found that a gene mutation that arose more than half a million years ago may be key to humans’ unique ability to produce and understand speech.

Researchers from MIT and several European universities have shown that the human version of a gene called Foxp2 makes it easier to transform new experiences into routine procedures. When they engineered mice to express humanized Foxp2, the mice learned to run a maze much more quickly than normal mice.

The findings suggest that Foxp2 may help humans with a key component of learning language — transforming experiences, such as hearing the word “glass” when we are shown a glass of water, into a nearly automatic association of that word with objects that look and function like glasses, says Ann Graybiel, an MIT Institute Professor, member of MIT’s McGovern Institute for Brain Research, and a senior author of the study.

“This really is an important brick in the wall saying that the form of the gene that allowed us to speak may have something to do with a special kind of learning, which takes us from having to make conscious associations in order to act to a nearly automatic-pilot way of acting based on the cues around us,” Graybiel says.

Wolfgang Enard, a professor of anthropology and human genetics at Ludwig-Maximilians University in Germany, is also a senior author of the study, which appears in the Proceedings of the National Academy of Sciences this week. The paper’s lead authors are Christiane Schreiweis, a former visiting graduate student at MIT, and Ulrich Bornschein of the Max Planck Institute for Evolutionary Anthropology in Germany.

All animal species communicate with each other, but humans have a unique ability to generate and comprehend language. Foxp2 is one of several genes that scientists believe may have contributed to the development of these linguistic skills. The gene was first identified in a group of family members who had severe difficulties in speaking and understanding speech, and who were found to carry a mutated version of the Foxp2 gene.

In 2009, Svante Pääbo, director of the Max Planck Institute for Evolutionary Anthropology, and his team engineered mice to express the human form of the Foxp2 gene, which encodes a protein that differs from the mouse version by only two amino acids. His team found that these mice had longer dendrites — the slender extensions that neurons use to communicate with each other — in the striatum, a part of the brain implicated in habit formation. They were also better at forming new synapses, or connections between neurons.

Pääbo, who is also an author of the new PNAS paper, and Enard enlisted Graybiel, an expert in the striatum, to help study the behavioral effects of replacing Foxp2. They found that the mice with humanized Foxp2 were better at learning to run a T-shaped maze, in which the mice must decide whether to turn left or right at the junction, based on the texture of the maze floor, to earn a food reward.

The first phase of this type of learning requires using declarative memory, or memory for events and places. Over time, these memory cues become embedded as habits and are encoded through procedural memory — the type of memory necessary for routine tasks, such as driving to work every day or hitting a tennis forehand after thousands of practice strokes.

Using another type of maze called a cross-maze, Schreiweis and her MIT colleagues were able to test the mice’s ability in each type of memory alone, as well as the interaction of the two types. They found that the mice with humanized Foxp2 performed the same as normal mice when just one type of memory was needed, but their performance was superior when the learning task required them to convert declarative memories into habitual routines. The key finding was therefore that the humanized Foxp2 gene makes it easier to turn mindful actions into behavioral routines.

The protein produced by Foxp2 is a transcription factor, meaning that it turns other genes on and off. In this study, the researchers found that Foxp2 appears to turn on genes involved in the regulation of synaptic connections between neurons. They also found enhanced dopamine activity in a part of the striatum that is involved in forming procedures. In addition, synapses in some striatal regions showed prolonged weakening in response to sustained activation — a phenomenon known as long-term depression, which is necessary for learning new tasks and forming memories.

Together, these changes help to “tune” the brain differently to adapt it to speech and language acquisition, the researchers believe. They are now further investigating how Foxp2 may interact with other genes to produce its effects on learning and language.

This study “provides new ways to think about the evolution of Foxp2 function in the brain,” says Genevieve Konopka, an assistant professor of neuroscience at the University of Texas Southwestern Medical Center who was not involved in the research. “It suggests that human Foxp2 facilitates learning that has been conducive for the emergence of speech and language in humans. The observed differences in dopamine levels and long-term depression in a region-specific manner are also striking and begin to provide mechanistic details of how the molecular evolution of one gene might lead to alterations in behavior.”

Filed under Foxp2 gene mutation language language acquisition speech learning neuroscience science