Neuroscience

Articles and news from the latest research reports.

In the brain, timing is everything
Suppose you heard the sound of skidding tires, followed by a car crash. The next time you heard such a skid, you might cringe in fear, expecting a crash to follow — suggesting that somehow, your brain had linked those two memories so that a fairly innocuous sound provokes dread.
MIT neuroscientists have now discovered how two neural circuits in the brain work together to control the formation of such time-linked memories. This is a critical ability that helps the brain to determine when it needs to take action to defend against a potential threat, says Susumu Tonegawa, the Picower Professor of Biology and Neuroscience and senior author of a paper describing the findings in the Jan. 23 issue of Science.
“It’s important for us to be able to associate things that happen with some temporal gap,” says Tonegawa, who is a member of MIT’s Picower Institute for Learning and Memory. “For animals it is very useful to know what events they should associate, and what not to associate.”
The interaction of these two circuits allows the brain to maintain a balance between becoming too easily paralyzed with fear and being too careless, which could result in being caught off guard by a predator or other threat.
The paper’s lead authors are Picower Institute postdocs Takashi Kitamura and Michele Pignatelli.
Linking memories
Memories of events, known as episodic memories, always contain three elements — what, where, and when. Those memories are created in a brain structure called the hippocampus, which must coordinate each of these three elements.
To form episodic memories, the hippocampus also communicates with the region of the cerebral cortex just outside the hippocampus, known as the entorhinal cortex. The entorhinal cortex, which has several layers, receives sensory information, such as sights and sounds, from sensory processing areas of the brain and sends the information on to the hippocampus.
Previous research has revealed a great deal about how the brain links the place and object components of memory. Certain neurons in the hippocampus, known as place cells, are specialized to fire when an animal is in a specific location, and also when the animal is remembering that location. However, when it comes to associating objects and time, “our understanding has fallen behind,” Tonegawa says. “Something is known, but relatively little compared to the object-place mechanism.”
The new Science paper builds on a 2011 study from Tonegawa’s lab in which he identified a brain circuit necessary for mice to link memories of two events — a tone and a mild electric shock — that occur up to 20 seconds apart. This circuit connects layer 3 of the entorhinal cortex to the CA1 region of the hippocampus. When that circuit, known as the monosynaptic circuit, was disrupted, the animals did not learn to fear the tone.
In the new paper, the researchers report the discovery of a previously unknown circuit that suppresses the monosynaptic circuit. This signal originates in a type of excitatory neuron discovered in Tonegawa’s lab, dubbed “island cells” because they form circular clusters within layer 2 of the entorhinal cortex. Those cells stimulate inhibitory neurons in CA1 that suppress the set of excitatory CA1 neurons activated by the monosynaptic circuit.
This circuit creates a counterbalance that limits the window of opportunity for two events to become linked. “This pathway might provide a mechanism for preventing constant learning of unimportant temporal associations,” says Michael Hasselmo, a professor of psychology at Boston University who was not part of the research team.
The findings are “an important demonstration of the functional role of different populations of neurons in entorhinal cortex that provide input to the hippocampus,” Hasselmo adds.
Deciphering circuits
The researchers used optogenetics, a technology that allows specific populations of neurons to be turned on or off with light, to demonstrate the interplay of these two circuits.
In normal mice, the maximum time gap between events that can be linked is about 20 seconds, but the researchers could lengthen that period by either boosting activity of layer 3 cells or suppressing layer 2 island cells. Conversely, they could shorten the window of opportunity by inhibiting layer 3 cells or stimulating input from layer 2 island cells, both of which turn down CA1 activity.
The researchers hypothesize that prolonged CA1 activity keeps the memory of the tone alive long enough so that it is still present when the shock takes place, allowing the two memories to be linked. They are now investigating whether CA1 neurons remain active throughout the entire gap between events.
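That hypothesis can be caricatured with a toy model (an illustration only, not the researchers’ actual analysis): the first stimulus leaves a decaying trace in CA1, two events become linked only if that trace is still above threshold when the second event arrives, and manipulating the two circuits corresponds to slowing or speeding the trace’s decay. The 20-second window comes from the article; the exponential form and the factor-of-two manipulations are assumptions.

```python
import math

def trace(t_elapsed, tau):
    """Exponentially decaying memory trace of the first stimulus."""
    return math.exp(-t_elapsed / tau)

def events_linked(gap_seconds, tau, threshold=0.5):
    """Two events are associated only if the first event's trace is
    still above threshold when the second event occurs."""
    return trace(gap_seconds, tau) >= threshold

# Decay constant chosen so the baseline linking window is ~20 s
# (the trace halves in 20 s).
tau_normal = 20 / math.log(2)
tau_boosted = 2 * tau_normal       # "boost layer 3 / suppress island cells"
tau_suppressed = 0.5 * tau_normal  # "inhibit layer 3 / stimulate island cells"

for label, tau in [("normal", tau_normal),
                   ("boosted", tau_boosted),
                   ("suppressed", tau_suppressed)]:
    window = tau * math.log(2)  # largest gap still linked at threshold 0.5
    print(f"{label}: linking window ~ {window:.0f} s")
```

Under these assumptions, doubling the decay constant doubles the linking window (to about 40 s), and halving it shrinks the window to about 10 s, mirroring the lengthening and shortening the optogenetic manipulations produced.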

Filed under episodic memory hippocampus entorhinal cortex place cells neuroscience science

Watching Molecules Morph into Memories
In two studies in the January 24 issue of Science (1, 2), researchers at Albert Einstein College of Medicine of Yeshiva University used advanced imaging techniques to provide a window into how the brain makes memories. These insights into the molecular basis of memory were made possible by a technological tour de force never before achieved in animals: a mouse model developed at Einstein in which molecules crucial to making memories were given fluorescent “tags” so they could be observed traveling in real time in living brain cells.
Efforts to discover how neurons make memories have long confronted a major roadblock: Neurons are extremely sensitive to any kind of disruption, yet only by probing their innermost workings can scientists view the molecular processes that culminate in memories. To peer deep into neurons without harming them, Einstein researchers developed a mouse model in which they fluorescently tagged all molecules of messenger RNA (mRNA) that code for beta-actin protein – an essential structural protein found in large amounts in brain neurons and considered a key player in making memories. mRNA is a family of RNA molecules that copy DNA’s genetic information and serve as the templates for the proteins that make life possible.
"It’s noteworthy that we were able to develop this mouse without having to use an artificial gene or other interventions that might have disrupted neurons and called our findings into question," said Robert Singer, Ph.D., the senior author of both papers and professor and co-chair of Einstein’s department of anatomy & structural biology and co-director of the Gruss Lipper Biophotonics Center at Einstein. He also holds the Harold and Muriel Block Chair in Anatomy & Structural Biology at Einstein.
In the research described in the two Science papers, the Einstein researchers stimulated neurons from the mouse’s hippocampus, where memories are made and stored, and then watched fluorescently glowing beta-actin mRNA molecules form in the nuclei of neurons and travel within dendrites, the neuron’s branched projections. They discovered that mRNA in neurons is regulated through a novel process described as “masking” and “unmasking,” which allows beta-actin protein to be synthesized at specific times and places and in specific amounts.
"We know the beta-actin mRNA we observed in these two papers was ‘normal’ RNA, transcribed from the mouse’s naturally occurring beta-actin gene," said Dr. Singer. "And attaching green fluorescent protein to mRNA molecules did not affect the mice, which were healthy and able to reproduce."
Neurons come together at synapses, where slender dendritic “spines” of neurons grasp each other, much as the fingers of one hand bind those of the other. Evidence indicates that repeated neural stimulation increases the strength of synaptic connections by changing the shape of these interlocking dendrite “fingers.” Beta-actin protein appears to strengthen these synaptic connections by altering the shape of dendritic spines. Memories are thought to be encoded when stable, long-lasting synaptic connections form between neurons in contact with each other.
The first paper describes the work of Hye Yoon Park, Ph.D., a postdoctoral student in Dr. Singer’s lab at the time and now an instructor at Einstein. Her research was instrumental in developing the mice containing fluorescent beta-actin mRNA—a process that took about three years.
Dr. Park stimulated individual hippocampal neurons of the mouse and observed newly formed beta-actin mRNA molecules within 10 to 15 minutes, indicating that nerve stimulation had caused rapid transcription of the beta-actin gene. Further observations suggested that these beta-actin mRNA molecules continuously assemble and disassemble into large and small particles, respectively. These mRNA particles were seen traveling to their destinations in dendrites where beta-actin protein would be synthesized.
In the second paper, lead author and graduate student Adina Buxbaum of Dr. Singer’s lab showed that neurons may be unique among cells in how they control the synthesis of beta-actin protein.
"Having a long, attenuated structure means that neurons face a logistical problem," said Dr. Singer. "Their beta-actin mRNA molecules must travel throughout the cell, but neurons need to control their mRNA so that it makes beta-actin protein only in certain regions at the base of dendritic spines."
Ms. Buxbaum’s research revealed the novel mechanism by which brain neurons handle this challenge. She found that as soon as beta-actin mRNA molecules form in the nucleus of hippocampal neurons and travel out to the cytoplasm, the mRNAs are packaged into granules and so become inaccessible for making protein. She then saw that stimulating the neuron caused these granules to fall apart, so that mRNA molecules became unmasked and available for synthesizing beta-actin protein.
But that observation raised a question: How do neurons prevent these newly liberated mRNAs from making more beta-actin protein than is desirable? “Ms. Buxbaum made the remarkable observation that mRNA’s availability in neurons is a transient phenomenon,” said Dr. Singer. “She saw that after the mRNA molecules make beta-actin protein for just a few minutes, they suddenly repackage and once again become masked. In other words, the default condition for mRNA in neurons is to be packaged and inaccessible.”
These findings suggest that neurons have developed an ingenious strategy for controlling how memory-making proteins do their job. “This observation that neurons selectively activate protein synthesis and then shut it off fits perfectly with how we think memories are made,” said Dr. Singer. “Frequent stimulation of the neuron would make mRNA available in frequent, controlled bursts, causing beta-actin protein to accumulate precisely where it’s needed to strengthen the synapse.”
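As a rough sketch, the masking/unmasking cycle described above can be written as a small state machine (a toy illustration with made-up time units and burst durations, not the study’s data): mRNA defaults to a masked, inaccessible state; a stimulus unmasks it for a brief burst of translation; then it repackages.

```python
MASKED, UNMASKED = "masked", "unmasked"

def simulate(stimulus_times, total_time, burst_duration=3):
    """Toy masking/unmasking cycle: mRNA is inaccessible by default,
    a stimulus dissolves the granule for `burst_duration` ticks of
    protein synthesis, then the mRNA repackages."""
    state, unmask_until, protein, log = MASKED, -1, 0, []
    for t in range(total_time):
        if t in stimulus_times:               # stimulation unmasks the mRNA
            state, unmask_until = UNMASKED, t + burst_duration
        if state == UNMASKED:
            protein += 1                      # translation only while unmasked
        log.append(state)
        if state == UNMASKED and t + 1 >= unmask_until:
            state = MASKED                    # default: repackaged, inaccessible
    return protein, log

# Two stimulations -> two short, self-limiting bursts of synthesis.
protein, log = simulate(stimulus_times={2, 10}, total_time=15)
```

The point of the sketch is the default: synthesis occurs only in short bursts around each stimulation, so repeated stimulation yields repeated, controlled accumulation of protein rather than runaway production.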
To gain further insight into memory’s molecular basis, the Singer lab is developing technologies for imaging neurons in the intact brains of living mice in collaboration with another Einstein faculty member in the same department, Vladislav Verkhusha, Ph.D. Since the hippocampus resides deep in the brain, they hope to develop infrared fluorescent proteins that emit light that can pass through tissue. Another possibility is a fiberoptic device that can be inserted into the brain to observe memory-making hippocampal neurons.

Filed under hippocampus animal model neuroimaging beta-actin neurons synapses memory neuroscience science

Unprecedented structural insights reveal how NMDA receptors can be blocked, to limit neurotoxicity
Structural biologists at Cold Spring Harbor Laboratory (CSHL) and collaborators at Emory University have obtained important scientific results likely to advance efforts to develop new drugs targeting NMDA receptors in the brain. 
NMDA (N-methyl D-aspartate) receptors are found on the surface of many nerve cells and are involved in signaling that is essential in basic brain functions including learning and memory formation. Problems with their function have been implicated in depression, schizophrenia, Alzheimer’s and Parkinson’s diseases, as well as brain damage caused by stroke.
Normally, NMDA receptors are activated by glutamate, the most common neurotransmitter of excitatory cell-to-cell messages in the brain.
Overactivation of NMDA receptors is a known cause of nerve-cell toxicity. Thus, drug developers have long sought compounds that can selectively block or antagonize NMDA receptors, while not affecting other types of glutamate receptors in the brain, whose function is essential. However, a basic question — how those compounds bind and antagonize NMDA receptors — has not been understood at the molecular level.
Over a period of years, CSHL Associate Professor Hiro Furukawa and colleagues have taken a step-by-step approach to learn the precise shape of various subunits of the complex NMDA receptor protein, and to demonstrate the relationship between different versions of the receptor’s shape and its function. Since the subunits have different biological roles, they have to be specifically targeted by drug compounds to obtain specific effects.
Furukawa’s team has used a technique called X-ray crystallography to map various domains of the protein while it is bound to different chemical compounds, or antagonists, that downregulate its function. Today in the journal Neuron they publish the first crystal structures of two NMDA receptor subunits (called GluN1 and GluN2A) in complex with four different compounds known to inhibit, or antagonize, NMDA receptor function.
Showing this two-unit ligand binding domain (LBD) in complex with NMDA antagonists — potential drugs — reveals that each antagonist has a distinctive mode of binding the LBD. In essence, the “docking port” is held open, but to a different extent when different antagonists are bound. The study also reveals an element in the antagonist binding site that is present only in the GluN2A subunit, not in the others. This previously hidden information, says Furukawa, is critical: “It indicates different strategies to develop therapeutic compounds – ones that bind to a certain type of NMDA receptors very specifically. Being able to target specific subtypes of the receptor is of enormous interest and has great therapeutic potential in a range of illnesses and injuries affecting the brain.”

Filed under NMDA receptors nerve cells glutamate x-ray crystallography neurotoxicity neuroscience science

‘Love hormone’ oxytocin carries unexpected side effect
The love hormone, the monogamy hormone, the cuddle hormone, the trust-me drug: oxytocin has many nicknames. That’s because this naturally occurring human hormone has recently been shown to help people with autism and schizophrenia overcome social deficits.
As a result, certain psychologists prescribe oxytocin off-label, to treat mild social unease in patients who don’t suffer from a diagnosed disorder. But that’s not such a good idea, according to researchers at Concordia’s Centre for Research in Human Development. Their recent study — published in Emotion, a journal of the American Psychological Association — shows that in healthy young adults, too much oxytocin can actually result in oversensitivity to the emotions of others.
With the help of psychology professor Mark Ellenbogen, PhD candidates Christopher Cardoso and Anne-Marie Linnen recruited 82 healthy young adults who showed no signs of schizophrenia, autism or related disorders. Half of the participants were given measured doses of oxytocin, while the rest were offered a placebo.
The participants then completed an emotion identification accuracy test in which they compared different facial expressions showing various emotional states. As expected, the test subjects who had taken oxytocin saw greater emotional intensity in the faces they were rating.
“For some, typical situations like dinner parties or job interviews can be a source of major social anxiety,” says Cardoso, the study’s lead author. “Many psychologists initially thought that oxytocin could be an easy fix in overcoming these worries. Our study proves that the hormone ramps up innate social reasoning skills, resulting in an emotional oversensitivity that can be detrimental in those who don’t have any serious social deficiencies.”
As Cardoso explains, “If your potential boss grimaces because she’s uncomfortable in her chair and you think she’s reacting negatively to what you’re saying, or if the guy you’re talking to at a party smiles to be friendly and you think he’s coming on to you, it can lead you to overreact — and that can be a real problem. That’s why we’re cautioning against giving oxytocin to people who don’t really need it.”
Ultimately, however, oxytocin does have the potential to help people with diagnosed disorders like autism to overcome social deficits.
But, says Cardoso, “The potential social benefits of oxytocin in most people may be countered by unintended negative consequences, like being too sensitive to emotional cues in everyday life.”

Filed under oxytocin emotions emotional oversensitivity social deficits psychology neuroscience science

201 notes

The Unexpected Power of Baby Math

TAU researcher finds that adults still think about numbers like kids

Children understand numbers differently than adults. For kids, one and two seem much further apart than 101 and 102, because two is twice as big as one, and 102 is just a little bigger than 101. It’s only after years of schooling that we’re persuaded to see the numbers in both sets as only one integer apart on a number line.

Now Dror Dotan, a doctoral student at Tel Aviv University’s School of Education and Sagol School of Neuroscience and Prof. Stanislas Dehaene of the Collège de France, a leader in the field of numerical cognition, have found new evidence that educated adults retain traces of their childhood, or innate, number sense — and that it’s more powerful than many scientists think.

"We were surprised when we saw that people never completely stop thinking about numbers as they did when they were children," said Dotan. "The innate human number sense has an impact, even on thinking about double-digit numbers." The findings, a significant step forward in understanding how people process numbers, could contribute to the development of methods to more effectively educate or treat children with learning disabilities and people with brain injuries.

Digital proof of a primal sense

Educated adults understand numbers “linearly,” based on the familiar number line from 0 to infinity. But children and uneducated adults, like tribespeople in the Amazon, understand numbers “logarithmically” — in terms of what percentage one number is of another. To analyze how educated adults process numbers in real time, Dotan and Dehaene asked the participants in their study to place numbers on a number line displayed on an iPad using a finger.

Previous studies showed that people who understand numbers linearly perform the task differently than people who understand numbers logarithmically. For example, linear thinkers place the number 20 in the middle of a number line marked from 0 to 40. But logarithmic thinkers like children may place the number 6 in the middle, because the ratio of 1 to 6 is roughly the same as the ratio of 6 to 40.
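The two mappings can be made concrete with a short sketch (an illustration of the idea, not the study’s materials): the linear rule spaces consecutive integers evenly, while the ratio-based rule gives equal spacing to equal multiplicative steps.

```python
import math

def linear_position(n, lo=0, hi=40):
    """Place n on a 0-1 scale: equal spacing between consecutive integers."""
    return (n - lo) / (hi - lo)

def log_position(n, lo=1, hi=40):
    """Place n on a 0-1 scale by ratio: equal multiplicative steps get
    equal spacing (lo is 1 here because log(0) is undefined)."""
    return (math.log(n) - math.log(lo)) / (math.log(hi) - math.log(lo))

# A linear thinker puts 20 exactly halfway along the 0-40 line,
# while a ratio-based mapping puts 6 near the middle, since
# 1:6 is roughly the same ratio as 6:40.
print(round(linear_position(20), 2))  # 0.5
print(round(log_position(6), 2))      # about 0.49
```

Note that on the ratio-based scale, 20 lands well to the right of center (around 0.81), which matches the direction of the brief drift the researchers observed.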

On the iPad used in the study, the participants were shown a number line marked only with “0” on one end and “40” on the other. Numbers popped up one at a time at the top of the iPad screen, and the participants dragged a finger from the middle of the screen down to the place on the number line where they thought each number belonged. Software tracked the path the finger took.

Changing course

Statistical analysis of the results showed that the participants placed the numbers on the number line in a linear way, as expected. But surprisingly — for only a few hundred milliseconds — they appeared to be influenced by their innate number sense. In the case of 20, for example, the participants drifted slightly rightward with their finger — toward where 20 would belong in a ratio-based number line — and then quickly corrected course. The results provide some of the most direct evidence to date that the innate number sense remains active, even if largely dormant, in educated adults.
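In principle, this transient influence can be quantified by asking how well each mapping predicts the finger’s horizontal position at a given moment. The sketch below is a toy version of that idea with made-up data, not the authors’ analysis code: finger positions at each time slice are expressed as a weighted mix of the linear and ratio-based predictors.

```python
import numpy as np

targets = np.array([5.0, 10.0, 20.0, 30.0, 35.0])
lin = targets / 40.0                  # linear predictor (0-1)
logp = np.log(targets) / np.log(40)   # ratio-based predictor (0-1)

# Made-up finger x-positions: an early time slice still contaminated
# by the ratio-based mapping, and a late slice that has converged
# on the linear one.
early = 0.7 * lin + 0.3 * logp
late = lin.copy()

def mapping_weights(x):
    """Least-squares weights of the linear and ratio-based predictors
    for one time slice of finger positions across targets."""
    A = np.column_stack([lin, logp])
    w, *_ = np.linalg.lstsq(A, x, rcond=None)
    return w

print(mapping_weights(early))  # noticeable weight on the ratio-based predictor
print(mapping_weights(late))   # essentially all weight on the linear predictor
```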

"It really looks like the two systems in the brain compete with each other," said Dotan.

Significantly, the drift effect was found with two-digit as well as one-digit numbers. Many researchers believe that people can only convert two-digit numbers into quantities using the learned linear numerical system, which processes the quantity of each digit separately — for example, 34 is processed as 3 tens plus 4 ones. But Dotan and Dehaene’s research showed that the innate number sense is, in fact, capable of handling the complexity of two-digit numbers as well.

(Source: aftau.org)

Filed under numerical cognition numbers number sense children adults psychology neuroscience science

199 notes

New genetic mutations shed light on schizophrenia

Researchers from the Broad Institute and several partnering institutions have taken a closer look at the human genome to learn more about the genetic underpinnings of schizophrenia. In two studies published this week in Nature (1, 2), scientists analyzed the exomes, or protein-coding regions, of people with schizophrenia and their healthy counterparts, pinpointing the sites of mutations and identifying patterns that reveal clues about the biology underlying the disorder.

Read more

Filed under schizophrenia genetic mutations genetics genomics neuroscience science

147 notes

Researchers reveal more about how our brains control our arms

Ready, set, go.

Sometimes that’s how our brains work. When we anticipate a physical act, such as reaching for the keys we noticed on the table, the neurons that control the task adopt a state of readiness, like sprinters bent into a crouch.

Other times, however, our neurons must simply react, such as if someone were to toss us the keys without gesturing first, to prepare us to catch.

How do the neurons in the brain control planned versus unplanned arm movements?

Krishna Shenoy, a Stanford professor of electrical engineering, neurobiology (by courtesy) and bioengineering (affiliate), wanted to answer that question as part of his group’s ongoing efforts to develop and improve brain-controlled prosthetic devices.

In a paper published today in the journal Neuron, Shenoy and first author Katherine Cora Ames, a doctoral student in the Neurosciences Graduate Program, present a mathematical analysis of the brain activity of monkeys as they make anticipated and unanticipated reaching motions.

Monitoring the neurons

The experimental data came from recording the electrical activity of neurons in the brain that control motor and premotor functions. The idea was to observe and understand the activity levels of these neurons during experiments in which the monkeys made planned or reactive arm movements. What the researchers found is that when the monkeys knew what arm movement they were supposed to make and were simply waiting for the cue to act, electrical readings showed that the neurons went into what scientists call the prepare-and-hold state – the brain’s equivalent of ready, set, waiting for the cue to go.

But when the monkeys made unplanned or unexpected movements, the neurons did not go through the expected prepare-and-hold state. “This was a surprise,” Ames said.

Before the experiment, the researchers had believed that a prepare-and-hold state had to precede movement. In short, they thought the neurons had to go into a “ready, set” crouch before acting on the “go” command. But they discovered otherwise in three variations of an experiment involving similar arm movements.

Experimental design

In all three cases, the monkeys were trained to touch a target that appeared on a display screen.

During each motion, the researchers measured the electrical activity of the neurons in control of arm movements.

In one set of experiments, the monkeys were shown the target but were trained not to touch it until they got the “go” signal. This is called a delayed reach experiment. It served as the planned action.

In a second set of experiments the monkeys were trained to touch the target as soon as it appeared. This served as the unplanned action.

In a third variant, the position of the target was changed. It briefly appeared in one location on the screen. The target then reappeared in a different location. This required the monkeys to revise their movement plan.

Monkey see, then monkey do

Ames said that, in all three instances, the first information to reach the neurons was awareness of the target.

“Perception always occurred first,” Ames said.

Then, about 50 milliseconds later, some differences appeared in the data. When the monkeys had to wait for the go command, the brain recordings showed that the neurons went into a discernible prepare-and-hold state. But in the other two cases, the neurons did not enter the prepare-and-hold state.

Instead, roughly 50 milliseconds after the electrical readings showed evidence of perception, a change in neuronal activity signaled the command to touch the target; it came with no apparent further preparation between perception and action. “Ready, set” was unnecessary. In these instances, the neurons just said, “Go!”

Applications

“This study changes our view of how movement is controlled,” Ames said. “First you get the information about where to move. Then comes the decision to move. There is no specific prepare-and-hold stage unless you are waiting for the signal to move.”

These nuanced understandings are important to Shenoy. His lab develops and improves electronic systems that can convert neural activity into electronic signals in order to control a prosthetic arm or move the cursor on a computer screen.

One example of such efforts is the BrainGate clinical trial here at Stanford, now being conducted under U.S. Food & Drug Administration supervision, to test the safety of brain-controlled computer cursor systems – “think-and-click” communication for people who can’t move.

“In addition to advancing basic brain science, these new findings will lead to better brain-controlled prosthetic arms and communication systems for people with paralysis,” Shenoy said.

Filed under arm movement prosthetics BCI neural activity robotics neurons neuroscience science

99 notes

Long-term spinal cord stimulation stalls symptoms of Parkinson’s-like disease

Researchers at Duke Medicine have shown that continuing spinal cord stimulation appears to produce improvements in symptoms of Parkinson’s disease, and may protect critical neurons from injury or deterioration.

The study, performed in rats, is published online Jan. 23, 2014, in the journal Scientific Reports. It builds on earlier findings from the Duke team that stimulating the spinal cord with electrical signals temporarily eased symptoms of the neurological disorder in rodents.

"Finding novel treatments that address both the symptoms and progressive nature of Parkinson’s disease is a major priority," said the study’s senior author Miguel Nicolelis, M.D., Ph.D., professor of neurobiology at Duke University School of Medicine. "We need options that are safe, affordable, effective and can last a long time. Spinal cord stimulation has the potential to do this for people with Parkinson’s disease."

Parkinson’s disease, which affects movement, muscle control and balance, is caused by the progressive loss of neurons that produce dopamine, an essential signaling molecule in the brain.

L-dopa, the standard drug treatment for Parkinson’s disease, works by replacing dopamine. While L-dopa helps many people, it can cause side effects and lose its effectiveness over time. Deep brain stimulation, which emits electrical signals from an implant in the brain, has emerged as another valuable therapy, but less than 5 percent of those with Parkinson’s disease qualify for this treatment.

"Even though deep brain stimulation can be very successful, the number of patients who can take advantage of this therapy is small, in part because of the invasiveness of the procedure," Nicolelis said.

In 2009, Nicolelis and his colleagues reported in the journal Science that they developed a device for rodents that sends electrical stimulation to the dorsal column, a main sensory pathway in the spinal cord carrying information from the body to the brain. The device was attached to the surface of the spinal cord in rodents with depleted levels of dopamine, mimicking the biologic characteristics of someone with Parkinson’s disease. When the stimulation was turned on, the animals’ slow, stiff movements were replaced with the active behaviors of healthy mice and rats.

Because research on spinal cord stimulation in animals has been limited to the stimulation’s acute effects, in the current study, Nicolelis and his colleagues investigated the long-term effects of the treatment in rats with the Parkinson’s-like disease.

For six weeks, the researchers applied electrical stimulation to a particular location in the dorsal column of the rats’ spinal cords twice a week for 30-minute sessions. They observed a significant improvement in the rats’ symptoms, including improved motor skills and a reversal of severe weight loss.

In addition to the recovery in clinical symptoms, the stimulation was associated with better survival of neurons and a higher density of dopaminergic innervation in two brain regions controlling movement – the loss of which causes Parkinson’s disease in humans. The findings suggest that the treatment protects against the loss or damage of neurons.

Clinicians are currently using a similar application of dorsal column stimulation to manage certain chronic pain syndromes in humans. Electrodes implanted over the spinal cord are connected to a portable generator, which produces electrical signals that create a tingling sensation to relieve pain. Studies in a small number of humans worldwide have shown that dorsal column stimulation may also be effective in restoring motor function in people with Parkinson’s disease.

"This is still a limited number of cases, so studies like ours are important in examining the basic science behind the treatment and the potential mechanisms of why it is effective," Nicolelis said.

The researchers are continuing to investigate how spinal cord stimulation works, and are beginning to explore using the technology in other neurological motor disorders.

Filed under spinal cord parkinson's disease spinal cord stimulation dopamine neurons neuroscience science

152 notes

Researchers identify innate channel that protects against pain

Scientists have identified a channel present in many pain detecting sensory neurons that acts as a ‘brake’, limiting spontaneous pain. It is hoped that the new research, published today [22 January] in the Journal of Neuroscience, will ultimately contribute to new pain relief treatments.

Spontaneous pain is ongoing pathological pain that occurs constantly (slow burning pain) or intermittently (sharp shooting pain) without any obvious immediate cause or trigger. The slow burning pain is the cause of much suffering and debilitation. Because the mechanisms underlying this type of slow burning pain are poorly understood, it remains very difficult to treat effectively.

Spontaneous pain of peripheral origin is pathological, and is associated with many types of disease, inflammation or damage of tissues, organs or nerves (neuropathic pain). Examples of neuropathic pain are nerve injury/crush, post-operative pain, and painful diabetic neuropathy.

Previous research has shown that this spontaneous burning pain is caused by continuous activity in small sensory nerve fibers, known as C-fiber nociceptors (pain neurons). Greater activity translates into greater pain, but what causes or limits this activity remained poorly understood.

Now, new research from the University of Bristol has identified a particular ion channel present exclusively in these C-fiber nociceptors. This ion channel, known as TREK2, is present in the membranes of these neurons, and the researchers showed that it provides a natural innate protection against this pain.

Ion channels are specialised proteins that are selectively permeable to particular ions. They form pores through the neuronal membrane. Leak potassium channels are unusual, in that they are open most of the time allowing positive potassium ions (K+) to leak out of the cell. This K+ leakage is the main cause of the negative membrane potentials in all neurons. TREK2 is one of these leak potassium channels. Importantly, the C-nociceptors that express TREK2 have much more negative membrane potentials than those that do not.
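The link between an open potassium leak and a more negative membrane potential can be illustrated with the standard Goldman-Hodgkin-Katz equation. The sketch below uses textbook mammalian ion concentrations, not measurements from this study:

```python
import math

R, T, F = 8.314, 310.0, 96485.0   # gas constant, body temperature (K), Faraday

def ghk_mv(pK, pNa, pCl=0.45,
           K_out=5.0, K_in=140.0, Na_out=145.0, Na_in=12.0,
           Cl_out=110.0, Cl_in=10.0):
    """Goldman-Hodgkin-Katz membrane potential in mV for textbook
    mammalian ion concentrations (mM); permeabilities are relative."""
    num = pK * K_out + pNa * Na_out + pCl * Cl_in
    den = pK * K_in + pNa * Na_in + pCl * Cl_out
    return 1000.0 * (R * T / F) * math.log(num / den)

# Baseline relative permeabilities (pK : pNa : pCl of 1 : 0.04 : 0.45)
# give a resting potential of roughly -67 mV.
print(round(ghk_mv(pK=1.0, pNa=0.04), 1))
# Doubling K+ permeability, as extra open leak channels would,
# pulls the potential closer to the K+ equilibrium (around -90 mV).
print(round(ghk_mv(pK=2.0, pNa=0.04), 1))
```

This is the same direction of effect the researchers describe: more K+ leak, a more negative and more stable membrane potential, and thus a neuron that is harder to fire.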

Researchers showed that when TREK2 was removed from the proximity of the cell membrane, the potential in those neurons became less negative. In addition, when the neuron was prevented from synthesizing the TREK2, the membrane potential also became less negative.

They also found that spontaneous pain associated with skin inflammation was increased by reducing the levels of synthesis of TREK2 in these C-fiber neurons.

They concluded that in these C-fiber nociceptors, TREK2 keeps the membrane potential more negative and more stable, reducing firing and thus limiting the amount of spontaneous burning pain.

Professor Sally Lawson, from the School of Physiology and Pharmacology at Bristol University, explained: “It became evident that TREK2 kept the C-fiber nociceptor membrane at a more negative potential. Despite the difficulties inherent in the study of spontaneous pain, and the lack of any drugs that can selectively block or activate TREK2, we demonstrated that TREK2 in C-fiber nociceptors is important for stabilizing their membrane potential and decreasing the likelihood of firing. It became apparent that TREK2 was thus likely to act as a natural innate protection against pain. Our data supported this, indicating that in chronic pain states, TREK2 is acting as a brake on the level of spontaneous pain.”

Dr Cristian Acosta, the first author on the paper and now working at the Institute of Histology and Embryology of Mendoza in Argentina, said, “Given the role of TREK2 in protecting against spontaneous pain, it is important to advance our understanding of the regulatory mechanisms controlling its expression and trafficking in these C-fiber nociceptors. We hope that this research will enable development of methods of enhancing the actions of TREK2 that could potentially some years hence provide relief for sufferers of ongoing spontaneous burning pain.”

(Source: eurekalert.org)

Filed under pain sensory neurons ion channels c-fiber nociceptors TREK2 neuroscience science

183 notes

Fast eye movements: A possible indicator of more impulsive decision-making

Using a simple study of eye movements, Johns Hopkins scientists report evidence that people who are less patient tend to move their eyes with greater speed. The findings, the researchers say, suggest that the weight people give to the passage of time may be a trait consistently used throughout their brains, affecting the speed with which they make movements, as well as the way they make certain decisions.

Caption: Despite claims to the contrary, the eyes of the Mona Lisa do not make saccades. Credit: Leonardo da Vinci

In a summary of the research to be published Jan. 21 in The Journal of Neuroscience, the investigators note that a better understanding of how the human brain evaluates time when making decisions might also shed light on why malfunctions in certain areas of the brain make decision-making harder for those with neurological disorders like schizophrenia, or for those who have experienced brain injuries.

Principal investigator Reza Shadmehr, Ph.D., professor of biomedical engineering and neuroscience at The Johns Hopkins University, and his team set out to understand why some people are willing to wait and others aren’t. “When I go to the pharmacy and see a long line, how do I decide how long I’m willing to stand there?” he asks. “Are those who walk away and never enter the line also the ones who tend to talk fast and walk fast, perhaps because of the way they value time in relation to rewards?”

To address the question, the Shadmehr team used very simple eye movements, known as saccades, to stand in for other bodily movements. Saccades are the motions that our eyes make as we focus on one thing and then another. “They are probably the fastest movements of the body,” says Shadmehr. “They occur in just milliseconds.” Human saccades are fastest when we are teenagers and slow down as we age, he adds.

In earlier work, using a mathematical theory, Shadmehr and colleagues had shown that, in principle, the speed at which people move could be a reflection of the way the brain calculates the passage of time to reduce the value of a reward. In the current study, the team wanted to test the idea that differences in how subjects moved were a reflection of differences in how they evaluated time and reward.

For the study, the team first asked healthy volunteers to look at a screen upon which dots would appear one at a time: first on one side of the screen, then on the other, then back again. A camera recorded their saccades as they looked from one dot to the other. The researchers found a lot of variability in saccade speed among individuals but very little variation within individuals, even when tested at different times and on different days. Shadmehr and his team concluded that saccade speed appears to be an attribute that varies from person to person. “Some people simply make fast saccades,” he says.

To determine whether saccade speed correlated with decision-making and impulsivity, the volunteers were told to watch the screen again. This time, they were given visual commands to look to the right or to the left. When they responded incorrectly, a buzzer sounded.

After becoming accustomed to that part of the test, they were forewarned that during the following round of testing, if they followed the command right away, they would be wrong 25 percent of the time. In those instances, after an undetermined amount of time, the first command would be replaced by a second command to look in the opposite direction.

To pinpoint exactly how long each volunteer was willing to wait to improve his or her accuracy on that phase of the test, the researchers adjusted the length of time between the two commands based on the volunteer’s previous decision. For example, if a volunteer chose to wait for the second command, the researchers lengthened the delay on each successive trial until they found the maximum time the volunteer was willing to wait (only 1.5 seconds for the most patient volunteer). If a volunteer chose to act immediately, the researchers shortened the delay to find the minimum time the volunteer was willing to wait to improve his or her accuracy.
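This adapt-on-each-decision procedure is, in effect, an up-down staircase. Below is a minimal sketch of the idea, with hypothetical parameters and a simulated volunteer standing in for real responses (not the study’s actual protocol):

```python
def staircase(chose_to_wait, delay=0.5, step=0.25, min_step=0.05):
    """Toy up-down staircase: raise the delay between the two commands
    after each decision to wait, lower it after each decision to act,
    and halve the step whenever the decision reverses, until the step
    is small enough to call the current delay a threshold.
    `chose_to_wait(delay)` is a hypothetical stand-in for the
    volunteer's decision at a given delay (True means they waited)."""
    last = None
    while step >= min_step:
        waited = chose_to_wait(delay)
        if last is not None and waited != last:
            step /= 2                    # reversal: tighten the search
        delay += step if waited else -step
        delay = max(delay, 0.0)          # a delay can't be negative
        last = waited
    return delay

# A simulated volunteer willing to wait up to about 1.5 seconds:
threshold = staircase(lambda d: d < 1.5)
print(round(threshold, 2))  # converges near 1.5
```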

When the speed of the volunteers’ saccades was compared to their impulsivity during the patience test, there was a strong correlation. “It seems that people who make quick movements, at least eye movements, tend to be less willing to wait,” says Shadmehr. “Our hypothesis is that there may be a fundamental link between the way the nervous system evaluates time and reward in controlling movements and in making decisions. After all, the decision to move is motivated by a desire to improve one’s situation, which is a strong motivating factor in more complex decision-making, too.”

(Source: eurekalert.org)

Filed under eye movements saccades decision making patience psychology neuroscience science
