Neuroscience

Articles and news from the latest research reports.

496 notes

Gesturing with hands is a powerful tool for children’s math learning
Children who use their hands to gesture during a math lesson gain a deep understanding of the problems they are taught, according to new research from the University of Chicago’s Department of Psychology.
Previous research has found that gestures can help children learn. This study was designed to test whether abstract gesture can support generalization beyond a particular problem, and whether it is a more effective teaching tool than concrete action.
“We found that acting gave children a relatively shallow understanding of a novel math concept, whereas gesturing led to deeper and more flexible learning,” explained the study’s lead author, Miriam A. Novack, a PhD student in psychology.
The study, “From action to abstraction: Using the hands to learn math,” is published online by Psychological Science.
The researchers taught third-grade children a strategy for solving one type of mathematical equivalence problem, for example, 4 + 2 + 6 = ____ + 6. They then tested the students on similar mathematical equivalence problems to determine how well they understood the underlying principle.
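The grouping strategy the children were taught can be sketched in code: cancel the addend that appears on both sides of the equation, then sum ("group") the remaining left-hand addends to fill the blank. This is an illustrative sketch of the strategy, not code from the study; the function name and interface are invented for this example.

```python
def solve_equivalence(left, right_known):
    """Solve a + b + c = ___ + c by 'grouping' the left-side
    addends that do not reappear on the right.

    left: list of addends on the left-hand side
    right_known: the addend repeated on the right-hand side
    Returns the value that belongs in the blank.
    """
    remaining = list(left)
    remaining.remove(right_known)  # cancel the shared addend
    return sum(remaining)          # group what is left

print(solve_equivalence([4, 2, 6], 6))  # -> 6 (since 4 + 2 = 6)
```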
The researchers randomly assigned 90 children to conditions in which they learned using different kinds of physical interaction with the material. In one group, children picked up magnetic number tiles and put them in the proper place in the formula. For example, for the problem 4 + 2 + 6 = ___ + 6, they picked up the 4 and 2 and placed them on a magnetic whiteboard. Another group mimed that action without actually touching the tiles, and a third group was taught to use abstract gestures with their hands to solve the equations. In the abstract gesture group, children were taught to produce a V-point gesture with their fingers under two of the numbers, metaphorically grouping them, followed by pointing a finger at the blank in the equation.
The children were tested before and after solving each problem in the lesson, including problems that required children to generalize beyond what they had learned in grouping the numbers. For example, they were given problems that were similar to the original one, but had different numbers on both sides of the equation.
Children in all three groups learned the problems they had been taught during the lesson. But only children who gestured during the lesson were successful on the generalization problems.
“Abstract gesture was most effective in encouraging learners to generalize the knowledge they had gained during instruction, action least effective, and concrete gesture somewhere in between,” said senior author Susan Goldin-Meadow, the Beardsley Ruml Distinguished Service Professor in Psychology. “Our findings provide the first evidence that gesture not only supports learning a task at hand but, more importantly, leads to generalization beyond the task. Children appear to learn underlying principles from their actions only insofar as those actions can be interpreted symbolically.”

Filed under mathematics learning psychology neuroscience science

170 notes

Researchers Model a Key Breaking Point Involved in Traumatic Brain Injury 
Even the mildest form of a traumatic brain injury, better known as a concussion, can deal permanent, irreparable damage. Now, an interdisciplinary team of researchers at the University of Pennsylvania is using mathematical modeling to better understand the mechanisms at play in this kind of injury, with an eye toward protecting the brain from its long-term consequences.
Their recent findings, published in the Biophysical Journal, shed new light on the mechanical properties of a critical brain protein and its role in the elasticity of axons, the long, tendril-like projections of brain cells. This protein, known as tau, helps explain the apparent contradiction this elasticity presents: if axons are so stretchy, why do they break under the strain of a traumatic brain injury?
Tau’s own elastic properties reveal why rapid impacts deal permanent damage to structures within axons, whereas applying the same force more slowly lets them stretch safely. This understanding can now be used to make computer models of the brain more realistic and can potentially be applied to tau-related diseases, such as Alzheimer’s.
The team consists of Vivek Shenoy, professor of materials science and engineering in the School of Engineering and Applied Science, Hossein Ahmadzadeh, a member of Shenoy’s lab, and Douglas Smith, professor of neurosurgery in Penn’s Perelman School of Medicine and director of the Penn Center for Brain Injury and Repair. 
“One of the main things you see in the brains of patients who have died because of a TBI is swellings along the axons,” Shenoy said. “Inside axons are microtubules, which act like tracks for transporting molecular cargo along the axon. When they break, there’s an interruption in the flow of this cargo and it starts to accumulate, which is why you get these swellings.”  
Smith had previously studied the mechanical properties of axons as a whole. By patterning axons in culture in parallel tracts, Smith and his colleagues could apply a stretch to the axons at different forces and speeds and measure how they responded.
“What we saw is that with slow loading rates, axons can stretch up to at least 100 percent with no signs of damage,” Smith said. “But at faster rates, axons start displaying the same swellings you see in the TBI patients. This process occurs even with relatively short stretch at fast rates. So the rate at which stretch is applied is the important component, such as occurs during rapid movement of the brain and stretching of axons due to head impact from a fall, assault or automobile crash.”
This observation still did not explain to researchers why microtubules, the stiffest part of the axon, were the parts that were breaking. To solve that puzzle, the researchers had to delve even deeper into their structure.
Microtubules are closely packed together inside axons, somewhat like a bundle of straws. Binding the individual straws together is the protein tau. Other biophysical modelers had previously accounted for the geometry and elastic properties of the axon during a stretching injury based on Smith’s work but did not have good data for representing tau’s role in the overall behavior of the system when it is loaded with stress over different lengths of time. 
“You need to know the elastic properties of tau,” Shenoy said, “because when you load the microtubules with stress, you load the tau as well. How these two parts distribute the stress between them is going to have major impact on the system as a whole.”
Shenoy and his colleagues had a sense of tau’s elastic properties but did not have hard numbers until a 2011 experiment from a Swiss and German research team physically stretched out lengths of tau by plucking it with the tip of an atomic force microscope.
“This experiment demonstrated that tau is viscoelastic,” Shenoy said. “Like Silly Putty, when you add stress to it slowly, it stretches a lot. But if you add stress to it rapidly, like in an impact, it breaks.”
This behavior is because the strands of tau protein are coiled up and bonded to themselves in different places. Pulled slowly, those bonds can come undone, lengthening the strand without breaking it. 
“The damage in traumatic brain injury occurs when the microtubules stretch but the tau doesn’t, as they can’t stretch as far,” Shenoy said. “If you’re in a situation where the tau doesn’t stretch, such as what happens in fast strain rates, then all the strain will transfer to the microtubules and cause them to break.”
With a comprehensive model of the tau-microtubule system, the researchers were able to boil down the outcome of rapid stress loading to equations with only a handful of variables. This mathematical understanding allowed the researchers to produce a phase diagram that shows the dividing line between strain rates that leave permanent damage versus safe and reversible loading and unloading of stress.
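The rate dependence at the heart of this result can be illustrated with a toy viscoelastic element: loaded slowly, the viscous part relaxes stress away, while at fast strain rates stress builds up and crosses a failure threshold. The sketch below uses a simple Maxwell-type element with made-up parameters as a stand-in; it is not the authors' model, and the failure threshold is hypothetical.

```python
def peak_stress(strain_rate, total_strain=1.0, E=1.0, tau=1.0, dt=1e-4):
    """Peak stress in a Maxwell viscoelastic element loaded at a
    constant strain rate up to `total_strain`.

    Governing equation: dsigma/dt = E * strain_rate - sigma / tau,
    i.e. elastic loading minus viscous relaxation (time constant tau).
    All units are arbitrary; parameters are illustrative only.
    """
    sigma = 0.0
    steps = int(total_strain / strain_rate / dt)
    peak = 0.0
    for _ in range(steps):  # forward-Euler integration
        sigma += (E * strain_rate - sigma / tau) * dt
        peak = max(peak, sigma)
    return peak

FAILURE_STRESS = 0.7  # hypothetical breaking threshold
for rate in (0.1, 1.0, 10.0):
    broke = peak_stress(rate) > FAILURE_STRESS
    print(f"strain rate {rate:>4}: {'breaks' if broke else 'stretches safely'}")
```

Sweeping the strain rate this way traces out the same kind of dividing line the researchers' phase diagram captures: the same total strain is safe when applied slowly and damaging when applied fast.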
“Predicting what kind of impacts will cause these strain rates is still a complicated problem,” Shenoy said. “I might be able to measure the force of the impact when it hits someone’s head, but that force then has to make its way down to the axons, which depends on a lot of different things.
“You need a multiscale model, and our work will be an input to those models on the smallest scale.”
In the longer term, knowing the parameters that lead to irreversible damage could lead to better understanding of brain injuries and diseases and to new preventive measures. It may even be possible to design drugs that alter microtubule stability and elasticity of axons in traumatic brain injury; Smith’s group has demonstrated that treatment with the microtubule-stabilizing drug taxol reduced the extent of axon swellings and degeneration after stretch injury.    
“Intriguingly, it may be no coincidence that tau is also the same protein that forms neurofibrillary tangles, one of the hallmark brain pathologies of chronic traumatic encephalopathy, or CTE, which is linked to a history of concussions and higher levels of TBI,” said Smith. “Uncovering the role of tau at the time of trauma may provide insight into how it is involved in long-term degenerative processes.”

Filed under TBI brain injury concussion tau protein microtubules neuroscience science

246 notes

Outside the body our memories fail us

New research from Karolinska Institutet and Umeå University in Sweden demonstrates for the first time that there is a close relationship between body perception and the ability to remember. For us to be able to store new memories from our lives, we need to feel that we are in our own body. According to the researchers, the results could be of major importance in understanding the memory problems that psychiatric patients often exhibit.

The memories of what happened on the first day of school are an example of episodic memory. How these memories are created, and what role the perception of one’s own body plays in storing them, has long been unclear. Swedish researchers can now demonstrate that volunteers who experience an exciting event whilst perceiving an illusion of being outside their own body exhibit a form of memory loss.

“It is already evident that people who have suffered psychiatric conditions in which they felt that they were not in their own body have fragmentary memories of what actually occurred”, says Loretxu Bergouignan, principal author of the current study. “We wanted to see how this manifests itself in healthy subjects.”

The study, which is published in the scientific journal PNAS, involved a total of 84 students, each undergoing four oral questioning sessions. To make these sessions extra memorable, an actor (Peter Bergared) took on the role of examiner – a (fictional) very eccentric professor at Karolinska Institutet. Two of the interrogations were perceived from a first-person perspective, from the participants’ own bodies in the usual way, while in the other two sessions the participants experienced a created illusion of being outside their own bodies. In both cases, the participants wore virtual reality goggles and earphones. One week later, they either underwent memory testing, in which they had to recall the events and provide details about what had happened, in which order, and what they felt, or they tried to remember the events while undergoing brain imaging with functional magnetic resonance imaging (fMRI).

The participants remembered the ‘out-of-body’ interrogations significantly worse than those experienced from the normal ‘in-body’ perspective. This was the case despite the fact that they responded equally well to the questions in each situation and indicated that they experienced the same level of emotion. The fMRI scans further revealed a crucial difference in activity in the part of the temporal lobe – the hippocampus – that is known to be central to episodic memory.

“When they tried to remember what happened during the interrogations experienced out-of-body, activity in the hippocampus was eliminated, unlike when they remembered the other situations. However, we could see activity in the frontal lobe cortex, so they were really making an effort to remember”, says Professor Henrik Ehrsson, leader of the research group behind the study.

The researchers’ interpretation of the results is that there is a close relationship between body experience and memory. Our brain constantly creates the experience of one’s own body in space by combining information from multiple senses: sight, hearing, touch, and more. When a memory is created, it is the task of the hippocampus to link all the information found in the cerebral cortex into a unified memory for further long-term storage. During the experience of being outside one’s body, this memory storage process is disturbed, whereupon the brain creates fragmentary memories instead.

“We believe that this new knowledge may be important for future research on memory disorders in a number of psychiatric conditions such as post-traumatic stress disorder, borderline personality disorder and certain psychoses where patients have dissociative experiences,” says Loretxu Bergouignan.

(Source: news.cision.com)

Filed under hippocampus frontal lobe body perception memory neuroimaging neuroscience science

149 notes

Shedding a light on pain: A technique developed by Stanford bioengineers could lead to new treatments
The mice in Scott Delp’s lab, unlike their human counterparts, can get pain relief from the glow of a yellow light.
Right now these mice are helping scientists to study pain – how and why it occurs and why some people feel it so intensely without any obvious injury. But Delp, a professor of bioengineering and mechanical engineering, hopes one day the work he does with these mice can also help people who are in chronic, debilitating pain.
"This is an entirely new approach to study a huge public health issue," Delp said. "It’s a completely new tool that is now available to neuroscientists everywhere." He is the senior author of a research paper published Feb. 16 in Nature Biotechnology.
A switch for pain
The mice are modified with gene therapy to have pain-sensing nerves that can be controlled by light. One color of light makes the mice more sensitive to pain. Another reduces pain. The scientists shone a light on the paws of mice through the Plexiglas bottom of the cage.
Graduate students Shrivats Iyer and Kate Montgomery, who led the study, say it opens the door to future experiments to understand the nature of pain and also touch and other sensations that are part of our daily lives but little understood.
"The fact that we can give a mouse an injection and two weeks later shine a light on its paw to change the way it senses pain is very powerful," Iyer said.
For example, increasing or decreasing the sensation of pain in these mice could help scientists understand why pain seems to continue in people after an injury has healed. Does persistent pain change those nerves in some way? If so, how can they be changed back to a state where, in the absence of an injury, they stop sending searing messages of pain to the brain?
Leaders at the National Institutes of Health agree that the work could have important implications for treating pain. “This powerful approach shows great potential for helping the millions who suffer pain from nerve damage,” said Linda Porter, the pain policy adviser at the National Institute of Neurological Disorders and Stroke and a leader of the NIH’s Pain Consortium.
"Now, with a flick of a switch, scientists may be able to rapidly test new pain-relieving medications and, one day, doctors may be able to use light to relieve pain," she said.
Accidental discovery
The researchers took advantage of a technique called optogenetics, which involves light-sensitive proteins called opsins that are inserted into the nerves. Optogenetics was developed by Delp’s colleague Karl Deisseroth, a co-author of the journal article. He has used the technique as a way of activating precise regions of the brain to better understand how the brain functions. Deisseroth is a professor of bioengineering, psychiatry and behavioral sciences.
Delp, who has an interest in muscles and movement, saw the potential for using optogenetics not just for studying the brain – interesting though those studies may be – but also for studying the many nerves outside the brain. These are the nerves that control movement, pain, touch and other sensations throughout our body, and that are involved in diseases such as amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s Disease.
A few years ago Stanford Bio-X, which encourages interdisciplinary projects such as this one, supported Delp and Deisseroth in their efforts to use optogenetics to control the nerves that excite muscles. In the process of doing that work, Delp said, his student at the time, Michael Llewellyn, occasionally found that he had placed the opsins into nerves that signal pain rather than those that control muscle.
That accident sparked a new line of research. Delp said, “We thought, ‘Wow, we’re getting pain neurons; that could be really important.’” He suggested that Montgomery and Iyer focus on those pain nerves that had been a byproduct of the muscle work.
A faster approach
A key component of the work was a new approach to quickly incorporate opsins into the nerves of mice. The researchers started with a virus that had been engineered to contain the DNA that produces the opsin. Then they injected those modified viruses directly into mouse nerves. Weeks later, only the nerves that control pain had incorporated the opsin proteins and would fire, or be less likely to fire, in response to different colors of light.
The speed of the viral approach makes it very flexible, both for this pain work and for future studies. Researchers are developing newer forms of opsins with different properties, such as responding to different colors of light. “Because we used a viral approach we could, in the future, quickly turn around and use newer opsins,” said Montgomery, who is a Stanford Bio-X fellow.
This entire project, which spans bioengineering, neuroscience and psychiatry, is one Delp says could never have happened without the environment at Stanford that supports collaboration across departments. The pain portion of the research came out of support from NeuroVentures, which was a project incubated within Bio-X to support the intersection of neuroscience and engineering or other disciplines. That project was so successful it has spun off into the Stanford Neurosciences Institute, of which Delp is now a deputy director.
Delp said that many challenges must be met before results of these experiments – either new drugs based on what they learn, or optogenetics directly – could become available to people but that he always has that as a goal.
"Developing a new therapy from the ground up would be incredibly rewarding," he said. "Most people don’t get to do that in their careers."
Delp and Deisseroth have started a company called Circuit Therapeutics to develop therapies based on optogenetics.

Shedding a light on pain: A technique developed by Stanford bioengineers could lead to new treatments

The mice in Scott Delp’s lab, unlike their human counterparts, can get pain relief from the glow of a yellow light.

Right now these mice are helping scientists to study pain – how and why it occurs and why some people feel it so intensely without any obvious injury. But Delp, a professor of bioengineering and mechanical engineering, hopes one day the work he does with these mice can also help people who are in chronic, debilitating pain.

"This is an entirely new approach to study a huge public health issue," Delp said. "It’s a completely new tool that is now available to neuroscientists everywhere." He is the senior author of a research paper published Feb. 16 in Nature Biotechnology.

A switch for pain

The mice are modified with gene therapy to have pain-sensing nerves that can be controlled by light. One color of light makes the mice more sensitive to pain. Another reduces pain. The scientists shone a light on the paws of mice through the Plexiglas bottom of the cage.

Graduate students Shrivats Iyer and Kate Montgomery, who led the study, say it opens the door to future experiments to understand the nature of pain and also touch and other sensations that are part of our daily lives but little understood.

"The fact that we can give a mouse an injection and two weeks later shine a light on its paw to change the way it senses pain is very powerful," Iyer said.

For example, increasing or decreasing the sensation of pain in these mice could help scientists understand why pain seems to continue in people after an injury has healed. Does persistent pain change those nerves in some way? If so, how can they be changed back to a state where, in the absence of an injury, they stop sending searing messages of pain to the brain?

Leaders at the National Institutes of Health agree that the work could have important implications for treating pain. “This powerful approach shows great potential for helping the millions who suffer pain from nerve damage,” said Linda Porter, the pain policy adviser at the National Institute of Neurological Disorders and Stroke and a leader of the NIH’s Pain Consortium.

"Now, with a flick of a switch, scientists may be able to rapidly test new pain-relieving medications and, one day, doctors may be able to use light to relieve pain," she said.

Accidental discovery

The researchers took advantage of a technique called optogenetics, which involves light-sensitive proteins called opsins that are inserted into the nerves. Optogenetics was developed by Delp’s colleague Karl Deisseroth, a co-author of the journal article. He has used the technique as a way of activating precise regions of the brain to better understand how the brain functions. Deisseroth is a professor of bioengineering, psychiatry and behavioral sciences.

Delp, who has an interest in muscles and movement, saw the potential for using optogenetics not just for studying the brain – interesting though those studies may be – but also for studying the many nerves outside the brain. These are the nerves that control movement, pain, touch and other sensations throughout our body, and that are involved in diseases such as amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s Disease.

A few years ago Stanford Bio-X, which encourages interdisciplinary projects such as this one, supported Delp and Deisseroth in their efforts to use optogenetics to control the nerves that excite muscles. In the process of doing that work, Delp said, his student at the time, Michael Llewellyn, occasionally found that he had placed the opsins into nerves that signal pain rather than those that control muscle.

That accident sparked a new line of research. Delp said, “We thought, ‘Wow, we’re getting pain neurons; that could be really important.’” He suggested that Montgomery and Iyer focus on those pain nerves that had been a byproduct of the muscle work.

A faster approach

A key component of the work was a new approach to quickly incorporate opsins into the nerves of mice. The researchers started with a virus that had been engineered to contain the DNA that produces the opsin. Then they injected those modified viruses directly into mouse nerves. Weeks later, only the nerves that signal pain had incorporated the opsin proteins and would fire, or be less likely to fire, in response to different colors of light.

The speed of the viral approach makes it very flexible, both for this pain work and for future studies. Researchers are developing newer forms of opsins with different properties, such as responding to different colors of light. “Because we used a viral approach we could, in the future, quickly turn around and use newer opsins,” said Montgomery, who is a Stanford Bio-X fellow.

This entire project, which spans bioengineering, neuroscience and psychiatry, is one Delp says could never have happened without the environment at Stanford that supports collaboration across departments. The pain portion of the research came out of support from NeuroVentures, which was a project incubated within Bio-X to support the intersection of neuroscience and engineering or other disciplines. That project was so successful it has spun off into the Stanford Neurosciences Institute, of which Delp is now a deputy director.

Delp said that many challenges must be met before results of these experiments – either new drugs based on what they learn, or optogenetics directly – could become available to people but that he always has that as a goal.

"Developing a new therapy from the ground up would be incredibly rewarding," he said. "Most people don’t get to do that in their careers."

Delp and Deisseroth have started a company called Circuit Therapeutics to develop therapies based on optogenetics.

Filed under optogenetics opsins pain neuroscience science

94 notes

Study identifies new drug target for chronic, touch-evoked pain

Researchers at the School of Medicine have identified a subset of nerve cells that mediates a form of chronic, touch-evoked pain called tactile allodynia, a condition that is resistant to conventional pain medication.

The discovery could point researchers to more fruitful efforts to develop effective drugs for the condition.

Touch-evoked pain occurs as part of a larger neuropathic pain condition arising from damage or disruption of nerve-cell circuits or signals caused by disorders such as alcoholism, diabetes, shingles and AIDS, or procedures such as spine surgery and chemotherapy. For patients with tactile allodynia, the slightest touch — a gentle caress or the brush of shirt against skin — can cause excruciating pain because changes in nerve-cell signals or networks trick the brain into mistaking touch for pain.

The study, published online Feb. 27 in Neuron, found that these “touch” neurons are different from the usual “pain” neurons that respond to stimuli such as cuts or bruises.

Unlike pain caused by such wounds, neuropathic pain is difficult to manage because little can be done to repair nerve damage. Managing it may require strong painkillers or combinations of treatments.

Common painkillers such as morphine have little effect on touch-evoked pain, possibly because they don’t target the touch neurons, the authors say. Morphine binds to specific protein-binding sites on pain neurons called mu opioid receptors, or MORs, and cuts off their signals so that the brain can no longer sense pain.

However, the touch neurons do not carry MORs, which is why morphine cannot bind to them and block the pain. Instead, they carry delta opioid receptors, or DORs, whose role in pain control has been unclear until recently.

"That’s been the problem so far; any type of severe pain you have, you go into the clinic and very likely you will be treated with morphine-like opioids," said Gregory Scherrer, PharmD, PhD, the senior author of the study and an assistant professor of anesthesia. "You can give some of these patients as much morphine as you want; it won’t work if the mu opioid receptor is not present on the neurons that underlie that type of pain."

There are currently no Food and Drug Administration-approved pain-control drugs that target DORs. Previous attempts at developing DOR-targeting drugs haven’t succeeded because researchers didn’t know what type of pain such drugs would be useful for, Scherrer said.

Two DOR-binding drugs developed for knee pain by Adolor Corp., a biotechnology firm, for instance, probably failed because there was no compelling evidence that DOR was present or involved. AstraZeneca, another pharmaceutical firm, also had a DOR program but recently stopped its research efforts, Scherrer added.

"Now that we have provided a rationale and mechanism supporting the utility of DOR agonists for cutaneous pain and tactile allodynia, these companies will be able to design trials more carefully to evaluate specifically the drugs’ efficacy against touch-evoked pain," he said.

Earlier studies by Scherrer and others hinted at the presence of special nerve fibers on the skin that might contribute to touch-evoked pain.

In the current study, Scherrer and colleagues used fluorescent mouse models to isolate these neurons and identify how they control touch-evoked pain. They found that DOR can play an inhibitory role in these neurons: When proteins bind to DOR, they cut off communication to the spinal cord, through which sensory signals travel to the brain.
DOR-carrying “touch” neurons pervade the skin and could easily be targeted by drugs in the form of skin patches or topical creams, Scherrer suggested.

"By contrast, most MOR-carrying neurons penetrate internal organs," he said. "That’s why morphine is effective in treating post-surgery pain, for example."

Scherrer and fellow researchers tested two different DOR-binding compounds individually on mice and found that both reduced the mice’s sensitivity to touch-evoked pain.

Preliminary studies also indicate that DOR-targeting drugs might not cause dramatic side effects like morphine does, especially if they can be used topically, Scherrer said.

"Morphine and other MOR-targeting drugs have myriad deleterious side effects — including addiction, respiratory depression, constipation, nausea and vomiting — that further limit their utility for chronic pain management," he said.

The next step is to determine whether DOR could be a target for other types of pain, such as arthritis pain, pain from bone cancer and muscle pain, Scherrer added.

The findings also suggest that the body’s opioid system — normally associated with pain and addiction — may also respond to other stimuli such as touch.

"We may have underestimated the importance of the opioid system and what can be achieved with drugs targeting other subtypes of opioid receptors," Scherrer said.

(Source: med.stanford.edu)

Filed under tactile allodynia pain neuropathic pain opioid receptors morphine neuroscience science

106 notes

Study debunks alcohol consumption assertions

ALCOHOL consumption is not a direct cause of cognitive impairment in older men later in life, a study conducted by the University of Western Australia has found. 

The study, published in the Journal of Neurology, used Mendelian randomisation to analyse the genetic data from 3,542 men between the ages of 65 and 83 years. 

The scientists measured the participants’ cognitive function three to eight years after recording their alcohol consumption. 

Lead author, Western Australian Centre for Health and Ageing Director and UWA Professor Osvaldo Almeida says the team investigated the triangular association between alcohol consumption, cognitive impairment and a genetic polymorphism that modulates the efficiency of a critical enzyme of alcohol metabolism. 

“We found a genetic variation that increases abstinence and decreases the total amount of alcohol consumed,” Prof Almeida says.

“If alcohol were a cause of cognitive impairment, one would expect that this genetic variation would be associated with lower risk of cognitive impairment in later life [because people with this genetic variation drink less or not at all]. 

“That was not the case. Hence, we concluded that the association between alcohol use and cognitive impairment is not due to a direct effect of alcohol.”
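
The instrumental-variable logic behind Mendelian randomisation can be illustrated with a toy simulation. Everything below is hypothetical (the variant frequency, effect sizes and "lifestyle" confounder are invented for illustration, not taken from the study); the point is only that when alcohol has no direct effect, the naive drinking–impairment correlation is still inflated by confounding, while a genetic variant that shifts drinking shows no association with impairment:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # simulated participants

# Hypothetical model: a genetic variant pushes people toward abstinence,
# an unmeasured "lifestyle" factor drives both drinking and impairment,
# and alcohol itself has NO direct effect on impairment.
variant = rng.binomial(1, 0.3, n)                       # carriers drink less
lifestyle = rng.normal(0.0, 1.0, n)                     # unmeasured confounder
drinking = 2.0 - variant + lifestyle + rng.normal(0.0, 1.0, n)
impairment = 0.5 * lifestyle + rng.normal(0.0, 1.0, n)  # no alcohol term

# Naive observational association: inflated by the shared confounder.
naive = np.corrcoef(drinking, impairment)[0, 1]

# Mendelian-randomisation check: the variant shifts drinking, so if alcohol
# caused impairment, the variant would predict impairment as well.
mr = np.corrcoef(variant.astype(float), impairment)[0, 1]

print(f"naive correlation: {naive:.2f}, variant-impairment: {mr:.3f}")
```

In this simulation the naive correlation comes out clearly positive while the variant–impairment correlation sits near zero, mirroring the study's reasoning that the observed alcohol–impairment link is not a direct causal effect.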

The study also presented results that are consistent with the possibility, but do not necessarily prove, that regular moderate drinking decreases the risk of cognitive impairment in older men.

Prof Almeida says the reasons for these results were unclear.

“But evidence from a randomised trial looking at the effect of the Mediterranean diet [which includes nuts, olive oil, vegetables and wine] on health outcomes is supportive of this hypothesis,” he says. 

“One may argue that people who drink in moderation have a lifestyle where, in general, things are done in moderation. 

“This approach to life may decrease health hazards in general.”

Prof Almeida says that although the results didn’t show alcohol affecting cognitive impairment, other studies have found excessive alcohol use to be associated with worse physical health, widowhood and poor social support. 

“[These studies] led to the assumption that alcohol must directly damage the brain and cause cognitive impairment,” he says. 

“This study shows that such an assumption is wrong. 

“It also suggests that alcohol may have a small protective effect that we need to understand better in order to develop new interventions that might contribute to prevent dementia without all the bad outcomes associated with alcohol.”

Filed under alcohol consumption alcohol cognitive impairment genetic polymorphism neuroscience science

204 notes

Blood Test Identifies Those At-Risk for Cognitive Decline, Alzheimer’s Within 3 Years

Researchers have discovered and validated a blood test that can predict with greater than 90 percent accuracy if a healthy person will develop mild cognitive impairment or Alzheimer’s disease within three years.

Described in the April issue of Nature Medicine, the study heralds the potential for developing treatment strategies for Alzheimer’s at an earlier stage, when therapy would be more effective at slowing or preventing onset of symptoms. It is the first known published report of blood-based biomarkers for preclinical Alzheimer’s.

The test identifies 10 lipids, or fats, in the blood that predict disease onset. It could be ready for use in clinical studies in as few as two years and, researchers say, other diagnostic uses are possible.

“Our novel blood test offers the potential to identify people at risk for progressive cognitive decline and can change how patients, their families and treating physicians plan for and manage the disorder,” says the study’s corresponding author Howard J. Federoff, MD, PhD, professor of neurology and executive vice president for health sciences at Georgetown University Medical Center.

There is no cure or effective treatment for Alzheimer’s. Worldwide, about 35.6 million individuals have the disease and, according to the World Health Organization, the number will double every 20 years to 115.4 million people with Alzheimer’s by 2050.

Federoff explains there have been many efforts to develop drugs to slow or reverse the progression of Alzheimer’s disease, but all of them have failed. He says one reason may be the drugs were evaluated too late in the disease process.

“The preclinical state of the disease offers a window of opportunity for timely disease-modifying intervention,” Federoff says. “Biomarkers such as ours that define this asymptomatic period are critical for successful development and application of these therapeutics.”

The study included 525 healthy participants aged 70 and older who gave blood samples upon enrolling and at various points in the study. Over the course of the five-year study, 74 participants met the criteria for either mild Alzheimer’s disease (AD) or a condition known as amnestic mild cognitive impairment (aMCI), in which memory loss is prominent. Of these, 46 were diagnosed upon enrollment and 28 developed aMCI or mild AD during the study (the latter group called converters).

In the study’s third year, the researchers selected 53 participants who developed aMCI/AD (including 18 converters) and 53 cognitively normal matched controls for the lipid biomarker discovery phase of the study. The lipids were not targeted before the start of the study, but rather, were an outcome of the study.

A panel of 10 lipids was discovered, which researchers say appears to reveal the breakdown of neural cell membranes in participants who develop symptoms of cognitive impairment or AD. The panel was subsequently validated using the remaining 21 aMCI/AD participants (including 10 converters), and 20 controls. Blinded data were analyzed to determine if the subjects could be characterized into the correct diagnostic categories based solely on the 10 lipids identified in the discovery phase.
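
The two-phase design described above can be sketched in miniature. The simulation below is purely illustrative: the lipid values are randomly generated, the group shift is invented, and a simple nearest-centroid rule stands in for whatever statistical model the authors actually used. It shows only how a panel discovered in one subset can be validated on held-out participants:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative only: simulate a 10-lipid panel in which future "converters"
# have shifted mean levels relative to cognitively normal controls. The
# 53-per-group sizes echo the study's discovery cohort; the shift size and
# the nearest-centroid rule are invented stand-ins.
n_per_group, n_lipids, shift = 53, 10, 1.2
normal = rng.normal(0.0, 1.0, (n_per_group, n_lipids))
converter = rng.normal(shift, 1.0, (n_per_group, n_lipids))

# "Discovery" phase: the first 40 of each group define a class centroid.
c_norm = normal[:40].mean(axis=0)
c_conv = converter[:40].mean(axis=0)

def classify(x):
    """Assign a lipid profile to the nearer centroid (0=normal, 1=converter)."""
    return int(np.linalg.norm(x - c_conv) < np.linalg.norm(x - c_norm))

# "Validation" phase: score the held-out participants against those centroids.
held = np.vstack([normal[40:], converter[40:]])
labels = np.array([0] * 13 + [1] * 13)
preds = np.array([classify(x) for x in held])
accuracy = float((preds == labels).mean())

print(f"held-out accuracy: {accuracy:.2f}")
```

When the two groups are genuinely separated in lipid space, even this crude rule classifies the held-out set with high accuracy; the hard part in practice is discovering a panel whose separation holds up in new participants.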

“The lipid panel was able to distinguish with 90 percent accuracy these two distinct groups: cognitively normal participants who would progress to MCI or AD within two to three years, and those who would remain normal in the near future,” Federoff says.

The researchers examined if the presence of the APOE4 gene, a known risk factor for developing AD, would contribute to accurate classification of the groups, but found it was not a significant predictive factor in this study.

“We consider our results a major step toward the commercialization of a preclinical disease biomarker test that could be useful for large-scale screening to identify at-risk individuals,” Federoff says. “We’re designing a clinical trial where we’ll use this panel to identify people at high risk for Alzheimer’s to test a therapeutic agent that might delay or prevent the emergence of the disease.”

Filed under alzheimer's disease neurodegeneration memory cognitive decline blood test neuroscience medicine science

139 notes

Protein reelin rescues cognitive impairment in animal models of Alzheimer’s disease

The scientists Eduardo Soriano and Lluís Pujadas, from the University of Barcelona (UB), and the “Centro de Investigación Biomédica en Red sobre Enfermedades Neurodegenerativas” (CIBERNED) have led research into the role of reelin in animal models of Alzheimer’s disease.

Published today in the journal Nature Communications, the study demonstrates how an increase in the levels of reelin—a protein that is essential for cerebral cortex plasticity—has the capacity to restore cognitive capacity in mouse models of Alzheimer’s disease, delaying amyloid-beta (Αβ) fibril formation in vitro and reducing the accumulation of amyloid deposits in the brains of animals affected by this disease.

The study, which was started four years ago, has involved the collaboration of members of the Peptides and Proteins lab at the Institute for Research in Biomedicine (IRB), namely Bernat Serra-Vidal, PhD student, Ernest Giralt, group leader, and Natàlia Carulla, associate researcher whose investigation focuses on the aggregation of Aβ. Alzheimer’s disease, which affects approximately 500,000 people in Spain, is characterised by the loss of neural connections and by neuronal death, both associated mainly with the formation of senile plaques (extracellular deposits of Aβ) and the presence of neurofibrillary tangles (intracellular deposits of tau protein).

In the IRB lab, researchers have performed experiments in vitro to determine whether there is an interaction between Aβ aggregation and reelin. These assays have revealed that reelin interacts with the Aβ peptide, delaying the formation of Aβ fibrils until it is trapped within them. “When reelin becomes trapped in Aβ fibrils, it loses its capacity to strengthen synaptic plasticity. This explains why an increase in reelin expression in the brain may be beneficial,” explain the authors of the study.

The hypotheses from the work in vitro have been tested in vivo using experimental animals. This study is the first to demonstrate a neuroprotective effect of reelin in neurodegenerative disease and, in addition, offers a possible explanation for this protective role.

Filed under alzheimer's disease animal model cognitive impairment reelin beta amyloid neuroscience science

137 notes

Ever-So-Slight Delay Improves Decision-Making Accuracy

Columbia University Medical Center (CUMC) researchers have found that decision-making accuracy can be improved by postponing the onset of a decision by a mere fraction of a second. The results could further our understanding of neuropsychiatric conditions characterized by abnormalities in cognitive function and lead to new training strategies to improve decision-making in high-stakes environments. The study was published in the March 5 online issue of the journal PLoS One.

“Decision making isn’t always easy, and sometimes we make errors on seemingly trivial tasks, especially if multiple sources of information compete for our attention,” said first author Tobias Teichert, PhD, a postdoctoral research scientist in neuroscience at CUMC at the time of the study and now an assistant professor of psychiatry at the University of Pittsburgh. “We have identified a novel mechanism that is surprisingly effective at improving response accuracy.”
The mechanism requires that decision-makers do nothing—just briefly. “Postponing the onset of the decision process by as little as 50 to 100 milliseconds enables the brain to focus attention on the most relevant information and block out irrelevant distractors,” said last author Jack Grinband, PhD, associate research scientist in the Taub Institute and assistant professor of clinical radiology (physics). “This way, rather than working longer or harder at making the decision, the brain simply postpones the decision onset to a more beneficial point in time.”

In making decisions, the brain integrates many small pieces of potentially contradictory sensory information. “Imagine that you’re coming up to a traffic light—the target—and need to decide whether the light is red or green,” said Dr. Teichert. “There is typically little ambiguity, and you make the correct decision quickly, in a matter of tens of milliseconds.”

The decision process itself, however, does not distinguish between relevant and irrelevant information. Hence, a task is made more difficult if irrelevant information—a distractor—interferes with the processing of the target. Distractors are present all the time; in this case, they might come in the form of traffic lights regulating traffic in other lanes. Though the brain is able to enhance relevant information and filter out distractions, these mechanisms take time. If the decision process starts while the brain is still processing irrelevant information, errors can occur.

Studies have shown that response accuracy can be improved by prolonging the decision process, to allow the brain time to collect more information. Because accuracy is increased at the cost of longer reaction times, this process is referred to as the “speed-accuracy trade-off.” The researchers thought that a more effective way to reduce errors might be to delay the decision process so that it starts out with better information.

The research team conducted two experiments to test this hypothesis. In the first, subjects were shown what looked like a swarm of randomly moving dots (the target stimulus) on a computer monitor and were asked to judge whether the overall motion was to the left or right. A second and brighter set of moving dots (the distractor) appeared simultaneously in the same location, obscuring the motion of the target. When the distractor dots moved in the same direction as the target dots, subjects performed with near-perfect accuracy, but when the distractor dots moved in the opposite direction, the error rate increased. The subjects were asked to perform the task either as quickly or as accurately as possible; they were free to respond at any time after the onset of the stimulus.

The second experiment was similar to the first, except that the subjects also heard regular clicks, indicating when they had to respond. The time allowed for viewing the dots varied between 17 and 500 milliseconds. This condition simulates real-life situations, such as driving, where the time to respond is beyond the driver’s control. “Manipulating how long the subject viewed the stimulus before responding allowed us to determine how quickly the brain is able to block out the distractors and focus on the target dots,” said Dr. Grinband.

“In this situation, it takes about 120 milliseconds to shift attention from one stimulus (the bright distractors) to another (the darker targets),” said Dr. Grinband. “To our knowledge, that’s something that no one has ever measured before.”

The experiments also revealed that it’s more beneficial to delay rather than prolong the decision process. The delay allows attention to be focused on the target stimulus and helps prevent irrelevant information from interfering with the decision process. “Basically, by delaying decision onset—simply by doing nothing—you are more likely to make a correct decision,” said Dr. Teichert.
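
The benefit of delaying rather than prolonging can be illustrated with a toy evidence-accumulation simulation (all parameter values, including the 120-millisecond attention-shift window, are invented stand-ins, not the authors' model). Early samples are dominated by an incongruent distractor, so an accumulator that starts late discards the misleading evidence:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy accumulator, not the authors' model: for the first `switch_ms`
# milliseconds attention is captured by an incongruent distractor, so
# evidence samples have negative drift; afterwards the target dominates
# and drift turns positive.
def accuracy(onset_ms, trials=20_000, total_ms=300, switch_ms=120,
             drift=0.02, noise=0.5):
    correct = 0
    for _ in range(trials):
        t = np.arange(onset_ms, total_ms)
        sign = np.where(t < switch_ms, -1.0, 1.0)  # distractor, then target
        evidence = (sign * drift + rng.normal(0.0, noise, t.size)).sum()
        correct += evidence > 0                    # positive sum = correct
    return correct / trials

early = accuracy(onset_ms=0)    # decision process starts immediately
late = accuracy(onset_ms=120)   # onset postponed past the attention shift

print(f"start at 0 ms: {early:.3f}, start at 120 ms: {late:.3f}")
```

In this toy setup the delayed accumulator is correct noticeably more often, even though it integrates fewer samples overall, because the samples it skips were actively misleading.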

Finally, the results showed that decision onset is, to some extent, under cognitive control. “The subjects automatically used this mechanism to improve response accuracy,” said Dr. Teichert. “However, we don’t think that they were aware that they were doing so. The process seems to go on behind the scenes. We hope to devise training strategies to bring the mechanism under conscious control.”

“This might be the first scientific study to justify procrastination,” Dr. Teichert said. “On a more serious note, our study provides important insights into fundamental brain processes and yields clues as to what might be going wrong in diseases such as ADHD and schizophrenia. It also could lead to new training strategies to improve decision making in complex high-stakes environments, such as air traffic control towers and military combat.”

Ever-So-Slight Delay Improves Decision-Making Accuracy

Columbia University Medical Center (CUMC) researchers have found that decision-making accuracy can be improved by postponing the onset of a decision by a mere fraction of a second. The results could further our understanding of neuropsychiatric conditions characterized by abnormalities in cognitive function and lead to new training strategies to improve decision-making in high-stake environments. The study was published in the March 5 online issue of the journal PLoS One.

“Decision making isn’t always easy, and sometimes we make errors on seemingly trivial tasks, especially if multiple sources of information compete for our attention,” said first author Tobias Teichert, PhD, a postdoctoral research scientist in neuroscience at CUMC at the time of the study and now an assistant professor of psychiatry at the University of Pittsburgh. “We have identified a novel mechanism that is surprisingly effective at improving response accuracy.”

The mechanism requires that decision-makers do nothing—just briefly. “Postponing the onset of the decision process by as little as 50 to 100 milliseconds enables the brain to focus attention on the most relevant information and block out irrelevant distractors,” said last author Jack Grinband, PhD, associate research scientist in the Taub Institute and assistant professor of clinical radiology (physics). “This way, rather than working longer or harder at making the decision, the brain simply postpones the decision onset to a more beneficial point in time.”

In making decisions, the brain integrates many small pieces of potentially contradictory sensory information. “Imagine that you’re coming up to a traffic light—the target—and need to decide whether the light is red or green,” said Dr. Teichert. “There is typically little ambiguity, and you make the correct decision quickly, in a matter of tens of milliseconds.”

The decision process itself, however, does not distinguish between relevant and irrelevant information. Hence, a task is made more difficult if irrelevant information—a distractor—interferes with the processing of the target. Distractors are present all the time; in this case, they might take the form of traffic lights regulating traffic in other lanes. Though the brain is able to enhance relevant information and filter out distractions, these mechanisms take time. If the decision process starts while the brain is still processing irrelevant information, errors can occur.

Studies have shown that response accuracy can be improved by prolonging the decision process, to allow the brain time to collect more information. Because accuracy is increased at the cost of longer reaction times, this process is referred to as the “speed-accuracy trade-off.” The researchers thought that a more effective way to reduce errors might be to delay the decision process so that it starts out with better information.
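The two strategies can be contrasted with a toy evidence-accumulation (drift-diffusion) simulation. This is not the authors’ model — just an illustrative sketch, with all parameter values invented: evidence drifts toward the correct answer, but for the first ~120 ms the unfiltered distractor pushes it the wrong way. “Prolonging” raises the decision bound; “delaying” skips the contaminated early samples.

```python
import random

def simulate(n_trials=2000, bound=30.0, drift=0.2, noise=1.0,
             distractor_drift=-0.4, distractor_ms=120, onset_delay=0):
    """Accumulate noisy evidence (1 sample per ms) until it hits +/-bound.
    The distractor pushes evidence the wrong way for its first
    `distractor_ms` milliseconds; delaying onset skips those samples."""
    rng = random.Random(42)
    correct = 0
    for _ in range(n_trials):
        x, t = 0.0, onset_delay
        while abs(x) < bound:
            d = drift + (distractor_drift if t < distractor_ms else 0.0)
            x += d + rng.gauss(0, noise)
            t += 1
        correct += x > 0  # positive bound = correct choice
    return correct / n_trials

base = simulate()                    # decide immediately, normal bound
slow = simulate(bound=45.0)          # prolong: raise the bound (speed-accuracy trade-off)
delayed = simulate(onset_delay=120)  # delay onset until the distractor is filtered out
print(base, slow, delayed)
```

In this sketch, the delayed condition starts accumulating only after the distractor’s influence has passed, so it reaches near-ceiling accuracy without raising the bound — mirroring the paper’s point that postponing decision onset can beat simply collecting evidence for longer.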

The research team conducted two experiments to test this hypothesis. In the first, subjects were shown what looked like a swarm of randomly moving dots (the target stimulus) on a computer monitor and were asked to judge whether the overall motion was to the left or right. A second and brighter set of moving dots (the distractor) appeared simultaneously in the same location, obscuring the motion of the target. When the distractor dots moved in the same direction as the target dots, subjects performed with near-perfect accuracy, but when the distractor dots moved in the opposite direction, the error rate increased. The subjects were asked to perform the task either as quickly or as accurately as possible; they were free to respond at any time after the onset of the stimulus.

The second experiment was similar to the first, except that the subjects also heard regular clicks, indicating when they had to respond. The time allowed for viewing the dots varied between 17 and 500 milliseconds. This condition simulates real-life situations, such as driving, where the time to respond is beyond the driver’s control. “Manipulating how long the subject viewed the stimulus before responding allowed us to determine how quickly the brain is able to block out the distractors and focus on the target dots,” said Dr. Grinband.

“In this situation, it takes about 120 milliseconds to shift attention from one stimulus (the bright distractors) to another (the darker targets),” said Dr. Grinband. “To our knowledge, that’s something that no one has ever measured before.”

The experiments also revealed that it’s more beneficial to delay rather than prolong the decision process. The delay allows attention to be focused on the target stimulus and helps prevent irrelevant information from interfering with the decision process. “Basically, by delaying decision onset—simply by doing nothing—you are more likely to make a correct decision,” said Dr. Teichert.

Finally, the results showed that decision onset is, to some extent, under cognitive control. “The subjects automatically used this mechanism to improve response accuracy,” said Dr. Teichert. “However, we don’t think that they were aware that they were doing so. The process seems to go on behind the scenes. We hope to devise training strategies to bring the mechanism under conscious control.”

“This might be the first scientific study to justify procrastination,” Dr. Teichert said. “On a more serious note, our study provides important insights into fundamental brain processes and yields clues as to what might be going wrong in diseases such as ADHD and schizophrenia. It also could lead to new training strategies to improve decision making in complex high-stakes environments, such as air traffic control towers and military combat.”

Filed under decision making attention cognition psychology neuroscience science

366 notes

Inherited Alzheimer’s damage greater decades before symptoms appear

The progression of Alzheimer’s may slow once symptoms appear and do significant damage, according to a study investigating an inherited form of the disease.

In a paper published in the journal Science Translational Medicine, Professor Colin Masters from the Florey Institute of Neuroscience and Mental Health and the University of Melbourne – with colleagues in the UK and US – found that rapid neuronal damage begins 10 to 20 years before symptoms appear.

“As part of this research we have observed other changes in the brain that occur when symptoms begin to appear. There is actually a slowing of the neurodegeneration,” said Professor Masters.

Autosomal-dominant Alzheimer’s affects families with a genetic mutation predisposing them to the crippling disease. These families provide crucial insight into the development of Alzheimer’s because they can be identified years before symptoms develop. The information gleaned from this group will also influence treatment offered to those living with the more common age-related version. Only about one per cent of those with Alzheimer’s have the genetic type of the disease.

The next part of the study involves a clinical trial. Using a range of imaging techniques (MRI and PET) and analysis of blood and cerebrospinal fluid, individuals from the US, UK and Australia will be observed as they trial new drugs to test their safety, side effects and changes within the brain.

“As part of an international study, family members are invited to be part of a trial in which two experimental drugs are offered many years before symptoms appear,” Professor Masters says. “It’s going to be very interesting to see how clinical intervention affects this group of patients in the decades before symptoms appear.”

The Florey is looking to recruit more participants for the Dominantly Inherited Alzheimer Network (DIAN) study. Those who either know they have a genetic mutation that causes autosomal-dominant Alzheimer’s, or who don’t know their genetic status but have a parent or sibling with the mutation, are invited to email: dian@florey.edu.au

Filed under alzheimer's disease neurodegeneration neuroimaging neuroscience science
