Neuroscience

Articles and news from the latest research reports.

Hydrocephalus: sensors monitor cerebral pressure

Urinary incontinence, a shuffling gait, and deteriorating reasoning skills are all indicators of a Parkinsonian- or Alzheimer-type disease. An equally plausible explanation is hydrocephalus, commonly known as “water on the brain.” In this condition, the brain either produces too much cerebrospinal fluid or cannot drain it adequately. The consequence: pressure in the brain rises sharply, resulting in damage. A shunt system – a kind of silicone tube that physicians implant into the patient’s brain – provides relief, draining excess fluid away, for example into the abdominal cavity. The heart of this shunt system is a valve: if the pressure rises above a threshold value, the valve opens; if it falls again, the valve closes.
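
The valve rule described above can be sketched as a simple threshold switch. A common way to implement such a rule is with two thresholds (hysteresis), so the valve does not chatter as pressure hovers near a single cut-off; the numbers below are purely illustrative, not clinical values:

```python
# Toy model of a shunt valve's threshold behavior (illustrative values).
OPEN_ABOVE = 15.0   # valve opens when pressure (mmHg) exceeds this
CLOSE_BELOW = 10.0  # valve closes again once pressure falls below this

def step(pressure, valve_open):
    """Return the valve state after one pressure reading."""
    if pressure > OPEN_ABOVE:
        return True
    if pressure < CLOSE_BELOW:
        return False
    return valve_open  # between thresholds: keep the current state

readings = [8.0, 12.0, 16.0, 12.0, 9.0]
state = False
states = []
for p in readings:
    state = step(p, state)
    states.append(state)
# states == [False, False, True, True, False]
```

The two-threshold design means the valve stays open while pressure drains down through the intermediate band, then shuts once it is safely low.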

In rare cases, over-drainage may occur. The cerebral pressure lowers too much, the cerebral ventricles are virtually squeezed out. Until now, physicians could only detect and verify such over-drainage through elaborate and costly computer and magnetic resonance tomography.

Cerebral pressure measurable anytime

A new kind of sensor changes this: implanted into the patient’s brain along with the shunt system, it lets physicians read out cerebral pressure with a handheld meter – within seconds, at any time, and without complex examinations. Researchers at the Fraunhofer Institute for Microelectronic Circuits and Systems IMS in Duisburg, working jointly with Christoph Miethke GmbH and Aesculap AG, engineered these sensors.

If the patient complains of discomfort, the physician merely needs to place the handheld meter against the patient’s head. The device sends out electromagnetic waves that supply the sensor in the shunt with power – the implant is “awakened,” measures temperature and pressure in the cerebral fluid, and transmits these data back to the handheld device. If the pressure lies outside the desired range, the physician can adjust the valve on the shunt system from the outside as needed, tailoring it individually to the patient. “The sensor is an active implant, which also takes over measurement functions, in contrast to a stent or a tooth implant,” says Michael Görtz, head of pressure sensor technology at IMS.
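
The read-out sequence described above – energize the implant through the skin, take a measurement, and compare it against the desired range – can be sketched as follows. The class and method names are hypothetical illustrations, not the actual Fraunhofer/Miethke interface, and the pressure limits are invented:

```python
from dataclasses import dataclass

@dataclass
class Telemetry:
    pressure_mmhg: float
    temperature_c: float

class ImplantSensor:
    """Hypothetical passive implant: powered only while interrogated."""
    def __init__(self, pressure_mmhg, temperature_c):
        self._p = pressure_mmhg
        self._t = temperature_c
        self.awake = False

    def energize(self):
        # The reader's electromagnetic field powers the implant.
        self.awake = True

    def read(self):
        if not self.awake:
            raise RuntimeError("implant not powered")
        return Telemetry(self._p, self._t)

def handheld_read(sensor, lo=5.0, hi=15.0):
    """Wake the implant, read it, and flag out-of-range pressure."""
    sensor.energize()
    t = sensor.read()
    in_range = lo <= t.pressure_mmhg <= hi
    return t, in_range

t, ok = handheld_read(ImplantSensor(18.0, 37.1))
# ok is False: 18.0 mmHg exceeds the (illustrative) 15.0 mmHg limit
```

An out-of-range flag is where the adjustable valve comes in: the physician would respond by resetting the valve's opening pressure from outside.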

The implant must be biocompatible so that the body does not reject it. The researchers also had to ensure that the body would not attack the implant. “The immune response behaves just like an aggressive medium that would even dissolve the silicon of the electronics over the course of time,” explains Görtz. Miethke therefore completely encases the implant in a thin metal housing. “We can still supply it with power from the outside through the metal casing, measure cerebral pressure through the housing, and transmit the recorded data out through the metal to the reader,” Görtz explains. To make this possible, the right metal had to be found, and the casing may be no thicker than the wall of a soft-drink can – in other words, well under one millimeter. The researchers also developed the handheld reading device, together with the electronics through which it communicates with the sensor.

The sensor is ready for serial production and has already been approved by Miethke; the company has initiated the market launch of the system. “The sensor lays the basis for further development toward theranostic implants – a neologism derived from the words ‘therapy’ and ‘diagnostics.’ In a few years, the sensor could then not only record cerebral pressure and derive a diagnosis from it, but also adjust the pressure independently on its own and thus take over the therapy process,” says Görtz.

Filed under hydrocephalus sensor implant cerebral pressure neuroscience science

Cleveland Clinic identifies mechanism in Alzheimer’s-related memory loss

Cleveland Clinic researchers have identified a protein in the brain that plays a critical role in the memory loss seen in Alzheimer’s patients, according to a study to be published in the journal Nature Neuroscience and posted online today.

The protein – Neuroligin-1 (NLGN1) – is known to be involved in memory formation; this is the first time it’s been linked to amyloid-associated memory loss.

In Alzheimer’s disease, amyloid beta proteins accumulate in the brain and induce inflammation. This inflammation leads to certain gene modifications that disrupt the functioning of synapses in the brain, leading to memory loss.

Using animal models, Cleveland Clinic researchers have discovered that during this neuroinflammatory process, the epigenetic modification of NLGN1 disrupts the synaptic network in the brain, which is responsible for developing and maintaining memories. Destroying this network can lead to the type of memory loss seen in Alzheimer’s patients.

"Alzheimer’s is a challenging disease that researchers have been approaching from all angles," said Mohamed Naguib, M.D., the Cleveland Clinic physician who led the study. "This discovery could provide us with a new approach for preventing and treating Alzheimer’s disease."

Previous studies from this group of researchers have also identified a novel compound called MDA7, which can potentially stop the neuroinflammatory process that leads to the modification of NLGN1. Treatment with the compound restored cognition, memory and synaptic plasticity – a key neurological foundation of learning and memory – in an animal model. Significant preliminary work for the first-in-man study has been completed for MDA7 including in-vitro studies and preliminary clinical toxicology and pharmacokinetic work. The Cleveland Clinic plans to initiate Phase I human studies on the safety of this class of compounds in the near future.

Alzheimer’s disease is an irreversible, fatal brain disease that slowly destroys memory and thinking skills. About 5 million people in the United States have Alzheimer’s disease. With the aging of the population, and without successful treatment, there will be 16 million Americans and 106 million people worldwide with Alzheimer’s by 2050, according to the 2011 Alzheimer’s Disease Facts and Figures report from the Alzheimer’s Association.

(Source: eurekalert.org)

Filed under alzheimer's disease memory loss memory formation beta amyloid neuroligin 1 neuroscience science

Drugs that weaken traumatic memories hold promise for PTSD treatment

Memories of traumatic events often last a lifetime because they are so difficult to treat through behavioral approaches. A preclinical study in mice published by Cell Press January 16th in the journal Cell reveals that drugs known as histone deacetylase inhibitors (HDACis) can enhance the brain’s ability to permanently replace old traumatic memories with new memories, opening promising avenues for the treatment of posttraumatic stress disorder (PTSD) and other anxiety disorders.

Caption: Metabolic activity (green and red colors) in the hippocampus (white dotted line) of animals that underwent extinction training in combination with HDACis (right) is significantly higher than in animals that underwent extinction training alone (left). Metabolic activity serves to estimate the learning capacity of an animal. Credit: Cell, Gräff et al.

"Psychotherapy is often used for treating PTSD, but it doesn’t always work, especially when the traumatic events occurred many years earlier," says senior study author Li-Huei Tsai of the Massachusetts Institute of Technology. "This study provides a mechanism explaining why old memories are difficult to extinguish and shows that HDACis can facilitate psychotherapy to treat anxiety disorders such as PTSD."

One common treatment for anxiety disorders is exposure-based therapy, which involves exposing patients to fear-evoking thoughts or events in a safe environment. This process reactivates the traumatic memory, opening a short time window during which the original memory can be disrupted and replaced with new memories. Exposure-based therapy is effective when the traumatic events occurred recently, but until now, it was not clear whether it would also be effective for older traumatic memories.

To address this question, Tsai and her team used a protocol for studying fear responses associated with traumatic memories. In the first phase, the researchers exposed mice to a tone followed by an electric footshock. Once the mice learned to associate these two events, they began to freeze in fear upon hearing the tone by itself, even when they did not receive a shock. Using an extinction protocol, which is similar to exposure-based therapy, the researchers then repeatedly presented the tone without the shock to test whether the mice could unlearn the association and stop freezing in response to the tone. The extinction protocol was successful for mice that had been exposed to the tone-shock pairing just one day earlier, but it was not effective for mice that had originally formed the traumatic memory one month earlier. The researchers hypothesized that epigenetic modification of genes involved in learning and memory might be responsible for this diminished response to treatment for older memories.

The researchers tested whether HDACis, which promote long-lasting activation of genes involved in learning and memory, could help replace old traumatic memories with new memories. Mice previously exposed to the tone-shock pairing received HDACis and then underwent the extinction protocol. These mice learned to stop freezing in response to the tone, even when they originally formed the traumatic memory one month earlier. “Collectively, our findings suggest that exposure-based therapy alone does not effectively weaken traumatic memories that were formed a long time ago, but that HDACis can be combined with exposure-based therapy to substantially improve treatment for the most enduring traumatic memories,” Tsai says.
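
The conditioning and extinction protocol above maps naturally onto a textbook Rescorla-Wagner learning model. The sketch below illustrates the logic only, not the study's analysis: memory age is crudely modeled as a lower learning rate during extinction, and HDACi treatment as restoring that rate. All parameters are invented:

```python
# Minimal Rescorla-Wagner-style sketch of conditioning and extinction.
def trial(v, shock_present, alpha):
    """One trial: associative strength v moves toward the outcome (1 or 0)."""
    target = 1.0 if shock_present else 0.0
    return v + alpha * (target - v)

# Conditioning: tone repeatedly paired with footshock.
v = 0.0
for _ in range(10):
    v = trial(v, shock_present=True, alpha=0.3)
conditioned = v  # strong tone-shock association (near 1.0)

# Extinction: tone alone, no shock.
v_recent = v_remote = v_remote_hdaci = conditioned
for _ in range(10):
    v_recent = trial(v_recent, False, alpha=0.3)              # day-old memory
    v_remote = trial(v_remote, False, alpha=0.05)             # month-old memory
    v_remote_hdaci = trial(v_remote_hdaci, False, alpha=0.3)  # month-old + HDACi

# v_recent and v_remote_hdaci fall toward zero (extinction succeeds),
# while v_remote stays high (the old memory resists extinction).
```

The qualitative pattern, not the numbers, is the point: with a reduced learning rate, ten extinction trials barely dent the old association, mirroring the month-old-memory mice.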

(Source: eurekalert.org)

Filed under PTSD histone deacetylase inhibitors anxiety disorders traumatic memories psychology neuroscience science

Breakthrough in Understanding the Secret Life of Prion Molecules

New research from David Westaway, PhD, of the University of Alberta, and Jiri Safar, MD, of Case Western Reserve University School of Medicine, has uncovered a quality control mechanism in brain cells that may help keep deadly neurological diseases in check for months or years.

Image credit: STEVE GSCHMEISSNER / SPL

The findings, published in The Journal of Clinical Investigation, “present a breakthrough in understanding the secret life of prion molecules in the brain and may offer a new way to treat prion diseases,” said Westaway, Director of the Centre for Prions and Protein Folding Diseases and Professor of Neurology in the Faculty of Medicine and Dentistry at the University of Alberta.

Filed under prion disease neurodegenerative diseases creutzfeldt-jakob disease chronic wasting disease medicine science

Fighting Flies

When one encounters a group of fruit flies invading the kitchen, it probably appears as if the whole group is vying for a sweet treat. But a closer look would likely reveal that the male flies in the group are putting up more of a fight, particularly if ripe fruit or female flies are present. According to the latest studies from the fly laboratory of California Institute of Technology (Caltech) biologist David Anderson, male Drosophila, commonly known as fruit flies, fight more than their female counterparts because they have special cells in their brains that promote fighting. These cells appear to be absent in the brains of female fruit flies.

"The sex-specific cells that we identified exert their effects on fighting by releasing a particular type of neuropeptide, or hormone, that has also been implicated in aggression in mammals including mouse and rat," says Anderson, the Seymour Benzer Professor of Biology at Caltech, and corresponding author of the study. "In addition, there are some recent papers implicating increased levels of this hormone in people with personality disorders that lead to higher levels of aggression."

The team’s findings are outlined in the January 16 issue of the journal Cell.

Filed under fruit flies fighting aggression neuropeptide neurons tachykinin neuroscience science

Brain interactions differ between religious and non-religious subjects

An Auburn University researcher teamed up with the National Institutes of Health to study how brain networks shape an individual’s religious belief, finding that brain interactions were different between religious and non-religious subjects.

Gopikrishna Deshpande, an assistant professor in the Department of Electrical and Computer Engineering in Auburn’s Samuel Ginn College of Engineering, and the NIH researchers recently published their results in the journal Brain Connectivity.

The group found differences in brain interactions involving the theory of mind, or ToM, brain network, which underlies the ability to relate one’s own beliefs, intents, and desires to those of others. Individuals with stronger ToM activity were found to be more religious. Deshpande says this supports the hypothesis that the development of ToM abilities in humans during evolution may have given rise to religion in human societies.

“Religious belief is a unique human attribute observed across different cultures in the world, even in those cultures which evolved independently, such as Mayans in Central America and aboriginals in Australia,” said Deshpande, who is also a researcher at Auburn’s Magnetic Resonance Imaging Research Center. “This has led scientists to speculate that there must be a biological basis for the evolution of religion in human societies.”

Deshpande and the NIH scientists were following up a study reported in the Proceedings of the National Academy of Sciences, which used functional magnetic resonance imaging, or fMRI, to scan the brains of both self-declared religious and non-religious individuals as they contemplated three psychological dimensions of religious beliefs.

The fMRI – which allows researchers to infer specific brain regions and networks that become active when a person performs a certain mental or physical task – showed that different brain networks were activated by the three psychological dimensions; however, the amount of activation was not different in religious as compared to non-religious subjects.

(Source: wireeagle.auburn.edu)

Filed under religious belief theory of mind neuroimaging religion psychology neuroscience science

Scientists discover two proteins that control chandelier cell architecture

Chandelier cells are neurons that use their unique shape to act like master circuit breakers in the brain’s cerebral cortex. These cells have dozens, often hundreds, of branching axonal projections – output channels from the cell body of the neuron – that lend the full structure a chandelier-like appearance. Each of those projections extends to a nearby excitatory neuron. This unique structure allows just one inhibitory chandelier cell to block or modify the output of hundreds of other cells at a time.

Without such large-scale inhibition, some circuits in the brain would seize up, as occurs in epilepsy. Abnormal chandelier cell function also has been implicated in schizophrenia. Yet after nearly 40 years of research, little is known about how these important inhibitory neurons develop and function.

In work published today in Cell Reports, a team led by CSHL Professor Linda Van Aelst identifies two proteins that control the structure of chandelier cells, and offers insight into how they are regulated.

To study the architecture of chandelier cells, Van Aelst and colleagues first had to find a way to visualize them. Generally, scientists try to find a unique marker, a sort of molecular signature, to distinguish one type of neuron from the many others in the brain. But no markers are known for chandelier cells. So Van Aelst and Yilin Tai, Ph.D., lead author on the study, developed a way to label chandelier cells within the mouse brain.

Using this new method, the team found two proteins, DOCK7 and ErbB4, whose activity is essential in processes that give chandelier cells their striking shape. When the function of these proteins is disrupted, chandelier cells have fewer, more disorganized, axonal projections. Van Aelst and colleagues used a series of biochemical experiments to explore the relationship between the two proteins. They found that DOCK7 activates ErbB4 through a previously unknown mechanism; this activation must occur if chandelier cells are to develop their characteristic architecture.

Moving forward, Van Aelst says she is interested in exploring the relationship between structure and function of chandelier cells. “We envisage that morphological changes are likely to impact the function of chandelier cells, and consequently, alter the activity of cortical networks. We believe irregularities in these networks contribute to the cognitive abnormalities characteristic of schizophrenia and epilepsy. As we move forward, therefore, we hope that our findings will improve our understanding of these devastating neurological disorders.”

Filed under chandelier cells cerebral cortex neurons proteins DOCK7 ErbB4 neuroscience science

How Vision Captures Sound Now Somewhat Uncertain

When listening to someone speak, we also rely on lip-reading and gestures to help us understand what the person is saying.

To link these sights and sounds, the brain has to know where each stimulus is located so it can coordinate processing of related visual and auditory aspects of the scene. That’s how we can single out a conversation when it’s one of many going on in a room.

While past research has shown that the brain creates a similar code for vision and hearing to integrate this information, Duke University researchers have found the opposite: neurons in a particular brain region respond differently, not similarly, depending on whether the stimulus is visual or auditory.

The finding, which posted Jan. 15 in the journal PLOS ONE, provides insight into how vision captures the location of perceived sound.

The idea among brain researchers has been that the neurons in a brain area known as the superior colliculus employ a “zone defense” when signaling where stimuli are located. That is, each neuron monitors a particular region of an external scene and responds whenever a stimulus — either visual or auditory — appears in that location. Through teamwork, the ensemble of neurons provides coverage of the entire scene.

But the study by Duke researchers found that auditory neurons don’t behave that way. When the target was a sound, the neurons responded as if playing a game of tug-of-war, said lead author Jennifer Groh, a professor of psychology and neuroscience at Duke.   

"The neurons responded to nearly all sound locations. But how vigorously they responded depended on where the sound was," Groh said. "It’s still teamwork, but a different kind. It’s pretty cool that the neurons can use two different strategies, play two different games, at the same time."

Groh said the finding opens up a mystery: if neurons respond differently to visual and auditory stimuli at similar locations in space, then the underlying mechanism of how vision captures sound is now somewhat uncertain.

"Which neurons are ‘on’ tells you where a visual stimulus is located, but how strongly they’re ‘on’ tells you where an auditory stimulus is located," said Groh, who conducted the study with co-author Jung Ah Lee, a postdoctoral fellow at Duke.

"Both of these kinds of signals can be used to control behavior, like eye movements, but it is trickier to envision how one type of signal might directly influence the other." 
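
The contrast Groh draws – which neurons fire versus how strongly they fire – corresponds to what computational neuroscientists call a place code and a rate code. A toy sketch of the two read-out schemes (the tuning curves and gains are invented for illustration, not fitted to the study's data):

```python
import math

locations = [-24, -12, 0, 12, 24]  # preferred azimuths, in degrees

def place_code(stimulus_deg):
    """Place code: each neuron prefers one location; the most active wins."""
    rates = [math.exp(-((stimulus_deg - p) ** 2) / 50) for p in locations]
    return locations[rates.index(max(rates))]

def rate_code(stimulus_deg, gain=0.5, base=10.0):
    """Rate code: one population whose overall firing scales with azimuth."""
    return base + gain * stimulus_deg

def decode_rate(rate, gain=0.5, base=10.0):
    """Invert the rate code to recover the stimulus azimuth."""
    return (rate - base) / gain
```

In the place code, identity carries the information (which neuron is "on"); in the rate code, intensity does (how strongly the population is "on") – the two strategies the superior colliculus appears to run side by side.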

The study involved assessing the responses of neurons, located in the rostral superior colliculus of the midbrain, as two rhesus monkeys moved their eyes to visual and auditory targets.

The sensory targets — light-emitting diodes attached to the front of nine speakers — were placed 58 inches in front of the animals. The speakers were located from 24 degrees left to 24 degrees right of the monkey in 6-degree increments.  
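
As a quick sanity check of the geometry described above: spanning 24 degrees left to 24 degrees right in 6-degree increments gives nine speaker positions, and at 58 inches the outermost speakers sit roughly 26 inches off the midline:

```python
import math

# Nine azimuths from -24 degrees (left) to +24 degrees (right) in 6-degree steps.
azimuths = list(range(-24, 25, 6))

distance_in = 58.0  # speaker distance from the animal, in inches
# Lateral offset of the outermost speaker from the midline:
offset = distance_in * math.tan(math.radians(max(azimuths)))
# roughly 25.8 inches
```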

The researchers then measured the monkey’s responses to bursts of white noise and the illuminating of the lights.

Groh said how the brain takes raw input of one form and converts it into something else “may be broadly useful for more cognitive processes.”

"As we develop a better understanding of how those computations unfold it may help us understand a little bit more about how we think," she said.

Filed under superior colliculus neurons spatial coding psychology neuroscience science

Global first: easing cannabis withdrawal

A world-first study led by the National Cannabis Prevention and Information Centre (NCPIC) at UNSW has delivered a breakthrough for dependent cannabis users: a cannabis-based medication, Sativex (nabiximols), has been shown to provide significant relief from withdrawal symptoms.


“One in ten people who try cannabis go on to become dependent. As cannabis use increases around the world and more people seek treatment to help them quit, it is surprising there is no approved medication to alleviate symptoms of withdrawal. The success of this study offers considerable hope for those struggling to get through a cannabis withdrawal and remain abstinent into the future,” said Professor Jan Copeland, Director of NCPIC and Chief Investigator of the study.

“One of the greatest barriers to quitting cannabis is withdrawal and while symptoms aren’t life-threatening, they are of a severity level that causes marked distress. For many people, symptoms including irritability, depression, cannabis cravings and sleep problems, can overcome their strong desire to quit and they find themselves using again.”

The study was conducted at inpatient services of South Eastern Sydney and Hunter New England Local Health Districts.

Associate Professor Nicolas Lintzeris, Director of Drug and Alcohol Services at South Eastern Sydney Local Health District and a trial investigator said: “The study found patients treated with Sativex stayed in treatment longer, and experienced a shorter and milder withdrawal than patients receiving placebo.”

Administered as an oral spray, Sativex is licensed in Australia only for the treatment of spasticity and pain in Multiple Sclerosis (MS) patients when other medications have failed. The spray contains the cannabis extracts cannabidiol (CBD) and delta-9-tetrahydrocannabinol (THC), the latter being the substance primarily responsible for the psychoactive effects of cannabis.

The lead author of the paper and study investigator Dr David Allsop noted, “While most people who use cannabis do not become dependent, those who use regularly or for an extended period run that risk. Sativex is not licensed or available for treating cannabis users at this time. Our hope is that this study will lead to further research, and possibly approval of the drug for use as a treatment for people experiencing problematic cannabis use.”

The full findings of this study have been published in the international psychiatry journal JAMA Psychiatry.

(Source: newsroom.unsw.edu.au)

Filed under nabiximols cannabis cannabis withdrawal medicine science

189 notes

At arm’s length: Plasticity of depth judgment

We need to reach for things, so a connection between arm length and our ability to judge depth accurately may make sense. Given that we grow throughout childhood, it may also seem reasonable that such an optimal depth perception distance should be flexible enough to change with a lengthening arm. Recent research in the Journal of Neuroscience provides evidence for these ideas with surprising findings: Scientists showed that they could manipulate the distance at which adult volunteers accurately perceived depth, both through sight and touch, by tricking them into thinking they had a longer reach than they really do.

In their research on depth perception, the research team, coordinated by Fulvio Domini, professor of cognitive, linguistic and psychological sciences at Brown University and senior scientist collaborator at the Istituto Italiano di Tecnologia (IIT) in Italy, has found that people have a preferred distance at which they judge depth most accurately. People overestimate depth when objects are closer and underestimate depth when objects are farther away.

“When children start touching and playing with things, they don’t just do it at any distance. They do it at a small range of distances,” Domini said. “Our thought is maybe what the brain does is figure out a metric at that distance and the rest is all heuristic.”

That optimal distance where people are most accurate, it turns out, depends on their mind’s perception of arm length. In the experiments first published Oct. 23 in the journal, lead author Robert Volcic of IIT, Domini, and their co-authors demonstrated the importance of arm length in depth perception by manipulating it.

In experiments conducted at IIT with 41 volunteers, those they “trained” to think their arms were reaching farther than they really were subconsciously accepted that fiction and shifted the distance at which they best judge depth farther away. They also had a finer ability to discriminate between two separate tactile stimuli, in that they could perceive them as distinct with less distance between them than before.

Virtual games, real effects

For their experiments, Volcic and colleagues asked volunteers to engage in three depth perception tasks — two visual and one tactile — both before and after a reach “training” exercise.

All the experiments were done in darkness so that the subjects couldn’t see their actual arms or hands. Instead, one visual task group was presented with a 3-D computer-generated image of three rods in a triangle configuration (like the front three pins in bowling) at various distances away from their eyes. Their task was to use a computer mouse to indicate how far apart the rods appeared to them. Another visual group, this time equipped with motion tracking markers, indicated the spacing of the rods at various distances with their index finger and thumb, like the pinch one does on a smartphone.

The tactile task group was given either a single or a pair of little pokes on the forearm. The pairs of pokes started very close together and slowly moved farther and farther apart in space. The subjects were asked to report when, if ever, they felt two pokes instead of one. In so doing they revealed how far apart the pokes had to be for them to feel distinct.
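The report described above is a standard psychophysical yes/no procedure. A minimal sketch of how a two-point threshold could be read off such reports (all separations and proportions below are hypothetical, not the study's data or analysis):

```python
# Hypothetical poke separations and, for each, the fraction of trials on which
# a subject reported feeling two pokes rather than one.
separations_mm = [5, 10, 15, 20, 25, 30, 35, 40]
p_two = [0.05, 0.10, 0.30, 0.55, 0.80, 0.95, 1.00, 1.00]

def two_point_threshold(separations, proportions, criterion=0.5):
    """Separation at which 'two pokes' is reported `criterion` of the time,
    linearly interpolated between the two bracketing test separations."""
    points = list(zip(separations, proportions))
    for (s0, p0), (s1, p1) in zip(points, points[1:]):
        if p0 < criterion <= p1:
            return s0 + (criterion - p0) * (s1 - s0) / (p1 - p0)
    return None  # criterion never crossed within the tested range

print(two_point_threshold(separations_mm, p_two))  # -> 19.0
```

A drop in this threshold after training is how one would quantify the finer tactile discrimination the retrained subjects showed.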

The training at the intermission of each of these tasks was where the scientists tricked a random subset of the subjects into thinking their reach was longer than it was. With motion capture tags on their arms and fingers, the volunteers reached out for a virtual 3-D cylinder with their right arm. The position of their right index finger relative to the virtual rod was presented to them as a red dot in front of them. Some of the participants were given accurate information about the position of their finger and some were given information that presented their finger as 15 centimeters (about 6 inches) closer to the object than they really were — as if they had longer arms.

After the training, the subjects who were tricked into perceiving longer arms also shifted the distance at which they judged depth best. They also required less distance between pokes on their forearm before they could distinguish them. People whose reach was presented accurately — who were not “retrained” — continued with the same accurate depth perception distance and distance for discriminating the pokes.

Not only did the retrained subjects’ perceptions change, Domini said, but also the precise degree of the changes could be accurately predicted ahead of time by mathematical models that incorporate perceived arm length and depth perception at that distance.

How we perceive

The findings of a role for arm length may help to explain depth perception and the limits of its accuracy, Domini said. In addition, the finding that depth perception can be predictably manipulated by changing perceived arm length could also matter to designers of robotic proxies, exoskeletons, and robotic surgery.

The research also raises a fundamental neuroscience question about how two different senses — vision and touch — are both influenced by perception of the arm.

The researchers conclude, “Even in adulthood sensory systems are not fixed structures with immutable functions. … We have instead found strong sensory plasticity that can be evoked within minutes in adults.”

Filed under depth perception visuomotor adaptation 3D perception neuroscience science
