Neuroscience

Articles and news from the latest research reports.

Brain interactions differ between religious and non-religious subjects

An Auburn University researcher teamed up with the National Institutes of Health to study how brain networks shape an individual’s religious belief, finding that brain interactions were different between religious and non-religious subjects.

Gopikrishna Deshpande, an assistant professor in the Department of Electrical and Computer Engineering in Auburn’s Samuel Ginn College of Engineering, and the NIH researchers recently published their results in the journal Brain Connectivity.

The group found differences in brain interactions involving the theory of mind, or ToM, brain network, which underlies the ability to relate one’s own beliefs, intents and desires to those of others. Individuals with stronger ToM activity were found to be more religious. Deshpande says this supports the hypothesis that the development of ToM abilities during human evolution may have given rise to religion in human societies.

“Religious belief is a unique human attribute observed across different cultures in the world, even in those cultures which evolved independently, such as Mayans in Central America and aboriginals in Australia,” said Deshpande, who is also a researcher at Auburn’s Magnetic Resonance Imaging Research Center. “This has led scientists to speculate that there must be a biological basis for the evolution of religion in human societies.”

Deshpande and the NIH scientists were following up a study reported in the Proceedings of the National Academy of Sciences, which used functional magnetic resonance imaging, or fMRI, to scan the brains of both self-declared religious and non-religious individuals as they contemplated three psychological dimensions of religious beliefs.

The fMRI – which allows researchers to infer specific brain regions and networks that become active when a person performs a certain mental or physical task – showed that different brain networks were activated by the three psychological dimensions; however, the amount of activation was not different in religious as compared to non-religious subjects.

(Source: wireeagle.auburn.edu)

Filed under religious belief theory of mind neuroimaging religion psychology neuroscience science

Scientists discover two proteins that control chandelier cell architecture

Chandelier cells are neurons that use their unique shape to act like master circuit breakers in the brain’s cerebral cortex. These cells have dozens, often hundreds, of branching axonal projections – output channels from the cell body of the neuron – that lend the full structure a chandelier-like appearance. Each of those projections extends to a nearby excitatory neuron. This unique structure allows just one inhibitory chandelier cell to block or modify the output of hundreds of other cells at once.

Without such large-scale inhibition, some circuits in the brain would seize up, as occurs in epilepsy. Abnormal chandelier cell function also has been implicated in schizophrenia. Yet after nearly 40 years of research, little is known about how these important inhibitory neurons develop and function.

In work published today in Cell Reports, a team led by CSHL Professor Linda Van Aelst identifies two proteins that control the structure of chandelier cells, and offers insight into how they are regulated.

To study the architecture of chandelier cells, Van Aelst and colleagues first had to find a way to visualize them. Generally, scientists try to find a unique marker, a sort of molecular signature, to distinguish one type of neuron from the many others in the brain. But no markers are known for chandelier cells. So Van Aelst and Yilin Tai, Ph.D., lead author on the study, developed a way to label chandelier cells within the mouse brain.

Using this new method, the team found two proteins, DOCK7 and ErbB4, whose activity is essential in processes that give chandelier cells their striking shape. When the function of these proteins is disrupted, chandelier cells have fewer and more disorganized axonal projections. Van Aelst and colleagues used a series of biochemical experiments to explore the relationship between the two proteins. They found that DOCK7 activates ErbB4 through a previously unknown mechanism; this activation must occur if chandelier cells are to develop their characteristic architecture.

Moving forward, Van Aelst says she is interested in exploring the relationship between structure and function of chandelier cells. “We envisage that morphological changes are likely to impact the function of chandelier cells, and consequently, alter the activity of cortical networks. We believe irregularities in these networks contribute to the cognitive abnormalities characteristic of schizophrenia and epilepsy. As we move forward, therefore, we hope that our findings will improve our understanding of these devastating neurological disorders.”

Filed under chandelier cells cerebral cortex neurons proteins DOCK7 ErbB4 neuroscience science

How Vision Captures Sound Now Somewhat Uncertain

When listening to someone speak, we also rely on lip-reading and gestures to help us understand what the person is saying.

To link these sights and sounds, the brain has to know where each stimulus is located so it can coordinate processing of related visual and auditory aspects of the scene. That’s how we can single out a conversation when it’s one of many going on in a room.

While past research has shown that the brain creates a similar code for vision and hearing to integrate this information, Duke University researchers have found the opposite: neurons in a particular brain region respond differently, not similarly, depending on whether the stimulus is visual or auditory.

The finding, published Jan. 15 in the journal PLOS ONE, provides insight into how vision captures the location of perceived sound.

The idea among brain researchers has been that the neurons in a brain area known as the superior colliculus employ a “zone defense” when signaling where stimuli are located. That is, each neuron monitors a particular region of an external scene and responds whenever a stimulus — either visual or auditory — appears in that location. Through teamwork, the ensemble of neurons provides coverage of the entire scene.

But the study by Duke researchers found that auditory neurons don’t behave that way. When the target was a sound, the neurons responded as if playing a game of tug-of-war, said lead author Jennifer Groh, a professor of psychology and neuroscience at Duke.   

"The neurons responded to nearly all sound locations. But how vigorously they responded depended on where the sound was," Groh said. "It’s still teamwork, but a different kind. It’s pretty cool that the neurons can use two different strategies, play two different games, at the same time."

Groh said the finding opens up a mystery: if neurons respond differently to visual and auditory stimuli at similar locations in space, then the underlying mechanism of how vision captures sound is now somewhat uncertain.

"Which neurons are ‘on’ tells you where a visual stimulus is located, but how strongly they’re ‘on’ tells you where an auditory stimulus is located," said Groh, who conducted the study with co-author Jung Ah Lee, a postdoctoral fellow at Duke.

"Both of these kinds of signals can be used to control behavior, like eye movements, but it is trickier to envision how one type of signal might directly influence the other." 

The study involved assessing the responses of neurons, located in the rostral superior colliculus of the midbrain, as two rhesus monkeys moved their eyes to visual and auditory targets.

The sensory targets — light-emitting diodes attached to the front of nine speakers — were placed 58 inches in front of the animals. The speakers were located from 24 degrees left to 24 degrees right of the monkey in 6-degree increments.  

The researchers then measured the monkeys’ responses to bursts of white noise and the illumination of the lights.

Groh said how the brain takes raw input of one form and converts it into something else “may be broadly useful for more cognitive processes.”

"As we develop a better understanding of how those computations unfold it may help us understand a little bit more about how we think," she said.

Filed under superior colliculus neurons spatial coding psychology neuroscience science

At arm’s length: Plasticity of depth judgment

We need to reach for things, so a connection between arm length and our ability to judge depth accurately may make sense. Given that we grow throughout childhood, it may also seem reasonable that such an optimal depth perception distance should be flexible enough to change with a lengthening arm. Recent research in the Journal of Neuroscience provides evidence for these ideas with surprising findings: Scientists showed that they could manipulate the distance at which adult volunteers accurately perceived depth, both through sight and touch, by tricking them into thinking they had a longer reach than they really did.

In their research on depth perception, the research team, coordinated by Fulvio Domini, professor of cognitive, linguistic and psychological sciences at Brown University and senior scientist collaborator at the Istituto Italiano di Tecnologia (IIT) in Italy, has found that people have a preferred distance at which they judge depth most accurately. People overestimate depth when objects are closer and underestimate depth when objects are farther away.

“When children start touching and playing with things, they don’t just do it at any distance. They do it at a small range of distances,” Domini said. “Our thought is maybe what the brain does is figure out a metric at that distance and the rest is all heuristic.”

That optimal distance where people are most accurate, it turns out, depends on their mind’s perception of arm length. In the experiments first published Oct. 23 in the journal, lead author Robert Volcic of IIT, Domini, and their co-authors demonstrated the importance in depth perception of arm length by manipulating it.

In experiments conducted at IIT with 41 volunteers, those who were “trained” to think their arms were reaching farther than they really were subconsciously accepted that fiction and shifted the distance at which they best judged depth farther away. They also had a finer ability to discriminate between two separate tactile stimuli, in that they could perceive them as distinct with less distance between them than before.

Virtual games, real effects

For their experiments, Volcic and colleagues asked volunteers to engage in three depth perception tasks — two visual and one tactile — both before and after a reach “training” exercise.

All the experiments were done in darkness so that the subjects couldn’t see their actual arms or hands. Instead, one visual task group was presented with a 3-D computer-generated image of three rods in a triangle configuration (like the front three pins in bowling) at various distances away from their eyes. Their task was to use a computer mouse to indicate how far apart the rods appeared to them. Another visual group, this time equipped with motion tracking markers, indicated the spacing of the rods at various distances with their index finger and thumb, like the pinch one does on a smartphone.

The tactile task group was given either a single or a pair of little pokes on the forearm. The pairs of pokes started very close together and slowly moved farther and farther apart in space. The subjects were asked to report when, if ever, they felt two pokes instead of one. In so doing they revealed how far apart the pokes had to be for them to feel distinct.

The training at the intermission of each of these tasks was where the scientists tricked a random subset of the subjects into thinking their reach was longer than it was. With motion capture tags on their arms and fingers, the volunteers reached out for a virtual 3-D cylinder with their right arm. The position of their right index finger relative to the virtual rod was presented to them as a red dot in front of them. Some of the participants were given accurate information about the position of their finger and some were given information that presented their finger as 15 centimeters (about 6 inches) closer to the object than it really was — as if they had longer arms.

After the training, the subjects who were tricked into perceiving longer arms also shifted the distance at which they judged depth best. They also required less distance between pokes on their forearm before they could distinguish them. People whose reach was presented accurately — who were not “retrained” — continued with the same accurate depth perception distance and distance for discriminating the pokes.

Not only did the retrained subjects’ perceptions change, Domini said, but also the precise degree of the changes could be accurately predicted ahead of time by mathematical models that incorporate perceived arm length and depth perception at that distance.
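One toy model consistent with the qualitative pattern described here can make the logic concrete. This is an assumption for illustration only, not the authors’ actual equations: it rescales judged depth by the ratio of perceived reach to viewing distance, so judgment is veridical at the perceived reach, overestimated for nearer objects, and underestimated for farther ones.

```python
def perceived_depth(true_depth_cm, viewing_dist_cm, perceived_reach_cm):
    """Toy model: depth is judged accurately at the observer's perceived
    reach, overestimated nearer than it, underestimated farther away."""
    return true_depth_cm * (perceived_reach_cm / viewing_dist_cm)

# Accurate at the preferred distance (= perceived reach):
assert perceived_depth(10, 50, 50) == 10
# Nearer than reach -> overestimate; farther -> underestimate:
assert perceived_depth(10, 30, 50) > 10
assert perceived_depth(10, 80, 50) < 10
# "Training" that adds 15 cm of perceived reach shifts the distance
# of accurate judgment outward, as the retrained subjects showed:
assert perceived_depth(10, 65, 50 + 15) == 10
```

In a model of this shape, the distance of accurate depth judgment moves in lockstep with perceived arm length, which is the kind of quantitative prediction the study's models were able to test.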

How we perceive

The findings of a role for arm length may help to explain depth perception and the limits of its accuracy, Domini said. In addition, the finding that depth perception can be predictably manipulated by changing perceived arm length could also matter to designers of robotic proxies, exoskeletons, and robotic surgery.

The research also raises a fundamental neuroscience question about how two different senses — vision and touch — are both influenced by perception of the arm.

The researchers conclude, “Even in adulthood sensory systems are not fixed structures with immutable functions. … We have instead found strong sensory plasticity that can be evoked within minutes in adults.”

Filed under depth perception visuomotor adaptation 3D perception neuroscience science

Study reveals how ecstasy acts on the brain and hints at therapeutic uses

Brain imaging experiments have revealed for the first time how ecstasy produces feelings of euphoria in users.

Results of the study at Imperial College London, parts of which were televised in Drugs Live on Channel 4 in 2012, have now been published in the journal Biological Psychiatry.

The findings hint at ways that ecstasy, or MDMA, might be useful in the treatment of anxiety and post-traumatic stress disorder (PTSD).

MDMA has been a popular recreational drug since the 1980s, but there has been little research on which areas of the brain it affects. The new study is the first to use functional magnetic resonance imaging (fMRI) on resting subjects under its influence.

Twenty-five volunteers underwent brain scans on two occasions, one after taking the drug and one after taking a placebo, without knowing which they had been given.

The results show that MDMA decreases activity in the limbic system – a set of structures involved in emotional responses. These effects were stronger in subjects who reported stronger subjective experiences, suggesting that they are related.

Communication between the medial temporal lobe and the medial prefrontal cortex, a region involved in emotional control, was reduced. This effect, and the drop in activity in the limbic system, are opposite to patterns seen in patients who suffer from anxiety.

MDMA also increased communication between the amygdala and the hippocampus. Studies on patients with PTSD have found a reduction in communication between these areas.

The project was led by David Nutt, the Edmond J. Safra Professor of Neuropsychopharmacology at Imperial College London, and Professor Val Curran at UCL.

Dr Robin Carhart-Harris from the Department of Medicine at Imperial, who performed the research, said: “We found that MDMA caused reduced blood flow in regions of the brain linked to emotion and memory. These effects may be related to the feelings of euphoria that people experience on the drug.”

Professor Nutt added: “The findings suggest possible clinical uses of MDMA in treating anxiety and PTSD, but we need to be careful about drawing too many conclusions from a study in healthy volunteers. We would have to do studies in patients to see if we find the same effects.”

MDMA has been investigated as an adjunct to psychotherapy in the treatment of PTSD, with a recent pilot study in the US reporting positive preliminary results.

As part of the Imperial study, the volunteers were asked to recall their favourite and worst memories while inside the scanner. They rated their favourite memories as more vivid, emotionally intense and positive after MDMA than placebo, and they rated their worst memories less negatively. This was reflected in the way that parts of the brain were activated more or less strongly under MDMA. These results were published in the International Journal of Neuropsychopharmacology.

Dr Carhart-Harris said: “In healthy volunteers, MDMA seems to lessen the impact of painful memories. This fits with the idea that it could help patients with PTSD revisit their traumatic experiences in psychotherapy without being overwhelmed by negative emotions, but we need to do studies in PTSD patients to see if the drug affects them in the same way.”

Filed under ecstasy MDMA limbic system prefrontal cortex temporal lobe anxiety amygdala neuroscience science

258 notes

[Figure 1: Synaptic signaling occurs when neurotransmitter molecules (glutamate) released by the presynaptic neuron travel through the synaptic cleft to activate glutamate receptors, including NMDA receptors, on the postsynaptic neuron. Image courtesy of the National Institute on Aging]

Amplifying communication between neurons

Neurons send signals to each other across small junctions called synapses. Some of these signals involve the flow of potassium, calcium and sodium ions through channel proteins that are embedded within the membranes of neurons. However, it was unclear whether the flow of potassium ions into the synaptic cleft had a physiological purpose. An international team of researchers including Alexey Semyanov from the RIKEN Brain Science Institute has now revealed that potassium ions that leak out of channel proteins and spill into the synapse augment synaptic signaling between neurons, potentially serving as a reinforcement mechanism in learning and memory.

Synaptic communication between neurons begins when calcium ions enter the axon terminal of one neuron—the presynaptic neuron—causing the release of neurotransmitter molecules, such as glutamate, which travel across the synaptic cleft and bind to receptor proteins on the surface of the receiving or postsynaptic neuron (Fig. 1). When the glutamate binds to a receptor known as the NMDA receptor, a channel in the receptor protein opens and calcium flows in, which initiates activation of the postsynaptic neuron.

Semyanov and his colleagues found that the opening of the NMDA receptor channel on the postsynaptic neuron also allows potassium ions to flow out of that neuron and into the synaptic cleft. Blocking the NMDA receptor prevented the rise in potassium ions within the synaptic cleft.

The NMDA receptor is generally blocked by magnesium ions, but these ions can be released from the receptor channel upon repetitive stimulation of the postsynaptic neuron. Through mathematical modeling and subsequent experiments, Semyanov and his colleagues found that potassium levels in the synaptic cleft could increase dramatically on removal of magnesium or during repeated activation of the postsynaptic neuron.
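The paper's actual model is not reproduced here, but the qualitative idea — potassium accumulating in the cleft with repeated activation, and accumulating much faster once the magnesium block is relieved — can be sketched with a toy simulation. All rate constants and concentrations below are illustrative placeholders, not the authors' values:

```python
# Toy model of extracellular potassium accumulation in a synaptic cleft.
# Each postsynaptic activation ejects some K+ through open NMDA receptor
# channels; the cleft clears excess K+ back toward baseline between events.
# Parameters are illustrative placeholders, not the published values.

def simulate_cleft_k(n_events, mg_block, k_per_event=1.0,
                     clearance=0.5, baseline=3.0):
    """Return cleft [K+] (mM) after each of n_events stimulations.

    mg_block:  fraction of NMDA channels blocked by Mg2+ (0 = fully unblocked).
    clearance: fraction of the excess K+ removed between events.
    """
    k = baseline
    trace = []
    for _ in range(n_events):
        k += k_per_event * (1.0 - mg_block)                # efflux via unblocked channels
        k = baseline + (k - baseline) * (1.0 - clearance)  # partial clearance
        trace.append(k)
    return trace

blocked = simulate_cleft_k(10, mg_block=0.9)    # Mg2+ block largely intact
unblocked = simulate_cleft_k(10, mg_block=0.0)  # block relieved by repetitive firing
print(f"steady-state [K+] with Mg block:    {blocked[-1]:.2f} mM")
print(f"steady-state [K+] without Mg block: {unblocked[-1]:.2f} mM")
```

Even in this crude sketch, relieving the magnesium block lets cleft potassium settle at a markedly higher steady-state level, mirroring the reported effect of repetitive stimulation.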

The rise in potassium in the synaptic cleft was shown to increase calcium entry into the presynaptic neuron axon terminal when the postsynaptic neuron was stimulated, and enhanced the probability that the glutamate neurotransmitter would be released from the presynaptic neuron. In this way, repeated activation of a given neuronal network, such as during learning, could augment the strength of communication between neurons, making it more likely that a given stimulus would trigger the activation of postsynaptic neurons.

"New memories are associated with long-term changes in synaptic strength following repetitive activation of the synapse, commonly known as synaptic plasticity," explains Semyanov. "Potassium accumulation and the consequent increase in probability of glutamate release can potentially aid the induction of synaptic plasticity, thus facilitating learning and memory," he says.

Filed under neurons synapses potassium ions learning memory neuroscience science

132 notes

Age no obstacle to nerve cell regeneration

In aging worms at least, it is insulin, not Father Time, that inhibits a motor neuron’s ability to repair itself — a finding that suggests declines in nervous system health may not be inevitable.

All organisms show a declining ability to regenerate damaged nervous systems with age, but the study appearing in the Feb. 5 issue of the journal Neuron suggests this deficit is not due to the ravages of time.

“The nervous system regulates its own response to age, separately from what happens in the rest of the body,” said Marc Hammarlund, assistant professor of genetics and senior author of the new study. “By manipulating the insulin pathway, we can make animals that live longer but have nervous systems that age normally, or conversely, we can make animals that die at a normal age but have a young nervous system.”

Alexandra Byrne, postdoctoral associate in genetics and lead author of the study, identified two genetic pathways that regulate insulin activity and are responsible for age-related declines in a worm’s ability to regenerate neuronal axons, or connective branches. The team pinpointed two other pathways that also regulate a neuron’s ability to regenerate, but that have no connection to the age of the worm.

The worm C. elegans is a well-established model to study the genetics of aging, and manipulation of the family of genes that regulate insulin activity has been shown to dramatically increase lifespan of the organism. The new study reveals that insulin signaling also directly affects the nervous system.

“We hope to understand how different pathways coordinately regulate neuronal aging, and more specifically, how to entice an aged neuron to regenerate after injury,” Byrne said.

“The hope is to increase healthspan, not just lifespan,” Hammarlund said.

Filed under motor neurons C. elegans axon regeneration insulin aging neuroscience science

484 notes

(Image caption: A daydreaming brain: the yellow areas depict the default mode network from three different perspectives; the coloured fibres show the connections amongst each other and with the remainder of the brain.)

Brain on autopilot

The structure of the human brain is complex, reminiscent of a circuit diagram with countless connections. But what role does this architecture play in the functioning of the brain? To answer this question, researchers at the Max Planck Institute for Human Development in Berlin, in cooperation with colleagues at the Free University of Berlin and University Hospital Freiburg, have for the first time analysed 1.6 billion connections within the brain simultaneously. They found the highest agreement between structure and information flow in the “default mode network,” which is responsible for inward-focused thinking such as daydreaming.

Everybody’s been there: You’re sitting at your desk, staring out the window, your thoughts wandering. Instead of getting on with what you’re supposed to be doing, you start mentally planning your next holiday or find yourself lost in a thought or a memory. It’s only later that you realize what has happened: Your brain has simply “changed channels”—and switched to autopilot.

For some time now, experts have been interested in the competition among different networks of the brain, which are able to suppress one another’s activity. If one of these approximately 20 networks is active, the others remain more or less silent. So if you’re thinking about your next holiday, it is almost impossible to follow the content of a text at the same time.

To find out how the anatomical structure of the brain impacts its functional networks, a team of researchers at the Max Planck Institute for Human Development in Berlin, in cooperation with colleagues at the Free University of Berlin and the University Hospital Freiburg, have analysed the connections between a total of 40,000 tiny areas of the brain. Using functional magnetic resonance imaging, they examined a total of 1.6 billion possible anatomical connections between these different regions in 19 participants aged between 21 and 31 years. The research team compared these connections with the brain signals actually generated by the nerve cells.
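The study's own pipeline is not described in detail here, but the core comparison — scoring how well anatomical connectivity predicts functional coupling — can be sketched as a correlation between the off-diagonal entries of a structural connectivity matrix and a functional correlation matrix. The matrices below are synthetic stand-ins (far smaller than the 40,000 regions analysed), so the numbers are purely illustrative:

```python
import numpy as np

# Illustrative structure-function comparison (not the study's pipeline):
# given a structural connectivity matrix (e.g. fibre counts) and a
# functional correlation matrix (from fMRI time series), score their
# agreement as the correlation between corresponding region pairs.

rng = np.random.default_rng(0)
n = 200                                       # tiny stand-in for 40,000 regions
structural = rng.random((n, n))
structural = (structural + structural.T) / 2  # connectivity is symmetric

# Synthetic "functional" matrix that is partly driven by structure plus noise.
functional = 0.7 * structural + 0.3 * rng.random((n, n))
functional = (functional + functional.T) / 2

iu = np.triu_indices(n, k=1)                  # unique region pairs only
agreement = np.corrcoef(structural[iu], functional[iu])[0, 1]
print(f"structure-function agreement: r = {agreement:.2f}")
```

Computing this score separately for the region pairs belonging to each functional network would, in this simplified framing, let one ask which network's activity tracks the underlying anatomy most closely — the comparison in which the default mode network stood out.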

Their results showed the highest agreement between brain structure and brain function in areas forming part of the “default mode network”, which is associated with daydreaming, imagination, and self-referential thought. “In comparison to other networks, the default mode network uses the most direct anatomical connections. We think that neuronal activity is automatically directed to level off at this network whenever there are no external influences on the brain,” says Andreas Horn, lead author of the study and researcher in the Center for Adaptive Rationality at the Max Planck Institute for Human Development in Berlin.

Living up to its name, the default mode network seems to become active in the absence of external influences. In other words, the anatomical structure of the brain seems to have a built-in autopilot setting. It should not, however, be confused with an idle state. On the contrary, daydreaming, imagination, and self-referential thought are complex tasks for the brain.

“Our findings suggest that the structural architecture of the brain ensures that it automatically switches to something useful when it is not being used for other activities,” says Andreas Horn. “But the brain only stays on autopilot until an external stimulus causes activity in another network, putting an end to the daydreaming. A buzzing fly, a loud bang in the distance, or focused concentration on a text, for example.”

The researchers hope that their findings will contribute to a better understanding of brain functioning in healthy people, but also of neurodegenerative disorders such as Alzheimer’s disease and psychiatric conditions such as schizophrenia. In follow-up studies, the research team will compare the brain structures of patients with neurological disorders with those of healthy controls.

Filed under daydreaming default mode network neurodegeneration neuroscience science

449 notes

Erasing traumatic memories

Nearly 8 million Americans suffer from posttraumatic stress disorder (PTSD), a condition marked by severe anxiety stemming from a traumatic event such as a battle or violent attack.

Many patients undergo psychotherapy designed to help them re-experience their traumatic memory in a safe environment so as to help them make sense of the events and overcome their fear. However, such memories can be so entrenched that this therapy doesn’t always work, especially when the traumatic event occurred many years earlier.

MIT neuroscientists have now shown that they can extinguish well-established traumatic memories in mice by giving them a type of drug called an HDAC2 inhibitor, which, under the right conditions, makes the brain’s memories more malleable. Giving this type of drug to human patients receiving psychotherapy may be much more effective than psychotherapy alone, says Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory.

“By inhibiting HDAC2 activity, we can drive dramatic structural changes in the brain. What happens is the brain becomes more plastic, more capable of forming very strong new memories that will override the old fearful memories,” says Tsai, the senior author of a paper describing the findings in the Jan. 16 issue of Cell.

The new study also reveals the molecular mechanism explaining why older memories are harder to extinguish. Lead authors of the paper are former Picower Institute postdoc Johannes Graff and Nadine Joseph, a technical assistant at the Picower Institute.

Genes and memories

Tsai’s lab has previously shown that when memories are formed, neurons’ chromatin — DNA packaged with proteins — undergoes extensive remodeling. These chromatin modifications make it easier to activate the genes necessary to create new memories.

In this study, the researchers focused on chromatin modifications that occur when previously acquired memories are extinguished. To do this, they first trained mice to fear a particular chamber — by administering a mild foot shock — and then tried to recondition the mice so they no longer feared it, which was done by placing the mice in the chamber where they received the shock, without delivering the shock again.

This training proved successful in mice that had experienced the traumatic event only 24 hours before the reconditioning. However, in mice whose memories were 30 days old, it was impossible to eliminate the fearful memory.

The researchers also found that in the brains of mice with 24-hour-old memories, extensive chromatin remodeling occurred during the reconditioning. For several hours after the mice were placed back in the feared chamber, there was a dramatic increase in histone acetylation of memory-related genes, caused by inactivation of the protein HDAC2. That histone acetylation makes genes more accessible, turning on the processes needed to form new memories or overwrite old ones.

In mice with 30-day-old memories, however, there was no change in histone acetylation. This suggests that re-exposure to a fearful memory opens a window of opportunity during which the memory can be altered, but only if the memory has recently been formed, Tsai says.

“If you do something within this window of time, then you have the possibility of modifying the memory or forming a new trace of memory that actually instructs the animal that this is not such a dangerous place,” she says. “However, the older the memory is, the harder it is to really change that memory.”

Based on this finding, the researchers decided to treat mice with 30-day-old memories with an HDAC2 inhibitor shortly after re-exposure to the feared chamber. Following this treatment, the traumatic memories were extinguished just as easily as in the mice with 24-hour-old memories.

The researchers also found that HDAC2 inhibitor treatment turns on a group of key genes known as immediate early genes, which then activate other genes necessary for memory formation. They also saw an increase in the number of connections among neurons in the hippocampus, where memories are formed, and in the strength of communication among these neurons.

“Our experiments really strongly argue that either the old memories are permanently being modified, or a new much more potent memory is formed that completely overwrites the old memory,” Tsai says.

“This could be a very promising way to bring older memories back, process them in the hippocampus, and then extinguish them with the correct paradigm,” says Jelena Radulovic, a professor of psychiatry and behavioral sciences at Northwestern University Feinberg School of Medicine who was not part of the research team.

Treating anxiety

Some HDAC2 inhibitors have been approved to treat cancer, and Tsai says she believes it is worth trying such drugs to treat PTSD. “I hope this will convince people to seriously think about taking this into clinical trials and seeing how well it works,” she says.

Such drugs might also be useful in treating people who suffer from phobias and other anxiety disorders, she adds.

Tsai’s lab is now studying what happens to memory traces when re-exposure to traumatic memories occurs at different times. It is already known that memories are formed in the hippocampus and then transferred to the cortex for longer-term storage. It appears that the HDAC2 inhibitor treatment may somehow restore the memory to the hippocampus so it can be extinguished, Tsai says.

Filed under PTSD anxiety hippocampus HDAC2 memory psychology neuroscience science

212 notes

How metabolism and brain activity are linked

A new study by scientists at McGill University and the University of Zurich shows a direct link between metabolism in brain cells and their ability to signal information. The research may explain why the seizures of many epilepsy patients can be controlled by a specially formulated diet. 

(Image caption: Neurons in the cerebellum. Credit: Bowie Lab/McGill University)

The findings, published Jan. 16 in Nature Communications, reveal that metabolism controls the processes that inhibit brain activity, such as the activity involved in convulsions. The study uncovers a link between how brain cells make energy and how the same cells signal information – processes that neuroscientists have often assumed to be distinct and separate.

“Inhibition in the brain is commonly targeted in clinical practice,” notes Derek Bowie, Canada Research Chair in Receptor Pharmacology at McGill and corresponding author of the study. “For example, drugs that alleviate anxiety, induce anesthesia, or even control epilepsy work by strengthening brain inhibition. These pharmacological approaches can have their drawbacks, since patients often complain of unpleasant side effects.” 

The experiments showed an unexpected link between how the mitochondria of brain cells make energy and how the same cells signal information. Brain cells couple these two independent functions by using small chemical messengers, called reactive oxygen species (or ROS), that are normally associated with signaling cell death. While ROS are known to have roles in diseases of aging, such as Alzheimer’s and Parkinson’s, the new study shows they also play important roles in the healthy brain.  

The findings emerged from an ongoing collaboration between Prof. Bowie’s laboratory in McGill’s Department of Pharmacology and Therapeutics and a research team headed by Dr. Jean-Marc Fritschy, Professor of Pharmacology at the University of Zurich and current director of the Neuroscience Center Zurich (ZNZ). The researchers have the longer-term aim of trying to understand why the seizures of many epilepsy patients — especially young children — can be treated with a high-fat, low-carbohydrate diet known as the ketogenic diet.

The idea that diet can control seizures was noticed as far back as ancient Greece, during periods of fasting. From the 1920s until the 1950s, the ketogenic diet was widely used to treat epilepsy patients. With the introduction of anticonvulsant drugs in the 1950s, the dietary approach fell out of favour with doctors. But because anticonvulsant drugs don’t work for 20% to 30% of patients, there has been a resurgence in use of the ketogenic diet. 

“Since our study shows that brain cells have their own means to strengthen inhibition,” explains Prof Bowie, “our work points to potentially new ways in which to control a number of important neurological conditions including epilepsy.”

(Source: mcgill.ca)

Filed under cerebellum mitochondria metabolism brain cells ketogenic diet epilepsy neuroscience science