Neuroscience

Articles and news from the latest research reports.

Posts tagged neuroscience

99 notes

Scientists Find an Unlikely Stress Responder May Protect Against Alzheimer’s

In surprise findings, scientists at The Scripps Research Institute (TSRI) have discovered that a protein prone to forming harmful aggregates when produced in the liver protects against Alzheimer’s disease aggregates when produced in the brain. The results suggest that drugs that boost the protein’s production specifically in neurons could one day help ward off Alzheimer’s disease.

“This result was completely unexpected when we started this research,” said TSRI Professor Joel N. Buxbaum, MD. “But now we realize that it could indicate a new approach for Alzheimer’s prevention and therapy.”

Buxbaum and members of his laboratory report their latest finding in the May 21, 2014 issue of the Journal of Neuroscience.

First Hints

The study centers on transthyretin (TTR), a protein known to function as a transporter, carrying the thyroid hormone thyroxine and vitamin A through the bloodstream and cerebrospinal fluid. To do this job, it must come together in a four-subunit structure called a tetramer. Certain factors, such as old age and TTR gene mutations, can make these tetramers prone to fall apart and misfold into tough aggregates called amyloids. TTR amyloids accumulate in the heart, kidneys, peripheral nerves and other tissues, causing life-shortening diseases including familial amyloid polyneuropathy and senile systemic (cardiac) amyloidosis.

Starting in the mid-1990s, however, reports from several laboratories hinted that TTR in the brain might protect against other amyloids—particularly the Alzheimer’s-associated protein amyloid beta. In test-tube experiments, TTR seemed able to grab hold of amyloid beta and prevent it from aggregating. In transgenic “Alzheimer’s mice,” which overproduce amyloid beta, TTR expression was increased in affected brain tissue compared to control mice, as one would expect from a protective response.

“I didn’t really believe those reports at the time,” Buxbaum said.

But he was working on TTR amyloidoses and had the tools needed to investigate the issue genetically. He and his colleagues at TSRI did those experiments and found, to their surprise, that overproducing TTR in “Alzheimer’s mice” did indeed protect the animals: it reduced their memory deficits as well as the accumulation of amyloid beta aggregates in their brains. Since that 2008 study, Buxbaum and colleagues have gone on to publish additional experiments examining the mechanism of the protection, including two last year, in collaboration with the Wright and Kelly laboratories at TSRI and Roberta Cascella in Florence, that showed how TTR tetramers can bind to amyloid beta and inhibit it from forming the more harmful types of aggregate.

Context Is Everything

In the latest study, Buxbaum and his team, including lead authors Xin Wang and Francesca Cattaneo, at the time both postdoctoral fellows in the Buxbaum laboratory, found another key piece of evidence for TTR’s protective role.

TTR is known to be produced principally in the liver and in the parts of the brain where cerebrospinal fluid is made. Prior studies in the Buxbaum group found evidence that TTR can also be produced in neurons, albeit at low levels. Still, it has remained unclear how TTR production, in neurons or in other cells, would be increased in response to amyloid beta accumulation.

To start, the team analyzed a segment of DNA near the TTR gene called the promoter region, where, in principle, special DNA-binding proteins called transcription factors could increase TTR gene activity. The analysis suggested that Heat Shock Factor 1 (HSF1), known as a master switch for a broad protective response against certain types of cellular stress, could bind to the TTR gene’s promoter.

Further experiments showed that HSF1 does indeed bind to this region and that two known stimulators of HSF1—heat and a compound called celastrol—also boost HSF1 binding to the TTR promoter, in addition to boosting TTR production. Remarkably, though, the researchers found that HSF1’s dialing-up of TTR production seemed to occur only in neuronal-type cells, not in liver cells where most TTR is produced.

In fact, the researchers found that in liver cells the HSF1 response somehow brought about a modest decrease in TTR production. That result may seem puzzling, but it is consistent with the idea that liver-cell TTR, which is produced at 15 to 20 times the levels of neuronal TTR, is more likely to be hazardous than protective.

Using genetic techniques to force cells to overproduce HSF1, the researchers again saw jumps in TTR gene activity and protein production, but only in neuronal cells. In liver cells TTR activity rose when HSF1 was blocked, suggesting that HSF1 normally helps keep a lid on liver TTR production.

“It’s becoming more and more evident in biology that the same molecule can do very different things in different contexts,” Buxbaum said.

To underscore the relevance to Alzheimer’s, his team examined neurons from the hippocampus brain region in ordinary lab mice and in amyloid-beta-overproducing Alzheimer’s mice. Again consistent with the concept of TTR as protective in neurons, they found that the frequency of HSF1 binding to the TTR gene promoter, and the numbers of resulting TTR gene transcripts, were both doubled in the Alzheimer’s mice compared to the ordinary lab mice.

Buxbaum and his colleagues plan to do further research on this apparent TTR-mediated stress response in neurons to determine, among other things, precisely how Alzheimer’s-associated amyloid beta switches it on. But they have already begun to think about developing a small molecule compound, suitable for delivery in a pill, that at least modestly boosts HSF1 activity and/or TTR production in neurons—and thus might prevent or delay Alzheimer’s dementia.

(Source: scripps.edu)

Filed under alzheimer's disease transthyretin thyroxine hippocampus neurons beta amyloid neuroscience science

153 notes

Neuroscience’s grand question
When your car needs a new spark plug, you take it to a shop where it sits, out of commission, until the repair is finished. But what if your car could replace its own spark plug while speeding down the Mass Pike? 
Of course, cars can’t do that, but our nervous system does the equivalent, rebuilding itself continually while maintaining full function. 
Neurons live for many years but their components, the proteins and molecules that make up the cell, are continually being replaced. How this continuous rebuilding takes place without affecting our ability to think, remember, learn or otherwise experience the world is one of neuroscience’s biggest questions.
And it’s one that has long intrigued Eve Marder, the Victor and Gwendolyn Beinfield Professor of Neuroscience. As reported in Neuron on May 21, Marder’s lab has built a new theoretical model to understand how cells monitor and self-regulate their properties in the face of continual turnover of cellular components.
Ion channels, the molecular gates on the surface of cells, determine neuronal properties needed to regulate everything from the size and speed of limb movement to how sensory information is processed. Different combinations of types of ion channels are found in each kind of neuron. Receptors are the molecular ‘microphones’ that enable neurons to communicate with each other.
Receptors and ion channels are constantly turning over, so cells need to regulate the rate at which they are replaced in a way that avoids disrupting normal nervous system function. Scientists have considered the idea of a ‘factory’ or ‘default’ setting for the numbers of ion channels and receptors in each neuron. But this idea seems implausible because there is so much change in a neuron’s environment over the course of its life. 
If there is no factory setting, then neurons need an internal gauge to monitor electrical activity and adjust ion channel expression accordingly, the team asserts. Because a single neuron is always part of a larger circuit, it also needs to do this while maintaining homeostasis across the nervous system.
The Marder lab built a new theoretical model of ion channel regulation based on the concept of an internal monitoring system. The team, comprising postdoctoral fellow Timothy O’Leary, lab technician Alex Williams, Alessio Franci of the University of Liege in Belgium, and Marder, discovered that cells don’t need to measure every detail of activity to keep the system functioning. In fact, too much detail can derail the process.
“Certain target properties can contradict each other,” O’Leary says. “You would not set your air conditioning to 64 degrees and your heat to 77 degrees. One might win over the other but they would be continually fighting each other and you would end up paying a big energy bill.”
The team also learned that cells can have similar properties but different ion channel expression rates — like cellular homophones, they sound alike but look very different. 
The model showed that the very internal monitoring system designed to control runaway electrical activity can actually lead to neuronal hyperexcitability, the basis of seizures. Even if set points are maintained in single neurons, overall homeostasis in the system can be lost. 
The study represents an important advance in understanding the most complex machinery ever built — the human brain. And it may lead to entirely different therapeutic strategies for treating diseases, O’Leary says. “To understand and cure some diseases, we need to pick apart and understand how biological systems control their internal properties when they are in a normal healthy state, and this model could help researchers do that.”
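The feedback loop the model describes can be caricatured in a few lines of code. The sketch below is purely illustrative (the gains, turnover rates, activity readout and variable names are all invented for this example; the published model uses intracellular calcium signals as the activity sensor and is far more detailed): a single error signal, the gap between sensed activity and a target set point, drives the expression of each channel type up or down until the set point is restored.

```python
# Toy sketch of activity-dependent ion-channel regulation.
# All numbers are illustrative, not taken from the paper.

def simulate(target=1.0, steps=2000, dt=0.01):
    g_na, g_k = 0.5, 0.5   # two channel conductances (arbitrary units)
    m_na, m_k = 0.0, 0.0   # mRNA-like expression intermediates
    activity = 0.0
    for _ in range(steps):
        activity = 2.0 * g_na - 1.0 * g_k  # crude internal activity readout
        error = target - activity          # sensed deviation from set point
        # One shared error signal, different gains per channel type; the
        # signs are chosen so both rules push activity back toward target.
        m_na += dt * 0.5 * error
        m_k  += dt * -0.2 * error
        g_na += dt * (m_na - g_na)         # translation minus turnover
        g_k  += dt * (m_k - g_k)
    return activity

print(round(simulate(), 3))  # settles at the target activity, 1.0
```

Because both expression rules read the same single error signal, the cell needs no detailed measurement of every separate property, which is the economy the model highlights; conversely, as in O’Leary’s thermostat analogy, giving two regulation rules contradictory targets would leave them fighting each other indefinitely.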

Filed under neurons ion channels neural activity neuroscience science

294 notes

Researchers find new target for chronic pain treatment
Researchers at the UNC School of Medicine have found a new target for treating chronic pain: an enzyme called PIP5K1C. In a paper published today in the journal Neuron, a team of researchers led by Mark Zylka, PhD, Associate Professor of Cell Biology and Physiology, shows that PIP5K1C controls the activity of cellular receptors that signal pain.
By reducing the level of the enzyme, researchers showed that the levels of a crucial lipid called PIP2 in pain-sensing neurons are also lessened, thus decreasing pain.
They also found a compound that could dampen the activity of PIP5K1C. This compound, currently named UNC3230, could lead to a new kind of pain reliever for the more than 100 million people who suffer from chronic pain in the United States alone.
In particular, the researchers showed that the compound might be able to significantly reduce inflammatory pain, such as arthritis, as well as neuropathic pain – damage to nerve fibers. The latter is common in conditions such as shingles, back pain, or when bodily extremities become numb due to side effects of chemotherapy or diseases such as diabetes.
The creation of such bodily pain might seem simple, but at the cellular level it’s quite complex. When we’re injured, a diverse mixture of chemicals is released, and these chemicals cause pain by acting on an equally diverse group of receptors on the surface of pain-sensing neurons.
“A big problem in our field is that it is impractical to block each of these receptors with a mixture of drugs,” said Zylka, the senior author of the Neuron article and member of the UNC Neuroscience Center. “So we looked for commonalities – things that each of these receptors need in order to send a signal.” Zylka’s team found that the lipid PIP2 (phosphatidylinositol 4,5-bisphosphate) was one of these commonalities.
“So the question became: how do we alter PIP2 levels in the neurons that sense pain?” Zylka said. “If we could lower the level of PIP2, we could get these receptors to signal less effectively. Then, in theory, we could reduce pain.”
Many different kinases can generate PIP2 in the body. Brittany Wright, a graduate student in Zylka’s lab, found that the PIP5K1C kinase was expressed at the highest level in sensory neurons compared to other related kinases. The researchers then used a mouse model to show that PIP5K1C was responsible for generating at least half of all PIP2 in these neurons.
“That told us that a 50 percent reduction in the levels of PIP5K1C was sufficient to reduce PIP2 levels in the tissue we were interested in – where pain-sensing neurons are located,” Zylka said. “That’s what we wanted to do – block signaling at this first relay in the pain pathway.”
Once Zylka and colleagues realized that they could reduce PIP2 in sensory neurons by targeting PIP5K1C, they teamed up with Stephen Frye, PhD, the Director of the Center for Integrative Chemical Biology and Drug Discovery at the UNC Eshelman School of Pharmacy.
They screened about 5,000 small molecules to identify compounds that might block PIP5K1C. There were a number of hits, but UNC3230 was the strongest. It turned out that Zylka, Frye, and their team members had come upon a drug candidate. They realized that the chemical structure of UNC3230 could be manipulated to potentially turn it into an even better inhibitor of PIP5K1C. Experiments to do so are now underway at UNC.

Filed under chronic pain pain PIP5K1C dorsal root ganglia spinal cord neurons neuroscience science

74 notes

Silencers refine sound localization
A new study by LMU researchers shows that sound localization involves a complex interplay between excitatory and inhibitory signals. Pinpointing sound sources in space would be impossible without the tuning effect of the latter.
Did that lion’s growl come from the left or the right? Or are there two of them out there? In the wild, the ability to perceive sound is of little use unless one can also pinpoint, and discriminate between, different sound sources in space. The capacity for sound localization is equally important for spatial orientation and vocal communication in humans. The underlying mechanism is known to depend on the processing of binaural signals in bilateral nerve-centers in the brainstem, where spatial information is extracted from them by neural computations. “Each nerve-cell in the processing center receives not only excitatory but also inhibitory signals,” says LMU neurobiologist Professor Benedikt Grothe. “We have now shown how the intrinsic silencing mechanism works at the cellular level, and why it plays such a crucial role in the localization of sounds.”
Sound localization depends on the fact that the “ipsilateral” ear (the one closer to the sound source) perceives the incoming sound slightly earlier than the “contralateral” ear. Since the difference in reception time may be as brief as a fraction of a millisecond, the neural integration process in the time domain must be extremely precise. It was long thought that the direction of the source was determined solely by measuring the difference in the arrival times of excitatory signals from ipsilateral and contralateral ears. But, as Grothe explains: “Comparison of the excitatory signals alone is not sufficient to permit precise discrimination between impulses that arrive only microseconds apart.”
Inhibition reduces background distortion
Using a highly sophisticated experimental design, Grothe and his team were able to demonstrate that spatial information is distilled from four different inputs, namely pairs of inhibitory and excitatory signals arriving from each ear. Moreover, the researchers were able to elucidate the nature of the processing mechanism with the help of a technique known as dynamic patch clamping. With this method, one can measure electrical signals intracellularly, compute their combined effect in real time, and inject the resulting signal back into the cell. “This permits us to measure and manipulate electric currents within cells. By employing this highly complex approach, we were able to characterize the effects of both inhibitory and excitatory signals at the cellular level, and investigate the impact of their integration on the ability to localize sounds,” Grothe explains.
It turns out that neural inhibition controls and dynamically adjusts the time-point at which a given cell becomes maximally active. Thanks to this fine-tuning mechanism, the difference in arrival times between the right and left signals can be determined more precisely than would otherwise be possible. “This is a very dynamic process, which is utilized with great precision. Above all, it allows for very rapid resetting of the relationship between the magnitudes of excitatory and inhibitory signals, which would not be feasible on the basis of only two signals,” Grothe adds. How the optimal timing offset is chosen remains unclear, but Grothe hopes that future studies will shed light on this phenomenon.
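The microsecond scale Grothe mentions follows from simple geometry: the extra distance sound must travel to reach the farther ear is at most about the width of the head. The numbers below are rough textbook values (speed of sound in air, adult inter-ear distance), not measurements from the study.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature (approximate)
HEAD_WIDTH = 0.2        # m, rough adult inter-ear distance (assumed)

# Largest possible interaural time difference: source directly to one side.
max_itd = HEAD_WIDTH / SPEED_OF_SOUND
print(f"max ITD = {max_itd * 1e6:.0f} microseconds")  # about 583 µs

def itd(angle_deg):
    """Approximate delay (s) for a source angle_deg away from straight
    ahead, using a simple path-length-difference model."""
    return HEAD_WIDTH / SPEED_OF_SOUND * math.sin(math.radians(angle_deg))

print(f"10 degrees off-centre: {itd(10) * 1e6:.0f} microseconds")  # about 101 µs
```

So even a clearly lateralized source yields a delay of only a few hundred microseconds, and discriminating nearby directions means resolving differences far smaller than the roughly one-millisecond duration of a single action potential; hence the value of the inhibitory fine-tuning described above.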

Filed under sound localization binaural processing neurons neural inhibition neuroscience science

199 notes

Receptive to music
Music can be soothing or stirring; it can make us dance or make us sad. Blood pressure, heartbeat, respiration and even body temperature – music affects the body in a variety of ways. It triggers especially powerful physical reactions in pregnant women. Scientists at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig have discovered that pregnant women, compared to their non-pregnant counterparts, rate music as more intensely pleasant and unpleasant, and show greater music-evoked changes in blood pressure. Music appears to have an especially strong influence on pregnant women, a fact that may relate to a prenatal conditioning of the foetus to music.
For their study, the Max Planck researchers played short musical sequences of 10 or 30 seconds’ duration to female volunteers. They changed the passages and played them backwards or incorporated dissonances. By doing so, they distorted the originally lively instrumental pieces and made listening to them less pleasant.
The pregnant women rated the pieces of music slightly differently: they perceived the pleasant music as more pleasant and the unpleasant as more unpleasant. The blood pressure response to music was much stronger in the pregnant group. Forward-played dissonant music produced a particularly pronounced fall in blood pressure, whereas backwards-played dissonant music led to a higher blood pressure after 10 seconds and a lower one after 30 seconds. “Thus, unpleasant music does not cause an across-the-board increase in blood pressure, unlike some other stress factors,” says Tom Fritz of the Max Planck Institute in Leipzig. “Instead, the body’s response is just as dynamic as the music itself.”
According to the results, music is a very special stimulus for pregnant women, to which they react strongly. “Every acoustic manipulation of music affects blood pressure in pregnant women far more intensely than in non-pregnant women,” says Fritz. Why music has such a strong physiological influence on pregnant women is still unknown. Originally, the scientists suspected the hormone oestrogen to play a major part in this process, because it influences the brain’s reward system, which is responsible for the pleasant sensations experienced while listening to music. However, non-pregnant women showed constant physiological responses throughout the contraceptive cycle, despite the fluctuations in oestrogen levels that occur over it. “Either oestrogen levels are generally too low in non-pregnant women, or other physiological changes during pregnancy are responsible for this effect,” explains Fritz.
The researchers suspect that the mothers’ intense physiological responses to music, as observed here, condition foetuses to music perception while still in the womb. From 28 weeks, i.e. at the start of the third trimester of pregnancy, the heart rate of the foetus already changes when it hears a familiar song. From 35 weeks, there is even a change in its movement patterns.

Receptive to music

Music can be soothing or stirring, it can make us dance or make us sad. Blood pressure, heartbeat, respiration and even body temperature – music affects the body in a variety of ways. It triggers especially powerful physical reactions in pregnant women. Scientists at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig have discovered that pregnant women compared to their non-pregnant counterparts rate music as more intensely pleasant and unpleasant, associated with greater changes in blood pressure. Music appears to have an especially strong influence on pregnant women, a fact that may relate to a prenatal conditioning of the fetus to music.

For their study, the Max Planck researchers played short musical sequences of 10 or 30 seconds’ duration to female volunteers. They changed the passages and played them backwards or incorporated dissonances. By doing so, they distorted the originally lively instrumental pieces and made listening to them less pleasant.

The pregnant women rated the pieces of music slightly differently, they perceived the pleasant music as more pleasant and the unpleasant as more unpleasant. The blood pressure response to music was much stronger in the pregnant group. Forward-dissonant music produced a particularly pronounced fall in blood pressure, whereas backwards-dissonant music led to a higher blood pressure after 10 seconds and a lower one after 30 seconds. “Thus, unpleasant music does not cause an across-the-board increase in blood pressure, unlike some other stress factors”, says Tom Fritz of the Max Planck Institute in Leipzig. “Instead, the body’s response is just as dynamic as the music itself.”

According to the results, music is a very special stimulus for pregnant women, one to which they react strongly. “Every acoustic manipulation of music affects blood pressure in pregnant women far more intensely than in non-pregnant women”, says Fritz. Why music has such a strong physiological influence on pregnant women is still unknown. The scientists originally suspected that the hormone oestrogen plays a major part in this process, because it influences the brain’s reward system, which is responsible for the pleasant sensations experienced while listening to music. However, non-pregnant women showed constant physiological responses throughout the menstrual cycle, even though oestrogen levels fluctuate during that time. “Either oestrogen levels are generally too low in non-pregnant women, or other physiological changes during pregnancy are responsible for this effect”, explains Fritz.

Given the mothers’ intense physiological responses to music, the researchers suspect that foetuses are conditioned to music perception while still in the womb. From 28 weeks, i.e. at the start of the third trimester of pregnancy, the heart rate of a foetus already changes when it hears a familiar song. From 35 weeks, there is even a change in its movement patterns.

Filed under music pregnancy blood pressure estrogen reward system neuroscience science

252 notes

Antidepressant use in pregnancy may be associated with structural changes in the infant brain

A new study by University of North Carolina at Chapel Hill researchers found that children of depressed mothers treated with a group of antidepressants called selective serotonin reuptake inhibitors (SSRIs) during pregnancy were more likely to develop Chiari type 1 malformations than were children of mothers with no history of depression.

However, the researchers cautioned, doctors treating pregnant women for depression should not change their prescribing practices based on the results of this study.

“Our results can be interpreted two ways,” said Rebecca Knickmeyer, PhD, assistant professor of psychiatry in the UNC School of Medicine and lead author of the study published May 19 in the journal Neuropsychopharmacology. “Either SSRIs increase risk for Chiari type 1 malformations, or other factors associated with SSRI treatment during pregnancy, such as severity of depression itself, increase risk. Additional research into the effects of depression during pregnancy, with and without antidepressant treatment, is urgently needed.”

A Chiari type 1 malformation is a condition in which brain tissue in the cerebellum (a part of the brain that controls balance, motor systems, and some cognitive functions) extends into the spinal canal. About 5 percent of children have a Chiari type 1 malformation. Most do not have any problems because of it, but some develop symptoms such as headache and balance problems. In severe cases surgery may be necessary.

The study’s results are based on an analysis of magnetic resonance imaging (MRI) brain scans done on four groups of children at UNC Hospitals. Thirty-three children whose mothers were diagnosed with depression and took SSRI antidepressant medications, such as sertraline and fluoxetine, were compared to 66 children whose mothers had no history of depression. In addition, 30 children whose mothers were diagnosed with depression but did not take SSRIs were compared to 60 children whose mothers had no history of depression.

Eighteen percent of the children whose mothers took SSRIs during pregnancy had Chiari type 1 malformations, compared to 3 percent among children whose mothers had no history of depression. The rate of Chiari type 1 malformations was highest in children whose mothers reported a family history of depression in addition to treatment with SSRIs during pregnancy, suggesting an important role for genes as well as environment. Duration of SSRI exposure and SSRI exposure at conception also appeared to increase risk.

“These results raise many interesting questions, and there are many things we still don’t know,” said study co-author Samantha Meltzer-Brody, MD, MPH, associate professor of psychiatry in the UNC School of Medicine and director of UNC’s Perinatal Psychiatry Program. “For example, we do not know how many of these children will go on to develop symptoms of Chiari type 1 malformations. What we do know is that untreated depression can be very harmful for women and their babies, and so we strongly encourage pregnant women who are being treated for depression to continue with their treatment,” she said.

Knickmeyer said that a decision to use antidepressants during pregnancy must be based on the balance between risks and benefits, and that it is critical that health care providers and the public get accurate information on this topic. She also noted that a diagnosis of Chiari type 1 malformation is often delayed due to the non-specific nature of the symptoms. Thus, it may be valuable for families in this situation to know about the results of this study.

In addition, “Chiari type 1 malformations are somewhat common, but very little is known about what causes them,” said study co-author J. Keith Smith, MD, PhD, professor and vice chair of clinical research in UNC’s Department of Radiology. “Studies like this could give us new insight into that question.”

Filed under antidepressants SSRIs chiari I malformations pregnancy depression neuroscience science

124 notes

Cognitive test can differentiate between Alzheimer’s and normal aging

Researchers have developed a new cognitive test that can better determine whether memory impairments are due to very mild Alzheimer’s disease or the normal aging process.

Their study appears in the journal Neuropsychologia.

The Alzheimer’s Association estimates that the number of Americans living with Alzheimer’s disease will increase from 5 million in 2014 to as many as 16 million by 2050. Memory impairments and other early symptoms of Alzheimer’s are often difficult to differentiate from the effects of normal aging, making it hard for doctors to recommend treatment for those affected until the disease has progressed substantially.

Previous studies have shown that a part of the brain called the hippocampus is important to relational memory – the “ability to bind together various items of an event,” said Jim Monti, a University of Illinois postdoctoral research associate who led the work with psychology professor Neal Cohen, who is affiliated with the Beckman Institute at Illinois. Being able to connect a person’s name with his or her face is one example of relational memory. These two pieces of information are stored in different parts of the brain, but the hippocampus “binds” them so that the next time you see that person, you remember his or her name, Monti said.

Previous research has shown that people with Alzheimer’s disease often have impairments in hippocampal function. So the team designed a task that tested participants’ relational memory abilities.

Participants were shown a circle divided into three parts, each having a unique design. Similar to the process of name-and-face binding, the hippocampus works to bind these three pieces of the circle together. After the participants studied a circle, they would pick its exact match from a series of 10 circles, presented one at a time.

People with very mild Alzheimer’s disease did worse overall on the task than those in the healthy aging group, who, in turn, did worse than a group of young adults. The task also revealed an additional memory impairment unique to those with very mild Alzheimer’s disease, indicating that the changes in cognition that result from Alzheimer’s are qualitatively different from those of healthy aging. This unique impairment allowed researchers to differentiate statistically between those who did and did not have Alzheimer’s more accurately than some of the classical tests used for Alzheimer’s diagnosis, Monti said.

“That was illuminating and will serve to inform future work aimed at understanding and detecting the earliest cognitive manifestations of Alzheimer’s disease,” Monti said.

Although this new tool could eventually be used in clinical practice, more studies need to be done to refine the test, he said.

“We’d like to eventually study populations with fewer impairments and bring in neuroimaging techniques to better understand the initial changes in brain and cognition that are due to Alzheimer’s disease,” Monti said.

Filed under aging alzheimer's disease hippocampus psychology neuroscience science

103 notes

Compound Reverses Symptoms of Alzheimer’s Disease in Mice

A molecular compound developed by Saint Louis University scientists restored learning, memory and appropriate behavior in a mouse model of Alzheimer’s disease, according to findings in the May issue of the Journal of Alzheimer’s Disease. The molecule also reduced inflammation in the part of the brain responsible for learning and memory.

The paper, authored by a team of scientists led by Susan Farr, Ph.D., research professor of geriatrics at Saint Louis University, is the second mouse study that supports the potential therapeutic value of an antisense compound in treating Alzheimer’s disease in humans.

"It reversed learning and memory deficits and brain inflammation in mice that are genetically engineered to model Alzheimer’s disease," Farr said. "Our current findings suggest that the compound, which is called antisense oligonucleotide (OL-1), is a potential treatment for Alzheimer’s disease."

Farr cautioned that the experiment was conducted in a mouse model. Like any drug, before an antisense compound could be tested in human clinical trials, toxicity tests need to be completed.

An antisense oligonucleotide is a short strand of nucleotides that binds to messenger RNA, launching a cascade of cellular events that switches off a particular gene.

In this case, OL-1 blocks the translation of RNA, which triggers a process that keeps excess amyloid beta protein from being produced. The specific antisense significantly decreased the overexpression of a substance called amyloid beta protein precursor, which normalized the amount of amyloid beta protein in the body. Excess amyloid beta protein is believed to be partially responsible for the formation of plaque in the brain of patients who have Alzheimer’s disease.

Scientists tested OL-1 in a type of mouse that overexpresses a mutant form of the human amyloid beta precursor gene. Previously they had tested the substance in a mouse model that has a natural mutation causing it to overproduce mouse amyloid beta. Like people who have Alzheimer’s disease, both types of mice have age-related impairments in learning and memory, elevated levels of amyloid beta protein that stay in the brain and increased inflammation and oxidative damage to the hippocampus — the part of the brain responsible for learning and memory.

"To be effective in humans, OL-1 would need to be effective at suppressing production of human amyloid beta protein," Farr said.

Scientists compared the mice that were genetically engineered to overproduce human amyloid beta protein with a wild strain, which served as the control. All of the wild strain received random antisense, while about half of the genetically engineered mice received random antisense and half received OL-1.

The mice were given a series of tests designed to measure memory, learning and appropriate behavior, such as going through a maze, exploring an unfamiliar location and recognizing an object.

Scientists found that learning and memory improved in the genetically engineered mice that received OL-1 compared to the genetically engineered mice that received random antisense. Learning and memory were the same among genetically engineered mice that received OL-1 and wild mice that received random antisense.

They also tested the effect of administering the drug through the central nervous system, so that it crossed the blood-brain barrier and entered the brain directly, versus giving it through a vein in the tail, so that it circulated through the bloodstream. They found that where the drug was injected had little effect on learning and memory.

"Our findings reinforced the importance of amyloid beta protein in the Alzheimer’s disease process. They suggest that an antisense that targets the precursor to amyloid beta protein is a potential therapy to explore for reversing symptoms of Alzheimer’s disease," Farr said.

(Source: slu.edu)

Filed under alzheimer's disease antisense oligonucleotide memory inflammation oxidative stress neuroscience science

177 notes

Altruism/egoism: a question of points of view

Different brain structures are at the basis of these behaviours

Sociality, cooperation and “prosocial” behaviours are the foundation of human society (and of the extraordinary development of our brain) and yet, taken individually, people often show huge variation in terms of altruism/egoism, both among individuals and in the same individual at different moments in time. What causes these differences in behaviour? An answer may be found by observing the activity of the brain, as was done by a group of researchers from SISSA in Trieste (in collaboration with the Human-Computer Interaction Lab, HCI lab, of the University of Udine). The brain circuits that are activated suggest that each of the two behaviour types corresponds to a cognitive analysis that emphasizes different aspects of the same situation.

It depends on how we experience the situation, or rather, on how our brain decides to experience it: when in a situation of need, will we adopt an altruistic behaviour, at the cost of putting our lives at risk, or will we behave selfishly? People make extremely variable decisions in such cases: some have a tendency to be always altruistic or always selfish, and some change their behaviour depending on the situation. What happens in a person’s mind when he/she decides to adopt one style rather than the other? This is the question that Giorgia Silani, a neuroscientist at SISSA, and colleagues addressed in a study just published in NeuroImage: “Even though prosocial behaviours are crucial to human society, and most probably helped to mould our cognitive system, we don’t always behave altruistically,” explains Silani. “We wanted to see what changes occur in our brain between one type of behaviour and the other”.

Silani and colleagues used a brain imaging technique that allows investigators to isolate the most active brain structures during a task. “In our experiments the participants were immersed in a virtual reality scenario in which they had to decide whether to help someone, and potentially put their own lives in danger, or save themselves without considering the other person,” explains Silani. One innovative feature of the study is in fact the possibility of creating “ecological” experimental conditions, that is, conditions as close as possible to a real situation.

“Traditionally, studies in this field used “games” in which participants had to allocate monetary gains, but many researchers including ourselves believe that these conditions are too artificial and tell us very little about altruism and egoism in daily life. However, obvious ethical constraints make it impossible to design realistic field experiments. Virtual reality has proved to be a good compromise that preserves the authenticity of the situation without putting anyone in danger”.

Silani and colleagues were able to see that significantly different brain circuits are activated in the subjects’ brains during the two types of behaviour (selfish versus altruistic). In the selfish case, the most active area was the “salience network” (anterior insula and anterior cingulate cortex), whereas the structures most intensely involved in altruistic behaviour were the prefrontal cortex and the temporo-parietal junction.

“The salience network, which serves to increase the “conspicuity” of stimuli for the cognitive system, could make the dangers of the situation more apparent to the subject, leading the individual to behave in a selfish manner. Conversely, the areas that are most active when a subject decides to behave altruistically are the ones that the scientific literature commonly associates with the ability to take another person’s point of view, which would therefore make the subject more empathic and willing to act for the benefit of others”.

“Ours is the first study to measure neurophysiological data during decision-making in life-threatening situations,” concludes Silani. In addition to Silani, who coordinated the study, the SISSA team also includes Marco Zanon, first author, and Giovanni Novembre; the HCI Lab investigators are Nicola Zangrando and Luca Chittaro.

Filed under prosocial behavior brain activity virtual reality salience network prefrontal cortex neuroscience science

122 notes

Alpha waves organize a to-do list for the brain

In his search to understand the role and function of brain waves, neuroscientist Ole Jensen (Radboud University) postulates a new theory on how the alpha wave controls attention to visual signals. His theory is published in Trends in Neurosciences on May 20. Alpha waves appear to be even more active and important than previously thought.

Our brain cells ‘spark’ all the time. From this electrical activity, brain waves emerge: oscillations in different frequency bands. Just as a radio station uses different frequencies to carry specific information far from the emitting source, so does the brain. And just as radio listeners with a certain musical preference tune in to the frequency that carries the music they prefer, brain areas tune into the wavelength relevant to their functioning.

Alpha waves aren’t boring
Ole Jensen, professor of Neuronal Oscillations at Radboud University’s Donders Institute for Brain, Cognition and Behaviour, tries to figure out in detail how this network of sending and receiving information through oscillations works. Earlier, he discovered a novel role for the alpha wave, which was long thought to be a boring wave that emerges when the brain runs idle and a person is dozing off. Jensen shifted this interpretation by showing the importance of the alpha frequency: it helps shut down brain areas that are irrelevant to a given task, letting us concentrate on what is really important at that moment.

To-do list
In the Trends in Neurosciences paper that appeared today, Jensen postulates a new theory for how this actually works during a visual task. ‘We think that different phases of the alpha wave encode different parts of a visual scene. This helps break the visual information down into small jobs and then perform those jobs in a specific order. A to-do list for your visual attention system: focus on the face, focus on the hand, focus on the glass, look around. And then all over again.’

Jensen is now planning to test this new interpretation of the alpha wave in both animals and humans.

(Source: ru.nl)

Filed under brainwaves alpha oscillations visual attention visual processing neuroscience science
