Neuroscience

Articles and news from the latest research reports.

175 notes

Old memories recombine to give a taste of the unknown

Ever tried beetroot custard? Probably not, but your brain can imagine how it might taste by reactivating old memories in a new pattern.

image

Helen Barron and her colleagues at University College London and Oxford University wondered if our brains combine existing memories to help us decide whether to try something new.

So the team used an fMRI scanner to look at the brains of 19 volunteers who were asked to remember specific foods they had tried.

Each volunteer was then given a menu of 13 unusual food combinations – including beetroot custard, tea jelly, and coffee yoghurt – and asked to imagine how good or bad they would taste, and whether or not they would eat them.

"Tea jelly was popular," says Barron. "Beetroot custard not so much."

When a volunteer imagined a new combination, their brain showed activity associated with each of the known ingredients at the same time. It is the first evidence to suggest that we use memory combination to make decisions, says Barron.
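
The reported decoding logic — an imagined novel combination elicits the neural patterns of its familiar components simultaneously — can be illustrated with a toy model. The vectors below are invented for illustration; this is not the study's actual fMRI analysis.

```python
# Toy illustration: an "imagined combination" as a superposition of
# component memory patterns, detected via correlation.
# All activity vectors are hypothetical.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    return dot(a, b) / (dot(a, a) ** 0.5 * dot(b, b) ** 0.5)

# Idealised activity patterns for familiar foods.
beetroot = [1.0, 0.0, 0.8, 0.1, 0.0]
custard  = [0.0, 1.0, 0.1, 0.9, 0.0]
tea      = [0.2, 0.1, 0.0, 0.0, 1.0]

# Imagining "beetroot custard": both component patterns active at once.
imagined = [b + c for b, c in zip(beetroot, custard)]

# The imagined pattern resembles both components, but not an unrelated food.
print(cosine(imagined, beetroot))  # high
print(cosine(imagined, custard))   # high
print(cosine(imagined, tea))       # low
```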

(Source: newscientist.com)

Filed under decision making memory medial prefrontal cortex hippocampus neuroscience science

111 notes

Genetic breakthrough another step to understanding schizophrenia

A consortium of scientists from 20 countries, including researchers from The University of Western Australia, has made a major breakthrough in understanding the genetic basis of the debilitating disorder schizophrenia.

More than 175 scientists from 99 institutions across Europe, the United States of America and Australia contributed to a genome-wide association analysis which identified 13 new risk loci for schizophrenia.

In an article published in the journal Nature Genetics, the study authors write that the results provide deeper insight into the genetic architecture of schizophrenia than previously achieved, and provide a pathway to further research.

"For the first time, there is a clear path to increased knowledge of the etiology of schizophrenia through the application of standard, off-the-shelf genomic technologies for elucidating the effects of common variation," the authors wrote.

Schizophrenia is a complex mental disorder that affects about one per cent of people over their lifetime, leading to prolonged or recurrent episodes that severely impair social functioning and quality of life.

In terms of the ‘global burden of disease and disability’ index, developed by the World Health Organization, it ranks among the top 10 disorders, along with cancer, heart disease, diabetes and other non-communicable diseases.

Winthrop Professor Assen Jablensky, director of UWA’s Centre for Clinical Research in Neuropsychiatry (CCRN) at Graylands Hospital, and Professor Luba Kalaydjieva, of the UWA-affiliated Western Australian Institute for Medical Research (WAIMR), led the UWA research team which took part in the study.

Professor Jablensky said that while a strong genetic component in the causation of schizophrenia had been well established, the role of specific genes and the mechanisms of their regulation remained largely unknown.

"Until recently, results of genetic linkage and association studies could explain only a small fraction of the estimated heritability of the disorder and of its ‘genetic architecture’," Professor Jablensky said.

However, recent technological advances, enabling efficient coverage of the entire human genome with millions of single nucleotide polymorphisms (SNPs) as genetic markers, had given rise to a new generation of genome-wide association studies (GWAS), which trace the DNA differences between people affected by the disease and healthy control individuals.

"Since the effects of individual SNPs are quite tiny, their reliable measurement requires very large samples of adequately diagnosed patients and controls," Professor Jablensky said.

"This recent study reports on a major breakthrough in the understanding of the genetic basis of schizophrenia, achieved through meta-analysis of GWAS datasets contributed by a large international Psychiatric Genomics Consortium (PGC), which includes the UWA research team."

A WA case-control sample consisting of 893 schizophrenia patients and healthy controls was part of a collection of 21,246 schizophrenia cases and 38,072 controls from 19 research centres and consortia across Europe, Australia and the USA.
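
The unit of analysis in a case-control GWAS of this kind is a per-SNP comparison of allele frequencies between patients and controls. A minimal sketch of one such test follows; the allele counts are invented for illustration, and real pipelines use specialised tools, genotype quality control, and meta-analysis across cohorts.

```python
# Minimal sketch of a single-SNP case-control association test:
# a Pearson chi-square statistic on a 2x2 table of allele counts.
# Counts below are hypothetical.

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    cells = [
        (a, (a + b) * (a + c) / n),
        (b, (a + b) * (b + d) / n),
        (c, (c + d) * (a + c) / n),
        (d, (c + d) * (b + d) / n),
    ]
    return sum((obs - exp) ** 2 / exp for obs, exp in cells)

# Allele counts at one SNP: risk allele vs other allele,
# in cases (top row) and controls (bottom row).
cases_risk, cases_other = 1300, 700
ctrls_risk, ctrls_other = 1100, 900

stat = chi_square_2x2(cases_risk, cases_other, ctrls_risk, ctrls_other)
print(round(stat, 1))  # → 41.7; large values suggest association
```

Because each individual SNP effect is tiny, real studies need the very large samples described above to push such statistics past genome-wide significance thresholds.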

The study found that a total of 8300 SNPs contribute to the risk for schizophrenia and account for at least 32 per cent of the variance in liability.

"A particularly important result of this study is that many of these SNPs are located on a molecular pathway involved in neuronal calcium signalling, which suggests a novel pathogenetic link in the causation of schizophrenia and possibly other psychotic disorders," Professor Jablensky said.

He said ongoing and future studies by the UWA research team would aim to further refine the genetic analyses of the WA schizophrenia study (which at present includes 1259 persons), and to test neurobiological hypotheses about the treatment responses of genetically defined subsets of patients. 

(Source: news.uwa.edu.au)

Filed under schizophrenia GWAS genetics neuroscience science

57 notes

Capturing brain activity with sculpted light

Researchers in Vienna develop new imaging technique to study the function of entire nervous systems. Scientists at the Campus Vienna Biocenter (Austria) have found a way to overcome some of the limitations of light microscopy. Applying the new technique, they can record the activity of a worm’s brain with high temporal and spatial resolution, ultimately linking brain anatomy to brain function. The journal Nature Methods publishes the details in its current issue.

A major aim of today’s neuroscience is to understand how an organism’s nervous system processes sensory input and generates behavior. To achieve this goal, scientists must obtain detailed maps of how the nerve cells are wired up in the brain, as well as information on how these networks interact in real time.

The organism many neuroscientists turn to in order to study brain function is a tiny, transparent worm found in rotting soil. The simple nematode C. elegans is equipped with just 302 neurons that are connected by roughly 8000 synapses. It is the only animal for which a complete nervous system has been anatomically mapped.

Researchers have so far focused on studying the activity of single neurons and small networks in the worm, but have not been able to establish a functional map of the entire nervous system. This is mainly due to limitations in the imaging techniques they employ: the activity of single cells can be resolved with high precision, but simultaneously monitoring the function of all the neurons that comprise an entire brain has been a major challenge. Thus, there has always been a trade-off between spatial or temporal accuracy and the size of the brain region that could be studied.

Scientists at Vienna’s Research Institute of Molecular Pathology (IMP), the Max Perutz Laboratories (MFPL), and the Research Platform Quantum Phenomena & Nanoscale Biological Systems (QuNaBioS) of the University of Vienna have now closed this gap, developing a high-speed imaging technique with single-neuron resolution that bypasses these limitations. In a paper published online in Nature Methods, the teams of Alipasha Vaziri and Manuel Zimmer describe the technique, which is based on their ability to “sculpt” the three-dimensional distribution of light in the sample. With this new kind of microscopy, they are able to record the activity of 70% of the nerve cells in a worm’s head with high spatial and temporal resolution.

“Previously, we would have to scan the microscope’s focused light through all three dimensions,” says quantum physicist Robert Prevedel. “That takes far too long to record the activity of all neurons at the same time. The trick we invented tinkers with the light waves in a way that allows us to generate ‘discs’ of light in the sample. Therefore, we only have to scan in one dimension to get the information we need. We end up with three-dimensional videos that show the simultaneous activities of a large number of neurons and how they change over time.” Robert Prevedel is a senior postdoc in the lab of Alipasha Vaziri, who is an IMP-MFPL group leader and heads the Research Platform Quantum Phenomena & Nanoscale Biological Systems (QuNaBioS) of the University of Vienna, where the new technique was developed.
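
The speed advantage Prevedel describes, scanning one axis instead of three, can be put in rough numbers. The voxel counts below are hypothetical and serve only to illustrate the scaling.

```python
# Rough illustration of why plane ("disc") illumination beats point
# scanning: a point scanner must visit every voxel, while a plane
# scanner only steps through depth. Voxel counts are hypothetical.

nx, ny, nz = 100, 100, 20   # voxels per axis in the imaged volume

point_scan_steps = nx * ny * nz  # one dwell position per voxel
plane_scan_steps = nz            # one exposure per depth plane

speedup = point_scan_steps / plane_scan_steps
print(point_scan_steps, plane_scan_steps, speedup)  # 200000 20 10000.0
```

For a fixed frame rate of the camera, that factor of nx·ny is what turns an impractically slow volume scan into whole-brain imaging at useful temporal resolution.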

However, the new microscopic method is only half the story. Visualising the neurons requires tagging them with a fluorescent protein that lights up when it binds to calcium, signaling the nerve cells’ activity. “The neurons in a worm’s head are so densely packed that we could not distinguish them on our first images”, explains neurobiologist Tina Schrödel, co-first author of the study. “Our solution was to insert the calcium sensor into the nuclei rather than the entire cells, thereby sharpening the image so we could identify single neurons.” Tina Schrödel is a Doctoral Student in the lab of the IMP Group Leader Manuel Zimmer.

The new technique, which came about through a close collaboration of physicists and neurobiologists, has great potential beyond studies in worms, according to the researchers. It will open the way for experiments that were not possible before. One of the questions to be addressed is how the brain processes sensory information to “plan” specific movements and then execute them. This ambitious project will require further refinement of both the microscopy and the computational methods in order to study freely moving animals. The team in Vienna aims to achieve this goal within the coming two years.

Filed under brain function nerve cells C. elegans nervous system neural activity neuroscience science

144 notes

Do glial connectomes and activity maps make any sense?

"If all you have is a hammer, everything looks like a nail." This so-called "law of the instrument" has shaped neuroscience to its core. It can be rephrased as: if all you have is a fancy voltmeter, everything looks like a transient electrical event. No one in the field understands this better than Douglas Fields, an NIH researcher who has rewritten every neuroscience dogma he has turned his scrupulous eye to. In a paper published yesterday in Nature, Fields questions the conventional wisdom that informs recent efforts to map the brain’s connectivity and, ultimately, its electrical activity. In particular, he questions the value of making detailed maps of neurons while neglecting the more abundant, and equally complex, “maps” that exist for glia.

When first discovered, the “action potential” generated by a neuron was a rich and multiphysical event. It has since degenerated into a sterile, directionally-rectified electrical blip, whose only interesting parameter is a millisecond-scrutinized timestamp. In the last two years alone, Fields has re-generalized the spike. Having highlighted many of the fine scale physical events that accompany a neuron’s firing, like temperature and volume changes, optical effects, displacement, and myriad nonsynaptic effects, Fields demonstrated the intimate knitting of reverse propagating spikes into the behavior and function of neuronal networks. He also showed how spikes directly control non-neuronal events, in particular, myelination.

The Eyewire project at MIT is a fantastic effort to create detailed neuronal maps: it expands neuroscience to the larger community and generates much worthwhile scientific spin-off. It is also completely absurd. To have so much talk about brain maps without drawing a clear distinction between the value of white matter maps and grey matter maps is telling. Maps of the white matter will be indispensable to understanding our own brains. They are highly personal, yet at the same time will be one of the most valuable things we might soon come to share. For the moment, we can liken them to the subway or transportation map of a complex city.

To try to map the grey matter, at least in our foreseeable era, is to attempt to record the comings and goings of all the people entering and exiting the doors of the trains of our subway system. Not only is the task infinitely harder; pound for pound, it is also less valuable, and impermanent. Looked at another way, if we imagine some hyper-detailed ecologist mapping the different trees in a forest, one valuable piece of information to have would be the tree species or type. Their age, size, density and distribution would similarly be worthwhile parameters. Some detail about their finer structure might also predict what kinds of animal species would live and move about their arbors. Eyewire, on the other hand, is mapping every twig down to its finest termination as a leaf. The problem is that leaves are shed and regenerated anew each year, and while Eyewire might map a few neurons in that time, synapses morph to a faster drum.

The point of Fields’ article is that glial trees have exactly the same level of detail and importance as neural trees, yet they are ignored in the aspirations of the connectomists. In fact, if neurons are like deciduous trees, with long, unpredictable, idiosyncratic and intertwined branches, then glial cells, particularly astrocytes, are very much like conifers: they rigidly span nonoverlapping domains in the grey matter, in prototypical, scaffolded form, and with frequently symmetric, repeatable structure. If we accept the results of neuroanatomy at face value here, grey matter might be imagined more like an astrocytic Christmas tree farm superimposed on a neural rainforest. Stepping back, if given a choice between a grey matter connectome and a white matter myelome, the latter is undoubtedly where the focus should be for now.

It may be a misstep in our study of glial cells to narrow-mindedly define for them only what has already been defined for neurons. The literature consists largely of a reattribution to glia of the transmitter and other chemical mechanisms of neurons. The crucial qualifier is that the speed of these processes, with their electricality, directionality and extreme spatial precision, is not a general feature of glial cells. For glial cells, new mechanisms need to be explored, and the most obvious among them, perhaps, is that many of them, particularly the microglia, like to move.

It is increasingly appreciated nowadays that much of the 10 or so watts attributed to the brain for its power budget is purposed for things other than sending spikes and maintaining static electrical potentials. In the home, we can save on energy by dimming the lights, but to really make a dent, we need to turn off the things that move: fans, or the pumps in the HVAC systems. Much of the actual flow and motion inside the cerebral hive is transduced through glial cells. Undoubtedly axons drag diluent down their extent as they transport organelles across improbable expanses, and expel pressurized boluses of irritant (there may in fact be much to be said for an analogy with leaves powering fluid conduction in trees through local evaporation). It is, however, the glial cells that seem to be the heavy lifters where flow is concerned. Transducing hand-picked intracellular flow, and bulk extracellular flow sourced from the vasculature to neurons, they complete the so-called glymphatic circuit.

To be strict, perhaps we need to refigure this estimate of 10 watts, expanding it to include non-chemical sources, like the input of hydraulic power into the brain via the heart. If, for example, the brain consumes 20% of the flow from the heart, it also dissipates around 20% of the 100 or more watts of power generated by the heart. That should in fact be a significant contribution. By some estimates, we may have around 100,000 miles of myelinated axons in our brains, all surrounded by glial cells. Similarly, we may have the same amount, 100,000 miles, of capillary in the brain, all surrounded by astrocytic endfeet. Considering the scale of these numbers, it may be useful to start to look at the brain as more of a fluid-transporting machine, as opposed to mainly an electrical device.
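
The hydraulic bookkeeping above works out as follows. The 20% flow share, the roughly 100 W cardiac figure, and the 10 W metabolic budget are the text's own rough estimates, not measured values.

```python
# Back-of-envelope check of the hydraulic contribution described above,
# using the article's own rough figures (all values are estimates).

cardiac_power_watts = 100.0   # article's rough figure for cardiac output
brain_flow_fraction = 0.20    # share of cardiac output going to the brain
brain_metabolic_watts = 10.0  # conventional brain power-budget estimate

hydraulic_watts = cardiac_power_watts * brain_flow_fraction
print(hydraulic_watts)                          # 20.0
print(hydraulic_watts / brain_metabolic_watts)  # 2.0: same order of magnitude
```

On these numbers the hydraulic input is comparable to, indeed larger than, the conventional 10 W chemical budget, which is the article's point about treating the brain partly as a fluid-transporting machine.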

The evidence is fairly clear that at the sensory and motor levels, spikes conduct much of the information about a stimulus or movement, particularly the short time scale components of that information. In moving more centrally from both sensory and motor ends, spikes tend to unhinge from real world metrics. If we are not careful to consider what neurons might actually be doing at a more global, physiologic level when they generate and propagate spikes, we may find that while we believe we are recording signals, we are actually just recording the noise of the pumps.

Filed under glial cells brain mapping connectome neuroscience science

62 notes

Research yields first detailed view of morphing Parkinson’s protein

Researchers have taken detailed images and measurements of the morphing structure of a brain protein thought to play a role in Parkinson’s disease, information that could aid the development of medications to treat the condition.

The protein, called alpha-synuclein (pronounced sine-yoo-cline), ordinarily exists in a globular shape. However, it can morph into harmful structures known as amyloid fibrils, which are linked to the protein deposits that form in the brains of patients with neurodegenerative diseases.

"The abnormal protein formation characterizes a considerable number of human diseases, such as Alzheimer’s, Parkinson’s and Huntington’s diseases and type II diabetes," said Lia Stanciu, an associate professor of materials engineering at Purdue University.

Until now, the transition from the globular form to fibrils had not been captured and measured.

Researchers incubated the protein in a laboratory and used a technique called cryo-electron microscopy to snap thousands of pictures over 24 hours, capturing its changing shape. Samples were frozen with liquid nitrogen at specific time intervals.

Findings reveal that the protein morphs from its globular shape into “protofibril” strands that assemble into pore-like rings. These rings then open up, forming pairs of protofibrils that assemble into fibrils through hydrogen bonds.

"We found a correlation between protofibrils in these rings and the fibrils, for the first time to our knowledge, by measuring their true sizes and visualizing the aggregation steps," Stanciu said. "A better understanding of the mechanism yields fresh insight into the pathogenesis of amyloid-related diseases and may provide us the opportunity to develop additional therapeutic strategies."

Parkinson’s disease affects 1 percent to 2 percent of people older than 60, and an increase in its prevalence is anticipated in coming decades.

The findings were detailed in a research paper appearing in the June issue of the Biophysical Journal. The paper was authored by doctoral student Hangyu Zhang; former postdoctoral research associate Amy Griggs; Jean-Christophe Rochet, an associate professor of medicinal chemistry and molecular pharmacology; and Stanciu.

The researchers caused the protein to morph into fibrils by exposing it to copper, mimicking what happens when people are exposed to lead and other heavy metals. The contaminants interfere with the protein, changing the oxidation states of ions in its structure.

Reference:

Hangyu Zhang, Amy Griggs, Jean-Christophe Rochet, and Lia A. Stanciu. In Vitro Study of α-Synuclein Protofibrils by Cryo-EM Suggests a Cu2+-Dependent Aggregation Pathway. Biophysical Journal, 2013 (in press)

Filed under parkinson's disease alpha synuclein neurodegenerative diseases protein medicine neuroscience science

956 notes

In long-term relationships, the brain makes trust a habit

After someone betrays you, do you continue to trust the betrayer? Your answer depends on the length of the relationship, according to research by sociologist Karen Cook of Stanford University and her colleagues. The researchers found that those who have been deceived early in a relationship use regions of the brain associated with controlled, careful decision making when deciding if they should continue to trust the person who deceived them. However, those betrayed later in a relationship use areas of the brain associated with automatic, habitual decision making, increasing the likelihood of forgiveness. The study appears in the Proceedings of the National Academy of Sciences.

Cook and her team wanted to understand why some people choose to reconcile after they’ve become victims of betrayal, but others don’t. They hypothesized that if the relationship formed recently, the victim will engage in conscious, deliberate problem solving when deciding how to respond to the deceit. On the other hand, if the relationship has existed for a long time, the victim will take trustworthy behavior for granted and consider a breach of trust an exception to the rule.

To test their hypothesis, the team performed an online experiment, using subjects recruited through an internet survey provider. Each subject received eight dollars and could either keep the money or give it to an unseen partner. If the subject gave the money away, its value would triple. The partner would then decide whether to keep it all or give half back to the subject.
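
The payoff structure of this game – a standard economic "trust game" – can be sketched in a few lines of Python. The function below is an illustrative sketch, not the researchers' code; the name `trust_game_round` is invented for the example.

```python
def trust_game_round(endowment, invest, partner_returns_half):
    """Return (subject_payoff, partner_payoff) for one round.

    endowment: money given to the subject ($8 in the study)
    invest: True if the subject sends the money to the partner
            (its value triples in transit)
    partner_returns_half: True if the partner splits the tripled pot evenly
    """
    if not invest:
        return endowment, 0.0          # subject simply keeps the $8
    pot = endowment * 3.0              # $8 triples to $24
    if partner_returns_half:
        return pot / 2, pot / 2        # both end up with $12
    return 0.0, pot                    # betrayal: partner keeps all $24

# Trusting a fair partner beats keeping the money ($12 > $8),
# but trusting a betrayer leaves the subject with nothing.
print(trust_game_round(8, invest=True, partner_returns_half=True))   # (12.0, 12.0)
print(trust_game_round(8, invest=True, partner_returns_half=False))  # (0.0, 24.0)
```

The sketch makes the stakes concrete: repeated fair play rewards trust, while each betrayal costs the subject the full endowment.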

Unbeknownst to the subject, the partner was really a computer, sometimes programmed to betray the subject early in the game and sometimes programmed to betray the subject later. Cook’s team found that after an early betrayal, the subject would be more likely to keep the money than after a late betrayal.

When the team repeated the experiment in a laboratory, with subjects hooked up to fMRI scanners, the anterior cingulate cortex, associated with conscious learning, planning and problem solving, and the lateral frontal cortex, associated with feelings of uncertainty, became more active after early betrayal. In contrast, the lateral temporal cortex, associated with habituated decision making, became more active after late betrayal.

As with the first experiment, an early betrayal increased the likelihood of the subject holding onto the money in later rounds. Early betrayal also increased the amount of time taken to make a decision, suggesting that victims of early betrayal were putting more conscious thought into their decisions than victims of late betrayal were.

The researchers hope their study will increase understanding of why some victims of deceit continue to forgive those who deceived them.

Filed under decision making trust betrayal frontal cortex psychology neuroscience science

121 notes

Mild B-12 Deficiency May Speed Dementia

Study finds that the vitamin shortage might affect more people than previously thought

Being even mildly deficient in vitamin B-12 may put older adults at a greater risk for accelerated cognitive decline, an observational study from the Jean Mayer USDA Human Nutrition Research Center on Aging at Tufts suggests.

Martha Savaria Morris, an epidemiologist in the Nutrition Epidemiology Program at the HNRCA, and colleagues examined data from 549 men and women enrolled in a cohort of the Framingham Heart Study. The subjects, who had an average age of 75 at the start, were divided into five groups based on their vitamin B-12 blood levels.
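
The five-way split is a standard quintile grouping. As a short illustrative sketch – with made-up blood levels, not the study's data – it can be done with NumPy's percentile and searchsorted functions:

```python
import numpy as np

rng = np.random.default_rng(0)
b12_levels = rng.uniform(100, 800, size=549)   # hypothetical pmol/L values

# Quintile cut points at the 20th, 40th, 60th and 80th percentiles
cuts = np.percentile(b12_levels, [20, 40, 60, 80])

# Group 0 = lowest fifth of B-12 levels ... group 4 = highest fifth
group = np.searchsorted(cuts, b12_levels)

counts = np.bincount(group, minlength=5)
print(counts)  # five roughly equal counts summing to 549
```

The two lowest groups here (0 and 1) correspond to the subjects whose cognitive decline was accelerated in the study.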

Being in the two lowest groups was associated with significantly accelerated cognitive decline, based on scores from dementia screening tests given over eight years.

“Men and women in the second-lowest group did not fare any better in terms of cognitive decline than those with the worst vitamin B-12 blood levels,” Morris says. It is well known that severe B-12 deficiency speeds up dementia, but the finding suggests that even more seniors may be affected.

The study appeared in the Journal of the American Geriatrics Society.

“While we emphasize our study does not show causation, our associations raise the concern that some cognitive decline may be the result of inadequate vitamin B-12 in older adults, for whom maintaining normal blood levels can be a challenge,” says Professor Paul Jacques, the study’s senior author and director of the HNRCA Nutrition Epidemiology Program.

Animal proteins, such as lean meats, poultry and eggs, are good sources of vitamin B-12. Because older adults may have a hard time absorbing vitamin B-12 from food, the USDA's 2010 Dietary Guidelines for Americans recommend that people over age 50 incorporate foods fortified with B-12 or supplements in their diets.

The subjects in this study were mostly Caucasian women who had earned at least a high school diploma. The authors said future research might include more diverse populations and explore whether vitamin B-12 status affects particular cognitive skills.

This article first appeared in the Summer 2013 issue of Tufts Nutrition magazine. 

Filed under vitamin B-12 B-12 deficiency cognitive decline dementia neuroscience science

83 notes

Finally mapped: The brain region that distinguishes bits from bounty

In comparing amounts of things — be it the grains of sand on a beach, or the size of a sea gull flock inhabiting it — humans use a part of the brain that is organized topographically, researchers have finally shown. In other words, the neurons that work to make this “numerosity” assessment are laid out in a shape that allows those most closely related to communicate and interact over the shortest possible distance.

This layout, referred to as a topographical map, is characteristic of all primary senses — sight, hearing, touch, smell and taste — and scientists have long assumed that numerosity, while not a primary sense (but perceived similarly to one), might be characterized by such a map, too.

But they have not been able to find it, which has caused some doubt in the field as to whether a map for numerosity exists.

Now, however, Utrecht University’s Benjamin Harvey and his colleagues have sussed out signals demonstrating that the hypothesized numerosity map is real.

Numerosity, it is important to note, is distinct from symbolic numbers. “We use symbolic numbers to represent numerosity and other aspects of magnitude, but the symbol itself is only a representation,” Harvey said. He went on to explain that numerosity selectivity in the brain is derived from visual processing of image features, whereas symbolic number selectivity is derived from recognizing the shapes of numerals, written words, and linguistic sounds that represent numbers. “This latter task relies on very different parts of the brain that specialize in written and spoken language.”

Understanding whether the brain’s processing of numerosity and symbolic numbers is related, as we might be tempted to think, is just one area that will be better informed by Harvey’s new map.

To uncover it, he and his colleagues asked eight adult study participants to look at patterns of dots that varied in number over time, all the while analysing the neural response properties in a numerosity-linked part of their brain using high-field fMRI (functional magnetic resonance imaging). Use of this advanced neuroimaging method allowed them to scan the subjects for far fewer hours per sitting than would have been required with a less powerful scanning technology.

With the resulting fMRI data, Harvey and his team used population receptive field modelling, which aims to measure neural response as directly and quantitatively as possible. “This was the key to our success,” Harvey said. It allowed the researchers to model the human fMRI response properties they observed on the basis of recordings from macaque neurons, in which numerosity experiments had been conducted more extensively.
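
As a rough illustration of the population receptive field idea, the toy sketch below models one site’s response as Gaussian tuning over log numerosity and recovers the site’s preferred numerosity by grid search. The tuning shape, the fixed width, and all numbers are simplifying assumptions for the example, not the authors’ actual model or data.

```python
import numpy as np

def tuning(n, preferred, width=0.5, amplitude=1.0):
    """Gaussian response profile over log numerosity."""
    return amplitude * np.exp(-0.5 * ((np.log(n) - np.log(preferred)) / width) ** 2)

dots = np.arange(1, 8)                       # dot counts shown to subjects

# Simulate one recording site that prefers ~3 dots, plus measurement noise
rng = np.random.default_rng(1)
response = tuning(dots, preferred=3.0) + rng.normal(0, 0.02, size=dots.size)

# Recover the preferred numerosity by minimizing squared error over a grid
grid = np.linspace(1.0, 7.0, 601)
errors = [np.sum((tuning(dots, p) - response) ** 2) for p in grid]
best = grid[int(np.argmin(errors))]
print(f"recovered preferred numerosity: {best:.2f}")  # close to 3
```

Mapping each cortical location’s recovered preference in this way is what reveals the topographic layout: neighbouring sites turn out to prefer neighbouring quantities.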

Their efforts revealed a topographical layout of numerosity in the human brain; the small quantities of dots the participants observed were encoded by neurons in one part of the brain, and the larger quantities, in another.

This finding demonstrates that topography can emerge not just for lower-level cognitive functions, like the primary senses, but for higher-level cognitive functions, too.

"We are very excited that association cortex can produce emergent topographic structures," Harvey said.

Because scientists know a great deal about topographical maps (and have the tools to probe them), the work of Harvey et al. may help scientists better analyse the neural computation underlying number processing.

"We believe this will lead to a much more complete understanding of humans’ unique numerical and mathematical skills," Harvey said.

Having heard from others in the field about the difficulty associated with the hunt for a topographical map of numerosity, Harvey and colleagues were surprised to obtain the results they did.

They also found the variations between their subjects interesting.

"Every individual brain is a complex and very different system," Harvey explained. "I was very surprised then that the map we report is in such a consistent location between our subjects, and that numerosity preferences always increased in the same direction along the cortex."

"On the other hand," he continued, "the extent of individual differences … is also striking." Harvey explained that understanding the consequences of these differences for their subjects’ perception or task performance will require further study.

(Source: eurekalert.org)

Filed under numerosity parietal cortex topographical map neuroimaging neuroscience science

55 notes

Salk scientists and colleagues discover important mechanism underlying Alzheimer’s disease

Details of destructive neuronal pathway should help improve drug therapies

Alzheimer’s disease affects more than 26 million people worldwide. It is predicted to skyrocket as boomers age—nearly 106 million people are projected to have the disease by 2050. Fortunately, scientists are making progress towards therapies. A collaboration among several research entities, including the Salk Institute and the Sanford-Burnham Medical Research Institute, has defined a key mechanism behind the disease’s progress, giving hope that a newly modified Alzheimer’s drug will be effective.

In a previous study in 2009, Stephen F. Heinemann, a professor in Salk’s Molecular Neurobiology Laboratory, found that a nicotinic receptor called Alpha7 may help trigger Alzheimer’s disease. “Previous studies exposed a possible interaction between Alpha-7 nicotinic receptors (α7Rs) and amyloid beta, the toxic protein found in the disease’s hallmark plaques,” says Gustavo Dziewczapolski, a staff researcher in Heinemann’s lab. “We showed for the first time, in vivo, that the binding of these two proteins, α7Rs and amyloid beta, provokes detrimental effects in mice similar to the symptoms observed in Alzheimer’s disease.”

Their experiments, published in The Journal of Neuroscience with Dziewczapolski as first author, consisted of testing Alzheimer’s-model mice with and without the gene for α7Rs. They found that while both types of mice developed plaques, only the ones with α7Rs showed the impairments associated with Alzheimer’s.

But that still left a key question: Why was the pairing deleterious?

In a recent paper in the Proceedings of the National Academy of Sciences, Heinemann and Dziewczapolski here at Salk with Juan Piña-Crespo, Sara Sanz-Blasco, Stuart A. Lipton of the Sanford-Burnham Medical Research Institute and their collaborators announced they had found the answer in unexpected interactions among neurons and other brain cells.

Neurons communicate by sending electrical and chemical signals to each other across gaps called synapses. The biochemical mix at synapses resembles a major airport on a holiday weekend—it’s crowded, complicated and exquisitely sensitive to increases and decreases in traffic. One of these signaling chemicals is glutamate, an excitatory neurotransmitter, which is essential for learning and storing memories. In the right balance, glutamate is part of the normal functioning of neuronal synapses. But neurons are not the only cells in the brain capable of releasing glutamate. Astrocytes, once thought to be merely cellular glue between neurons, also release this neurotransmitter.

In this new understanding of Alzheimer’s disease, there is a cellular signaling cascade in which amyloid beta stimulates the α7 nicotinic receptors, which trigger astrocytes to release additional glutamate into the synapse, overwhelming it with excitatory (“go”) signals.

This release in turn activates another set of receptors outside of the synapse, called extrasynaptic-N-methyl-D-aspartate receptors (eNMDARs) that depress synaptic activity. Unfortunately, the eNMDARs seem to overly depress synaptic function, leading to the memory loss and confusion associated with Alzheimer’s.

Now that the team has finally determined the steps in this destructive pathway, the good news is that a drug developed in Lipton’s laboratory called NitroMemantine, a modification of the earlier Alzheimer’s medication memantine, may block the eNMDARs’ role in the cascade.

"Thanks to the joint effort of our colleagues and collaborators, we seem to finally have a clear mechanistic link between a key target of the amyloid beta in the brain, the Alpha7 nicotinic receptors, triggering downstream harmful effects associated with the initiation and progression of Alzheimer’s disease," says Dziewczapolski. "This is a clear demonstration of the value of basic biomedical research. Drug development cannot proceed without knowing the details of interactions at the molecular and cellular level. Our research revealed two potential targets, α7Rs and eNMDARs, for future disease-modifying therapeutics, which Dr. Heinemann and I both hope will translate in a better treatment for Alzheimer’s patients."

(Source: salk.edu)

Filed under alzheimer's disease amyloid beta nicotine receptors eNMDARs neuroscience science

51 notes

Shout now! ‒ How Nerve Cells Initiate Voluntary Calls

University of Tübingen neuroscientists show that monkeys can decide to call out or keep silent

“Should I say something or not?” Human beings are not alone in pondering this dilemma – animals also face decisions when they communicate by voice. University of Tübingen neurobiologists Dr. Steffen Hage and Professor Andreas Nieder have now demonstrated that nerve cells in the brain signal the targeted initiation of calls – forming the basis of voluntary vocal expression. Their results are published in “Nature Communications.”

When we speak, we use the sounds we make for a specific purpose – we intentionally say what we think, or consciously withhold information. Animals, however, usually make sounds according to what they feel at that moment. Even our closest relations among the primates make sounds as a reflex based on their mood. Now, Tübingen neuroscientists have shown that rhesus monkeys are able to call (or be silent) on command. They can instrumentalize the sounds they make in a targeted way, an important behavioral ability that humans likewise rely on when putting language to a purpose.

To find out how nerve cells in the brain initiate the production of controlled vocal sounds, the researchers taught rhesus monkeys to call out quickly when a spot appeared on a computer screen. While the monkeys performed the task, measurements taken in their prefrontal cortex revealed astonishing reactions in the cells there. The nerve cells became active whenever a monkey saw the spot of light that was the instruction to call out. But if the monkey simply called out spontaneously, these nerve cells were not activated. The cells therefore did not signal just any vocalisation – only calls that the monkey actively decided to make.

The results published in “Nature Communications” provide valuable insights into the neurobiological foundations of vocalization. “We want to understand the physiological mechanisms in the brain which lead to the voluntary production of calls,” says Dr. Steffen Hage of the Institute for Neurobiology, “because it played a key role in the evolution of the human ability to use speech.” The study offers important indicators of the function of a part of the brain which in humans has developed into one of the central locations for controlling speech. “Disorders in this part of the human brain lead to severe speech disorders or even complete loss of speech in the patient,” Professor Andreas Nieder explains. The results – giving insights into how the production of sound is initiated – may help us better understand speech disorders.

(Source: uni-tuebingen.de)

Filed under speech production vocalizations primates nerve cells Broca's area neuroscience science
